Salary
💰 $141,000 - $229,000 per year
Tech Stack
AWS, Cloud, Distributed Systems, Go, Java, Python, Ruby, Scala
About the role
- Build and optimize scalable backend systems to support AI models and services
- Tackle complex technical problems, analyzing data and evaluating trade-offs to deliver solutions that meet the unique needs of AI development and deployment
- Work closely with other engineering teams, product managers, and stakeholders to ensure that the backend architecture aligns with both technical and business objectives
- Determine methods, procedures, and processes for new assignments, ensuring that best practices are followed and that projects are completed efficiently and effectively
- Continuously optimize the performance, reliability, and scalability of backend systems, ensuring the seamless integration of AI models into production environments
- Contribute to the evolution of AI solutions by applying new technologies and techniques to improve backend processes, drive innovation, and push the boundaries of AI deployment
Requirements
- Extensive experience with server-side programming languages (e.g., Java/Scala, Ruby, Python, Go)
- Proficient in designing, implementing, and maintaining RESTful APIs
- Familiarity with computer science fundamentals such as data structures, distributed systems, concurrency, and threading
- Comfortable navigating ambiguous challenges in a rapidly evolving domain
- Passionate about learning new technologies and staying at the forefront of AI advancements
- Strong sense of ownership and accountability for delivering impactful solutions
- Proven ability to work closely with Product and Design teams to define requirements for new features in a fast-paced iterative environment
- You write code that can be easily understood by others, with an eye towards maintainability
- You hold yourself and others to a high bar when working with production systems
- You value high code quality, automated testing, and other engineering best practices
- Bonus: Hands-on experience building GenAI features or working with Large Language Models (LLMs), either through direct model integration or through AI APIs such as OpenAI, Google Cloud AI, or AWS Bedrock
- Bonus: Understanding of prompt engineering, fine-tuning models, or deploying AI services in production environments