Salary
💰 $200,000 - $280,000 per year
Tech Stack
Apollo, AWS, Azure, Cloud, Google Cloud Platform, Python
About the role
- Build and productionize advanced AI systems powered by LLMs and intelligent agents, including the AI Assistant, Autonomous AI Agents, Deep Research Agents, Conversational Assistant, Semantic Search, Search Personalization, and AI-Powered Automation
- Build sophisticated multi-agent systems that can reason, plan, and execute complex sales workflows
- Develop systems that maintain conversational context across complex multi-turn interactions
- Build scalable large language model and agentic platforms that enable agent development across the Apollo ecosystem
- Design and deploy production LLM systems with high availability and performance requirements
- Create sophisticated AI agents that chain multiple LLM calls, integrate with external APIs, and maintain state across complex workflows
- Develop and optimize prompting strategies, applying advanced prompting techniques
- Build robust APIs and integrate AI capabilities with existing Apollo infrastructure and external services
- Implement evaluation frameworks, A/B testing, and monitoring systems for accuracy, safety, and reliability
- Optimize for cost, latency, and scalability across different LLM providers and deployment scenarios
- Collaborate with product teams, backend engineers, and stakeholders to translate business requirements into technical AI solutions
Requirements
- 8+ years of software engineering experience with a focus on production systems
- 1.5+ years of recent (2023 to present) hands-on LLM experience building real applications with GPT, Claude, Llama, or other modern LLMs
- Demonstrated experience building customer-facing, scalable LLM-powered products with real user usage (not just POCs or internal tools)
- Experience building multi-step AI agents, LLM chaining, and complex workflow automation
- Deep understanding of prompting strategies, few-shot learning, chain-of-thought reasoning, and prompt optimization techniques
- Expert-level Python skills for production AI systems
- Strong experience building scalable backend systems, APIs, and distributed architectures
- Experience with LangChain, LlamaIndex, or other LLM application frameworks
- Proven ability to integrate multiple APIs and services to create advanced AI capabilities
- Experience deploying and managing AI models in cloud environments (AWS, GCP, Azure)
- Experience implementing rigorous evaluation frameworks for LLM systems including accuracy, safety, and performance metrics
- Understanding of experimental design for AI system optimization (A/B testing)
- Experience with production monitoring, alerting, and debugging complex AI systems
- Experience building and maintaining scalable data pipelines that power AI systems