Tech Stack
AWS, Azure, Cloud, Google Cloud Platform, Grafana, Python, SQL
About the role
- Architect and deploy Generative AI solutions, including LLM-based workflows and production-grade conversational AI systems
- Mentor junior AI engineers and ensure timely delivery of high-quality implementations
- Collaborate with enterprise clients to understand objectives and translate them into actionable AI workflows
- Design and integrate complex software systems with APIs, ensuring end-to-end functionality
- Oversee cloud deployment on platforms like AWS, GCP, or Azure
- Conduct A/B and A/A testing to refine workflows and improve prompt accuracy
- Implement monitoring tools for real-time metrics and system reliability
- Align AI initiatives with organizational goals and report progress to leadership
- Stay updated on the latest trends and innovations in Generative AI to ensure state-of-the-art solutions
Requirements
- 5+ years in customer-facing roles, consulting, or solution delivery for enterprise clients
- Hands-on experience with LLM-based solutions, including prompt engineering and multi-step workflows
- Proficiency in Python and SQL for AI workflow orchestration and data handling
- Expertise in cloud platforms (AWS, GCP, or Azure) and API integration
- Experience scaling AI workflows in a startup environment (preferred)
- Familiarity with production-grade conversational AI (e.g., chatbots, voicebots) (preferred)
- Knowledge of monitoring tools like Grafana or Datadog (preferred)
- Strong problem-solving and troubleshooting skills, especially with LLM outputs and API integrations
- Strategic mindset with the ability to align AI efforts with business objectives