Tech Stack
Docker, JavaScript, Kubernetes, Node.js, Python, TypeScript
About the role
- Design, develop, and optimize AI/ML solutions, with a focus on LLMs and NLP applications.
- Fine-tune, deploy, and maintain LLMs (OpenAI, Llama, Claude, Gemini, Grok).
- Implement semantic search, embeddings, and vector-based solutions (see the sketch after this list).
- Integrate AI components using orchestration frameworks (LangChain, LangGraph, LlamaIndex).
- Ensure reliable deployment with Docker/Kubernetes and modern DevOps practices.
- Evaluate model performance using appropriate metrics and methodologies.
- Collaborate with cross-functional teams to deliver AI-powered products.
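To give a flavor of the semantic-search and embeddings work mentioned above, here is a minimal sketch. It assumes the OpenAI embeddings API and an in-memory Faiss index; the model name, documents, and query are placeholders rather than the team's actual pipeline.

```python
# Minimal semantic-search sketch: embed documents, index them, query by similarity.
# Assumes the `openai` and `faiss-cpu` packages; model name and docs are illustrative.
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(texts: list[str]) -> np.ndarray:
    """Return L2-normalized embeddings so inner product equals cosine similarity."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data], dtype="float32")
    faiss.normalize_L2(vecs)
    return vecs


docs = [
    "Kubernetes schedules containers across a cluster.",
    "Embeddings map text to dense vectors for similarity search.",
    "Docker packages applications with their dependencies.",
]

doc_vecs = embed(docs)
index = faiss.IndexFlatIP(doc_vecs.shape[1])  # exact inner-product search
index.add(doc_vecs)

scores, ids = index.search(embed(["How do I find similar text?"]), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

In production this exact-search index would typically be replaced by a managed vector database (Pinecone, Weaviate, etc.), but the embed-index-query flow stays the same.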
Requirements
- 5+ years of overall engineering experience, including 1+ year in AI/ML.
- Strong proficiency in Python and Node.js (TypeScript).
- Practical experience with fine-tuning, deployment, and optimization of LLMs (OpenAI, Llama, Claude, Gemini, Grok).
- Solid expertise in NLP, prompt engineering, embeddings, and semantic search.
- Experience working with vector databases (Pinecone, Weaviate, Faiss, etc.).
- Familiarity with orchestration frameworks for LLMs (LangChain, LangGraph, LlamaIndex).
- Hands-on experience with Docker and Kubernetes.
- Good understanding of evaluation methods and metrics for LLMs (a small illustration follows at the end of this list).
- English level: Upper-Intermediate.
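As a small illustration of the evaluation experience expected, the sketch below runs a reference-based check over a tiny test set. It assumes the OpenAI chat completions API; the model name, questions, and scoring rule are placeholders, and a real evaluation suite would add task-specific metrics, larger datasets, and statistical reporting.

```python
# Sketch of a simple LLM evaluation loop: reference-based accuracy over a small test set.
# Assumes the `openai` package; model, prompts, and references are illustrative only.
from openai import OpenAI

client = OpenAI()

test_set = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "What is 2 + 2?", "answer": "4"},
]


def ask(question: str) -> str:
    """Query the model deterministically (temperature=0) for reproducible scoring."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


# Count a hit if the reference answer appears in the model's response.
hits = sum(case["answer"].lower() in ask(case["question"]).lower() for case in test_set)
print(f"accuracy: {hits / len(test_set):.2f}")
```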