Fine-tune and train LLMs on extensive datasets of inputs and existing outputs to continually improve accuracy, speed, and cost-efficiency.
Design and implement improvements to the existing API service, focusing on performance, scalability, and reliability.
Manage the end-to-end lifecycle of the AI models, including data preprocessing, training, evaluation, deployment, and monitoring.
Work with models deployed in various environments, including self-hosted Docker containers and cloud-based services like AWS Bedrock or Azure OpenAI.
Collaborate with software engineers to ensure the AI components are seamlessly integrated into our microservices architecture and meet the required service-level objectives.
Troubleshoot and resolve issues related to model performance, data parsing accuracy, and the API's operational health.
Requirements
A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related technical field.
A minimum of 5 years of professional experience in software development.
A minimum of 3 years of hands-on commercial experience in an AI/ML engineering role.
Proven experience training and fine-tuning Large Language Models (LLMs) for specific Natural Language Processing (NLP) tasks, such as data extraction, classification, and normalization.
Strong proficiency in Python and common AI/ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
Experience deploying machine learning models into production environments.
Practical experience with containerization technologies, specifically Docker.
Solid understanding of API design and development (e.g., REST).
Benefits
None