Tech Stack
AWS, Cloud, Docker, Go, Google Cloud Platform, Kubernetes, Python
About the role
- Design, develop, and deploy innovative AI applications using foundation models (LLMs, Multimodal Models) across Mercari.
- Build foundational platforms and services enabling product teams to ship scalable AI features.
- Build foundational AI services using prompt engineering, RAG, and fine-tuning to solve high-impact problems.
- Collaborate with product, engineering, and infrastructure teams to integrate AI into the Mercari app and architect modular scalable solutions.
- Develop evaluation pipelines to measure, monitor, and improve model performance focusing on accuracy, latency, cost, and responsible AI.
- Own features end-to-end: problem scoping, prototyping, production deployment, maintenance, and iteration based on metrics and user feedback.
- Stay at the forefront of AI engineering advancements and share knowledge to raise the organization's capabilities.
Requirements
- 2-6 years of professional software engineering experience, with a demonstrated focus on building and deploying AI/ML-powered systems in production.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proficiency in Python and hands-on experience building applications with Large Language Models (LLMs) and familiarity with core concepts such as prompt engineering, RAG, and fine-tuning.
- Experience building Model Context Protocol (MCP) servers in either Python or Go.
- Solid understanding of software engineering fundamentals and the ability to write clean, maintainable, and production-ready code.
- Strong problem-solving skills and a data-driven approach to decision-making.
- Excellent communication skills to collaborate effectively with both technical and non-technical stakeholders across multiple teams.
- Preferred Skills:
- Experience designing and implementing systematic evaluation strategies for generative AI systems (e.g., AI-as-judge, human-in-the-loop).
- Experience with cloud platforms (GCP, AWS, etc.) and containerization technologies (Docker, Kubernetes).
- Contributions to open-source projects or publications in top-tier AI/ML conferences.
- Experience with microservice architecture and development.