
Staff AI/ML Platform Engineer
Albert Invent
Full-time
Location Type: Remote
Location: California • United States
About the role
- You'll own the APIs, data pipelines, and workflow orchestration that power our AI products—from real-time model inference to long-running optimization pipelines.
- This role sits at the intersection of backend engineering and data engineering: you'll build the services that serve up models, manage workflows, and connect AI capabilities to the structured data that makes them useful.
- You'll work closely with our Active Learning and LLM/Agents team leads, translating their product vision into scalable, production-grade systems.
- The infrastructure you build will power model playgrounds for chemists, inverse design pipelines that optimize experiments across high-dimensional spaces, and orchestrated agent workflows that reason through complex scientific problems.
- Design and build high-performance Python APIs that serve models, manage workflows, and expose AI capabilities to the broader platform
- Architect backend services for scalability, reliability, and low latency
- Build integrations between AI/ML systems, graph databases, and external data sources
- Build and maintain long-running workflow pipelines using Ray and Temporal
- Design orchestration patterns for multi-step agent pipelines, batch inference, and numerical optimization workflows
- Ensure fault tolerance, graceful degradation, and efficient resource utilization
- Architect and maintain data pipelines that feed AI/ML workflows
- Work with Neptune (graph), Redis, DynamoDB, and other data stores to enable efficient data access patterns
- Implement observability including logging, metrics, tracing, and alerting
- Own system reliability—troubleshoot issues, conduct post-mortems, and continuously improve
- Design CI/CD pipelines and promote automation best practices
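The fault-tolerance and graceful-degradation responsibilities above can be sketched with stdlib asyncio: retry a flaky inference call with exponential backoff, then fall back to a cached answer instead of failing hard. `call_model` and the fallback value are hypothetical stand-ins, not part of Albert Invent's stack:

```python
import asyncio
import random

async def call_model(prompt: str) -> str:
    # Hypothetical inference call; fails randomly to simulate a flaky backend.
    if random.random() < 0.5:
        raise ConnectionError("inference backend unavailable")
    return f"prediction for {prompt!r}"

async def infer_with_fallback(prompt: str, retries: int = 3) -> str:
    """Retry with exponential backoff; degrade to a cached answer on exhaustion."""
    delay = 0.01
    for _ in range(retries):
        try:
            return await call_model(prompt)
        except ConnectionError:
            await asyncio.sleep(delay)
            delay *= 2  # back off between attempts
    return "cached-fallback"  # graceful degradation instead of a hard failure

result = asyncio.run(infer_with_fallback("ethanol solubility"))
```

The caller always gets an answer: either a fresh prediction or the labeled fallback, which downstream services can detect and handle.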
Requirements
- Deep expertise in Python backend development and building production APIs
- Experience designing and operating data pipelines and workflow orchestration systems
- A builder's mindset—you want to create foundational systems that others build on
- Genuine curiosity about how your work enables scientific discovery
- A commitment to rigor: AI makes mistakes confidently, and our customers won't accept hand-waving—neither should we
- A degree in Computer Science or a related field with 7+ years of industry experience (Bachelor's) or 5+ years (Master's or PhD) in software engineering
- Advanced proficiency in Python including async programming and performance optimization
- Experience building and maintaining REST APIs using FastAPI or similar frameworks
- Experience with workflow orchestration tools (Ray, Temporal, or similar)
- Strong background in data engineering: pipelines, transformations, and working with diverse data stores
- Experience with cloud platforms (AWS preferred) and containerization (Docker, Kubernetes)
- Familiarity with graph databases, key-value stores, or other NoSQL systems (Neptune, Redis, DynamoDB a plus)
- Track record of operating production systems at scale
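The workflow-orchestration experience listed above centers on durable, resumable pipelines. A minimal stdlib sketch of the checkpoint-and-resume idea behind tools like Temporal (a JSON file stands in for a real state store; the step names are hypothetical):

```python
import json
import os
import tempfile

def run_pipeline(steps, state_path):
    """Run named steps in order, checkpointing after each so a crashed or
    restarted worker resumes where it left off."""
    state = {"done": [], "value": 0}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)  # resume from the last checkpoint
    for name, fn in steps:
        if name in state["done"]:
            continue  # step already completed in a previous run
        state["value"] = fn(state["value"])
        state["done"].append(name)
        with open(state_path, "w") as f:
            json.dump(state, f)  # checkpoint before moving on
    return state["value"]

# Hypothetical two-step pipeline: featurize, then score.
checkpoint = os.path.join(tempfile.mkdtemp(), "pipeline.json")
result = run_pipeline([("featurize", lambda v: v + 1),
                       ("score", lambda v: v * 2)], checkpoint)
```

Re-running with the same checkpoint file skips completed steps, which is the property that makes long-running optimization workflows safe to restart.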
Benefits
- We care about you.
- We love distributed teams.
- We value diversity.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, API development, data pipelines, workflow orchestration, async programming, performance optimization, REST APIs, FastAPI, cloud platforms, containerization
Soft Skills
builder's mindset, genuine curiosity, commitment to rigor
Certifications
Bachelor's in Computer Science, Master's in Computer Science, PhD in Computer Science