Docker, Inc

Staff Software Engineer, AI Gateway

full-time

Origin: 🇺🇸 United States

Salary

💰 $200,400 - $275,600 per year

Job Level

Lead

Tech Stack

AWS, Cloud, Docker, Go, Kubernetes, Microservices, Rust

About the role

  • At Docker, we make app development easier so developers can focus on what matters. Our remote-first team spans the globe, united by a passion for innovation and great developer experiences. With over 20 million monthly users and 20 billion image pulls, Docker is the #1 tool for building, sharing, and running apps—trusted by startups and Fortune 100s alike. We’re growing fast and just getting started. Come join us for a whale of a ride!
  • Docker AI Gateway is our answer to the complexity of taking AI agents from prototype to production. It’s a powerful, intelligent, and secure control point that eliminates the toil of model orchestration, tool management, observability, and governance—so developers can focus on building incredible AI agents, not gluing together infrastructure.
  • The Gateway sits at the center of modern AI applications, offering:
    • A model and tool routing layer with built-in security and cost optimization
    • A familiar OpenAI-compatible interface and MCP server
    • Unified observability and policy enforcement
    • Auto-RAG, tool injection, session summarization, and more
  • We’re just getting started—and we need exceptional engineers to help us build the backbone of the future of agent-based development.

Responsibilities

  • Design and implement core systems powering the AI Gateway, including the model router, MCP gateway, and control plane
  • Build infrastructure that supports dynamic model selection, auto-failover, cost-based routing, and policy enforcement
  • Own critical capabilities such as secure credential storage, session summarization, caching, and rate limiting
  • Develop APIs for developers building with OpenAI-compatible interfaces and the Model Context Protocol
  • Build the underlying infrastructure to support evaluation, telemetry, replay, and backtesting for agents and LLM workflows
  • Lead architectural decisions and mentor engineers as the team scales
  • Collaborate with product and design to create delightful experiences in our control plane UI
  • Contribute to roadmap planning, technical strategy, and cross-functional alignment

Key Problems You’ll Help Solve

  • Build a unified abstraction layer across diverse model and tool providers (OpenAI, Anthropic, Google, AWS Bedrock) (see the sketch after this list)
  • Implement secure and scalable identity and credential vaulting for tool and model access
  • Create infrastructure to support real-time and historical analytics of AI agent behavior
  • Ensure policy enforcement and logging work end-to-end—from prompt to tool to response
  • Develop seamless developer experiences through intuitive APIs and first-class observability
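The first key problem above, a unified abstraction layer with cost-based routing and auto-failover, is the kind of component this role owns. The sketch below is a rough illustration only, not Docker's actual design: a minimal Go routing layer that tries the cheapest available provider first and fails over on error. All type, field, and provider names are hypothetical.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sort"
)

// ChatRequest and ChatResponse loosely mirror the shape of an OpenAI-style
// chat completion call; the field names here are illustrative only.
type ChatRequest struct {
	Model    string
	Messages []string
}

type ChatResponse struct {
	Provider string
	Text     string
}

// Provider is a hypothetical unified interface over model backends
// such as OpenAI, Anthropic, Google, or AWS Bedrock.
type Provider interface {
	Name() string
	CostPerToken() float64
	Complete(ctx context.Context, req ChatRequest) (ChatResponse, error)
}

// Router picks the cheapest provider and falls back to the next one if the
// call fails: a simplified form of cost-based routing with auto-failover.
type Router struct {
	providers []Provider
}

func (r *Router) Complete(ctx context.Context, req ChatRequest) (ChatResponse, error) {
	// Sort candidates by cost so the cheapest backend is tried first.
	candidates := append([]Provider(nil), r.providers...)
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].CostPerToken() < candidates[j].CostPerToken()
	})

	var lastErr error
	for _, p := range candidates {
		resp, err := p.Complete(ctx, req)
		if err == nil {
			return resp, nil
		}
		lastErr = err // remember the failure and fall through to the next provider
	}
	return ChatResponse{}, fmt.Errorf("all providers failed: %w", lastErr)
}

// stubProvider is a stand-in backend used only to make the sketch runnable.
type stubProvider struct {
	name string
	cost float64
	fail bool
}

func (s stubProvider) Name() string          { return s.name }
func (s stubProvider) CostPerToken() float64 { return s.cost }
func (s stubProvider) Complete(_ context.Context, req ChatRequest) (ChatResponse, error) {
	if s.fail {
		return ChatResponse{}, errors.New(s.name + " unavailable")
	}
	return ChatResponse{Provider: s.name, Text: "echo: " + req.Messages[len(req.Messages)-1]}, nil
}

func main() {
	r := &Router{providers: []Provider{
		stubProvider{name: "cheap-but-down", cost: 0.1, fail: true},
		stubProvider{name: "fallback", cost: 0.5},
	}}
	resp, err := r.Complete(context.Background(), ChatRequest{Messages: []string{"hello"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Provider, resp.Text) // prints: fallback echo: hello
}
```

A production gateway would additionally wrap each provider call with credential handling, policy enforcement, and telemetry, as described in the responsibilities above.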

Requirements

  • 8+ years of backend engineering experience with production-grade systems
  • Deep knowledge of distributed and highly scalable systems, cloud-native infrastructure, and API design
  • Experience building secure, high-throughput services (e.g., gateways, proxies, load balancers, policy engines)
  • Fluency in Go and/or Rust (both preferred)
  • Familiarity with AI/ML platforms or model serving infrastructure
  • A strong product mindset—you're excited about building developer-facing tools
  • Ownership mentality with a bias for shipping, learning, and iterating