Salary
💰 $176,166 - $251,666 per year
Tech Stack
Airflow, Apache, AWS, Cloud, Google Cloud Platform, Python, PyTorch, Ray, TensorFlow
About the role
- Lead the ML strategy for a 200+ person organization by defining and maintaining a clear roadmap for the Spotify for Creators and Megaphone apps, collaborating closely with engineers and PMs to align on product requirements and enhance impact.
- Advise leadership on ML initiatives, guiding prioritization and shaping decisions on the development and rollout of ML features.
- Serve as the liaison between ML efforts within the Podcast mission and ML engineers/PMs across Spotify, ensuring a deep understanding of existing systems and facilitating their seamless integration into the Spotify for Creators product.
- Mentor engineers in ML practices to level up Podcast Mission’s capabilities in the space.
- Work closely with backend, data, and client engineers as well as PMs and designers to ship features that directly impact podcast engagement metrics.
- Prototype novel agentic workflows (e.g., multi-component pipelines or tool-using agents) and contribute to experimentation around Creator-Consumer interactivity.
- Own and evolve the ML model lifecycle: data annotation, data pipeline construction, feature engineering, model training, deployment, and monitoring.
- Design, develop, fine-tune, and deploy machine learning systems that power podcast growth.
- Optimize models for scale and reliability using modern ML infra tools like Ray, Apache Beam, and Google Cloud Platform (GCP).
- Lead with an experimentation mindset. Implement A/B tests and contribute to continuous model evaluation and improvement loops to productionize solutions at scale for our millions of active podcast users.
- Participate in Spotify’s ML community: share findings, explore new tools and paradigms, and contribute to scaling ML standards across the company.
Requirements
- You have 5+ years of professional experience in machine learning, with expertise in building and productionizing ML systems at scale.
- You’re fluent in Python; experience with PyTorch or TensorFlow is a strong plus.
- You’ve worked with large-scale data systems and owned end-to-end ML workflows (data ingestion to serving).
- You have hands-on experience with cloud platforms like GCP or AWS, and with ML infra tools such as Ray, Apache Beam, or Airflow.
- You have experience, or strong interest, in agent-based systems and LLM integrations.
- You thrive in agile environments, care deeply about product impact, and bring a user-centered outlook to ML development.
- Bonus: Experience with content recommendation, interactive media formats, or real-time systems.
- Bonus: You’ve led initiatives within an organization, expertly balancing trade-offs in large-scale systems, engaging with collaborators, and crafting clear, actionable roadmaps.