Design, build, and maintain large-scale, low-latency data stores that power personalized content recommendations at global scale
Develop and optimize backend APIs and services that reliably deliver personalization data to downstream clients for our hundreds of millions of streaming subscribers
Craft and evolve data production and ingestion pipelines to ensure timely, performant, and cost-effective persistence of foundational personalization data
Collaborate with machine learning engineers to design and implement elegant solutions used for feature storage, retrieval, and online inference
Champion operational excellence by leading observability best practices and participating in an on-call rotation for our tier-one critical services
Requirements
Bachelor’s degree in Computer Science (or related field), or equivalent work experience
5+ years of related experience building large-scale distributed systems and working with data at scale