Responsibilities
Design, build and maintain large-scale, low-latency data stores that power personalized content recommendations at global scale
Develop and optimize backend APIs and services that reliably deliver personalization data to downstream clients serving our hundreds of millions of streaming subscribers
Craft and evolve data production and ingestion pipelines to ensure timely, performant and economical persistence of foundational personalization data
Collaborate with machine learning engineers to design and implement elegant solutions used for feature storage, retrieval, and online inference
Champion operational excellence by driving observability best practices and participating in an on-call rotation for our tier-one critical services
Requirements
Bachelor’s degree in Computer Science (or related field), or equivalent work experience
5+ years of related experience working with large-scale distributed systems and data
Strong programming skills in Java, Python, or a comparable object-oriented language
Hands-on experience with NoSQL data stores (e.g. DynamoDB, Cassandra, ScyllaDB)
Familiarity with Databricks or equivalent large-scale data processing platforms
Demonstrated knowledge of caching technologies
Demonstrated knowledge of operating within a Public Cloud Provider (e.g. AWS, Microsoft Azure, Google Cloud)
Demonstrated knowledge of source control systems and CI/CD pipelines
Strong communication skills, both written and verbal
Benefits
A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits