Salary
💰 $136,600 - $182,600 per year
Tech Stack
Amazon Redshift, AWS, Azure, Cloud, Distributed Systems, Docker, Google Cloud Platform, Hadoop, Java, Kafka, Kubernetes, Python, Scala, Spark, SQL
About the role
- Design, build, and maintain scalable, high-performance data services and infrastructure
- Develop clean, robust, and well-tested software that processes large-scale datasets reliably
- Build APIs, frameworks, and libraries to enable consistent data ingestion, transformation, and serving
- Engineer high-availability and fault-tolerant systems for data processing and storage
- Collaborate with partner teams to integrate with upstream systems and enable data-driven features across the company
- Evaluate and implement distributed storage, compute, and query technologies
- Continuously improve the reliability, efficiency, and observability of our data platform
- Automate testing, deployment, and monitoring to ensure data service SLAs are consistently met
Requirements
- B.S. degree in Computer Science or a related technical field (or equivalent experience)
- 5+ years of experience as a Software Engineer or Data Engineer building production systems at scale
- Strong software engineering fundamentals: design patterns, code quality, testing, debugging, and performance optimization
- Proficiency in one or more programming languages (Python, Java, Scala, or similar)
- Solid understanding of distributed systems concepts
- Hands-on experience with big data technologies (e.g., Snowflake, Redshift, Spark, Hadoop/Hive)
- Strong SQL skills; experience with Snowflake is a plus
- Familiarity with modern data tooling (e.g., dbt for transformations, Kafka or Kinesis for streaming) is desirable
- Experience with CI/CD, containerization (Docker, Kubernetes), and cloud infrastructure (AWS, GCP, or Azure) is a plus