Salary
💰 $40 - $60 per hour
Tech Stack
Amazon Redshift, BigQuery, Python, Spark, SQL
About the role
- Assist in designing, building, and deploying systems that define and track key performance indicators (KPIs) and engineering metrics.
- Help develop and maintain distributed data processing pipelines and system performance evaluation frameworks.
- Work closely with mentors and team members across data science, engineering, and product management to understand data needs and contribute to effective solutions.
- Help investigate and resolve stability and performance issues in our data ingestion and processing platforms.
- Support our data integrity monitoring and validation systems to ensure that company-wide metrics are based on accurate and reliable data.
Requirements
- Currently pursuing a BS, MS, or PhD degree in Computer Science, Data Science, or a related technical field.
- Strong programming skills in Python and C++.
- Solid understanding of data structures and algorithms from your coursework.
- Familiarity with data pipelines and SQL, ideally through academic projects or previous internships.
- Strong problem-solving, communication, and collaboration skills.
- Eagerness to learn and a passion for working with large-scale data.