Salary
💰 $92,000 - $160,000 per year
Tech Stack
Airflow, Azure, PySpark, Spark, SQL
About the role
- Demonstrate deep knowledge of the data engineering domain to build and support automated batch and streaming data pipelines.
- Provide consultation and lead the design and implementation of complex pipelines.
- Develop and maintain documentation relating to all assigned systems and projects.
- Tune queries running over billions of rows of data using Spark as a distributed compute engine.
- Perform root cause analysis to identify permanent resolutions to software or business process issues.
- Manage communication with project stakeholders as you develop and execute on timelines.
Requirements
- Proven industry experience executing data engineering, analytics, and/or data science projects
- Strong-to-expert Spark (PySpark) skills on Databricks
- Hands-on data pipeline development and real-world experience with orchestration tools such as Azure Data Factory (ADF) or Airflow
- Strong-to-expert SQL skills
- Collaborative, proactive, and communicative; able to work remotely while staying engaged as a team member
- Experience analyzing needs and designing high volume pipelines
Benefits
- Flexibility, with remote and hybrid work options (country-dependent)
- Career advancement, with international mobility and professional development programs
- Learning and development, with access to cutting-edge tools, training, and industry experts
- Medical, dental, and vision insurance for you and your family, plus employer contributions to Health Savings Accounts
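Orchestration tools like Airflow and ADF, named in the requirements above, model a pipeline as a DAG of dependent tasks. A stdlib-only sketch of that idea (task names are hypothetical, not from the posting) uses Python's `graphlib` to compute a valid execution order:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# mirroring how Airflow/ADF represent a pipeline as a DAG
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "notify": {"load"},
}

# Tasks emerge in an order that respects every dependency
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

In a real orchestrator, each node would be an operator or activity with retries, scheduling, and alerting attached; the dependency graph itself is the shared concept.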
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, data pipelines, Spark, PySpark, SQL, Azure Data Factory, Airflow, data analytics, data science, root cause analysis
Soft skills
collaborative, proactive, communicative, teamwork, remote work engagement