Salary
💰 $118,139 - $177,209 per year
Tech Stack
Apache, Cloud, ETL, Java, Python, Scala, Spark
About the role
- Architect, design, and develop high-performance, scalable data pipelines.
- Develop large-scale, complex Big Data ETL using modern frameworks such as DBT and Databricks Delta Live Tables (DLT).
- Define and implement data lake architectures and open table formats (e.g., Delta Lake, Iceberg); a minimal pipeline sketch follows this list.
- Establish and enforce best practices for data engineering, ensuring data quality, integrity, and performance.
- Drive observability and monitoring for data pipelines, implementing data catalogs and lineage tracking.
- Own critical modules for complex, large-scale batch ETL models, and articulate those models to Product, Engineering, and Leadership.
- Develop complex semantic layers and help create customer-facing self-service analytics.
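To illustrate the kind of work involved, here is a minimal sketch of a batch ETL pipeline in PySpark writing to Delta Lake. The bucket paths, column names, and quality gate are hypothetical assumptions for illustration, not details of this role's actual systems:

```python
# Minimal batch ETL sketch: raw JSON events -> cleaned Delta table.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-batch-etl")
    # Delta Lake support assumes the delta-spark package is available.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Extract: read one day's partition of raw events (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/orders/dt=2024-01-01/")

# Transform: deduplicate, enforce typing, stamp ingestion time.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total").isNotNull())
       .withColumn("order_total", F.col("order_total").cast("decimal(12,2)"))
       .withColumn("ingested_at", F.current_timestamp())
)

# Simple data-quality gate before publishing (the rule is an assumption).
if cleaned.count() == 0:
    raise ValueError("Quality gate failed: no rows survived cleansing")

# Load: append into a curated Delta table (hypothetical location).
cleaned.write.format("delta").mode("append").save(
    "s3://example-bucket/curated/orders/"
)
```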
Requirements
- 4+ years of experience in Data Engineering
- Proficiency in at least one language such as Python, Scala, Java, or Go
- Ability to craft complex SQL for lakehouses or warehouses (see the query sketch after this list)
- Hands-on experience in Spark and/or Snowflake is preferred
- Firm grasp of lakehouse concepts (e.g., Apache Iceberg) and warehouse concepts
- Firm grasp of data warehouse modeling paradigms
- Firm understanding of cloud-based architectures
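As a sketch of what "complex SQL for lakehouses" can look like in practice, the query below ranks customers by monthly revenue through Spark SQL. The curated.orders table and its columns are hypothetical assumptions:

```python
# Illustrative analytical query over a lakehouse table via Spark SQL.
# The table `curated.orders` and its columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sql-sketch").getOrCreate()

top_customers = spark.sql("""
    WITH monthly AS (
        SELECT customer_id,
               date_trunc('MONTH', order_ts) AS month,
               SUM(order_total)              AS revenue
        FROM curated.orders
        GROUP BY customer_id, date_trunc('MONTH', order_ts)
    )
    SELECT *
    FROM (
        SELECT customer_id, month, revenue,
               RANK() OVER (PARTITION BY month ORDER BY revenue DESC) AS rnk
        FROM monthly
    ) ranked
    WHERE rnk <= 10
""")
top_customers.show()
```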
Benefits
- Health insurance
- Retirement plans
- Paid time off
- Flexible work arrangements
- Professional development opportunities
- Bonus programs
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, ETL, Big Data, DBT, Databricks Delta Live Tables, SQL, Spark, Snowflake, Lakehouse, data warehouse modeling
Soft skills
communication, leadership, problem-solving, collaboration, articulation