Tech Stack
Airflow, Amazon Redshift, AWS, Azure, Cloud, Google Cloud Platform, Kafka, Python, Spark, SQL
About the role
- Design, build, and maintain robust data pipelines for batch and real-time processing across cloud platforms (Azure, AWS, GCP).
- Architect scalable data warehousing solutions using Snowflake, Amazon Redshift, and Azure Synapse Analytics.
- Implement data segregation, de-duplication, cleanup, and persistence strategies to ensure high-quality, reliable datasets.
- Integrate OLTP, OLAP, and time-series database systems into unified data platforms for analytics and operational use.
- Ensure secure and efficient data exposure for downstream consumers including BI, analytics, and ML Ops workflows.
- Collaborate with DevOps and SRE teams to implement logging, metrics, observability, and distributed tracing across data systems.
- Optimize data infrastructure for scalability, performance, and reliability under high-volume workloads.
- Contribute to data governance, security, and compliance initiatives (e.g., GDPR, HIPAA, SOC 2).
- Evaluate and integrate emerging technologies to enhance data capabilities and system resilience.
- Mentor junior engineers and contribute to technical leadership within the team.
- Comfortable with agile processes and rotating on-call duty.
Requirements
- 7+ years of hands-on experience in data engineering, with a strong track record of delivering complex data solutions.
- Deep expertise in cloud-native data platforms.
- Strong understanding of OLTP, OLAP, and time-series database architectures.
- Proficiency in building data pipelines using Spark, Kafka, Flink, Airflow, and DBT.
- Advanced SQL and Python skills, with experience in distributed computing and performance tuning.
- Experience with observability tools and practices.
- Solid grasp of Data Security protocols, encryption standards, and access control mechanisms.
- Strong communication and collaboration skills, with the ability to work across teams and influence technical direction.
- Experience with data cataloguing and metadata management tools.
- Background in supporting AI/ML applications with robust data infrastructure.
Benefits
- A competitive base pay
- Medical Benefits
- Discretionary incentive plan based on individual and company performance
- Focus on development: access to a learning & development platform and the opportunity to own your career path
- Flexible work policy.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, data pipelines, SQL, Python, Spark, Kafka, Flink, Airflow, DBT, data warehousing
Soft skills
communication, collaboration, mentoring, technical leadership, influencing, agile processes, problem-solving, teamwork, adaptability, critical thinking