Carrier

Senior Data Engineer

Full-time

Location Type: Hybrid

Location: Bangalore, India

About the role

  • The Senior Data Engineer – Lakehouse & Data Products is responsible for designing, building, and operating scalable, production‑grade data solutions and data products using a modern Lakehouse architecture deployed across AWS and Google Cloud Platform (GCP)
  • Collaborate closely with data product owners, architects, and platform teams to deliver trusted, reusable, analytics‑ready data products that support reporting, advanced analytics, and AI/ML use cases
  • Design and implement Lakehouse architectures using object storage and open table formats (e.g., Apache Iceberg) to support ACID transactions, schema evolution, and time‑travel (a minimal sketch follows this list)
  • Build and maintain batch and streaming data pipelines on AWS and GCP
  • Implement medallion architecture patterns consistently across clouds
  • Develop curated, governed, and consumption‑ready data products aligned to business domains
  • Partner with Data Product Owners and stakeholders to translate requirements into robust technical implementations
  • Ensure data products are discoverable, reusable, and well‑documented, supporting analytics and downstream AI/ML use cases
  • Design and maintain analytical data models optimized for BI, reporting, and advanced analytics
  • Implement data quality checks, validation rules, and monitoring within pipelines
  • Manage schema evolution and ensure adherence to enterprise data standards across AWS and GCP
  • Apply DataOps and DevOps best practices by implementing CI/CD pipelines for data pipelines and data products
  • Collaborate with platform and DevOps teams to improve observability, reliability, and operational maturity
  • Optimize pipelines for performance, scalability, and cost efficiency on both AWS and GCP
  • Monitor and tune compute and storage usage in collaboration with platform and FinOps teams
  • Act as a senior technical contributor and mentor for junior and mid‑level data engineers
  • Participate in design and code reviews to maintain high engineering standards
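
To illustrate the table‑format capabilities named in the responsibilities above (ACID transactions, schema evolution, time‑travel), here is a minimal PySpark sketch against Apache Iceberg. The catalog name, warehouse bucket, namespace, and table are hypothetical, and the Iceberg Spark runtime plus a cloud‑storage connector are assumed to be on the classpath; this is an illustration, not a description of Carrier's actual stack.

```python
from pyspark.sql import SparkSession

# Hypothetical catalog ("lakehouse"), bucket, and namespace; requires the
# Iceberg Spark runtime and an S3/GCS connector on the classpath.
spark = (
    SparkSession.builder
    .appName("iceberg-lakehouse-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "hadoop")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# ACID table on object storage, partitioned with a hidden-partitioning transform.
spark.sql("CREATE NAMESPACE IF NOT EXISTS lakehouse.silver")
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.silver.orders (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DECIMAL(10, 2),
        order_ts    TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(order_ts))
""")
spark.sql("""
    INSERT INTO lakehouse.silver.orders
    VALUES (1, 100, 25.50, TIMESTAMP '2024-01-15 10:30:00')
""")

# Schema evolution: add a column without rewriting existing data files.
spark.sql("ALTER TABLE lakehouse.silver.orders ADD COLUMN channel STRING")
spark.sql("""
    INSERT INTO lakehouse.silver.orders
    VALUES (2, 101, 42.00, TIMESTAMP '2024-01-16 09:00:00', 'web')
""")

# Time travel: read the table as of its first snapshot, before the new column.
first_snapshot = spark.sql(
    "SELECT snapshot_id FROM lakehouse.silver.orders.snapshots ORDER BY committed_at"
).first()["snapshot_id"]
spark.sql(f"SELECT * FROM lakehouse.silver.orders VERSION AS OF {first_snapshot}").show()
```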

Requirements

  • 4 to 6 years of experience in data engineering or related roles
  • Strong hands‑on experience building Lakehouse‑based data platforms
  • Proven experience with AWS data services (e.g., S3, Glue, Kinesis, Athena, Redshift, EMR)
  • Proven experience with GCP data services (e.g., GCS, Dataproc, Dataflow, BigQuery, Pub/Sub)
  • Experience using open table formats such as Apache Iceberg (or equivalent)
  • Proficiency in Python and SQL
  • Strong understanding of batch and streaming data processing
  • Experience delivering production‑ready data products (a batch‑processing sketch with basic quality checks follows this list)
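
As a sketch of the batch‑processing and data‑quality expectations above, the snippet below shows a simple bronze‑to‑silver (medallion) batch step in PySpark with a few validation rules and a quality gate. Table names, columns, and the rejection threshold are hypothetical placeholders, not details from this role.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical medallion tables (bronze -> silver); column names and the
# 5% rejection threshold are illustrative only.
spark = SparkSession.builder.appName("batch-quality-sketch").getOrCreate()

bronze = spark.read.table("lakehouse.bronze.orders_raw")

# Validation rules: required keys present, non-negative amounts, deduplicated keys.
validated = (
    bronze
    .filter(F.col("order_id").isNotNull() & F.col("customer_id").isNotNull())
    .filter(F.col("amount") >= 0)
    .dropDuplicates(["order_id"])
)

# Quality gate: fail the run if too many rows were rejected by the rules above.
total = bronze.count()
kept = validated.count()
rejected_ratio = (total - kept) / total if total else 0.0
if rejected_ratio > 0.05:
    raise ValueError(f"Data quality gate failed: {rejected_ratio:.1%} of rows rejected")

# Publish the curated, consumption-ready slice to the silver layer
# (the target table is assumed to already exist).
(
    validated
    .withColumn("ingested_at", F.current_timestamp())
    .writeTo("lakehouse.silver.orders_curated")
    .append()
)
```
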
Benefits

  • Enjoy your best years with our retirement savings plan
  • Have peace of mind and body with our health insurance
  • Make yourself a priority with flexible schedules, parental leave and our holiday purchase scheme
  • Drive forward your career through professional development opportunities
  • Achieve your personal goals with our Employee Assistance Programme

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, Lakehouse architecture, batch data processing, streaming data processing, data quality checks, CI/CD pipelines, data modeling, schema evolution, Python, SQL
Soft Skills
collaboration, mentorship, communication, problem-solving, technical leadership