Medsuite Inc

Snowflake Data Engineer

full-time

Posted on:

Location: 🇮🇳 India

Job Level

Mid-Level · Senior

Tech Stack

Apache · Azure · Cloud · ETL · Kafka · Matillion · Python · Spark · SQL

About the role

  • Participate in the design, development, and maintenance of a scalable Snowflake data solution serving our enterprise data & analytics team.
  • Implement data pipelines, ETL/ELT workflows, and data warehouse solutions using Snowflake and related technologies.
  • Optimize Snowflake database performance.
  • Collaborate with cross-functional teams of data analysts, business analysts, data scientists, and software engineers to define and implement data solutions.
  • Ensure data quality, integrity, and governance.
  • Troubleshoot and resolve data-related issues, ensuring high availability and performance of the data platform.
  • Conduct independent data analysis and data discovery to understand existing source systems, fact and dimension data models.
  • Implement an enterprise data warehouse solution in Snowflake and take direction from the Lead Snowflake Data Engineer and Director of Data Engineering.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
  • 4+ years of in-depth data engineering experience, with at least 1 year of dedicated experience engineering solutions in an enterprise-scale Snowflake environment.
  • Practical expertise in ANSI SQL, performance tuning, and data modeling techniques.
  • Strong experience with cloud platforms (Azure preferred) and their data services.
  • Experience in ETL/ELT development using tools such as Azure Data Factory, dbt, Matillion, Talend, or Fivetran.
  • Hands-on experience with scripting languages like Python for data processing.
  • Snowflake SnowPro certification, with preference for the data engineering certification track.
  • Experience with CI/CD pipelines, DevOps practices, and Infrastructure as Code (IaC).
  • Knowledge of streaming data processing frameworks such as Apache Kafka or Spark Streaming.
  • Familiarity with BI and visualization tools such as Power BI.
  • Familiarity working in an agile Scrum team, including sprint planning, daily stand-ups, backlog grooming, and retrospectives.
  • Ability to self-manage medium-complexity deliverables and document user stories and tasks in Azure DevOps.
  • Personal accountability to committed sprint user stories and tasks.
  • Strong analytical and problem-solving skills, with the ability to handle complex data challenges.
  • Strong oral, written, and interpersonal communication skills.
  • Ability to read, understand, and apply state/federal laws, regulations, and policies.