SingleOps

Senior Data Engineer

full-time

Location: 🇺🇸 United States

Job Level

Senior

Tech Stack

Airflow, Azure, Cloud, Python, Scala, SQL, Terraform

About the role

  • Build and maintain scalable, modular data pipelines using tools like dbt and Azure Data Factory
  • Design batch and streaming data workflows that support near-real-time reporting and operational intelligence
  • Deliver high-quality, trusted datasets to enable analytics, dashboards, embedded apps, and AI use cases
  • Influence and guide the evolution of our data platform tooling and architectural decisions
  • Contribute to structured architectural patterns such as Medallion for layered, reusable data models
  • Drive data quality through testing, observability, and proactive alerting (e.g., dbt tests, data contracts)
  • Partner across engineering, product, and analytics teams to improve velocity, reusability, and access to data through documentation, lineage, and governance
  • Collaborate on architecture and tooling that powers insight and action for customer-facing and internal use cases

Requirements

  • 5+ years of experience in data engineering or analytics engineering roles
  • Deep mastery of SQL and extensive hands-on experience with Snowflake
  • Strong experience with dbt or similar data transformation frameworks
  • Proficient in Python, Scala, or similar languages used in data pipeline logic/automation
  • Experience with orchestration tools like Azure Data Factory, Airflow, or similar
  • Comfortable working in a modern, git-based development environment with CI/CD
  • Experience with cloud-native data streaming technologies such as Azure Event Grid
  • Exposure to and understanding of data architecture patterns such as Medallion
  • Experience using Infrastructure as Code tooling; Terraform is a bonus
  • Bonus: Experience with Snowflake features such as Cortex, Data Shares, Snowpark
  • Bonus: Experience with data visualization tools such as Redash and GoodData
  • Bonus: Experience with semantic modeling or enabling data for AI applications
  • Experience productionizing batch and streaming pipelines that scale
  • Experience contributing to tooling decisions or platform evolution in a growing data team
  • Experience supporting external data products, analytics features, or ML/AI-powered applications
  • Ability to balance speed and governance across data lifecycle and tooling
  • Must be authorized to work for any employer in the United States; the company does not sponsor work authorization