dentsu Austria

Senior Director, Data Engineering

Full-time

Location Type: Hybrid

Location: New York City; Colorado; Missouri; United States

Salary

💰 $113,000 - $182,850 per year

About the role

  • Build, scale, and maintain robust data pipelines/models using DBT, Python, PySpark, Databricks, and SQL
  • Design and manage semantic models, star schemas, ontologies, taxonomies, knowledge graphs, and glossaries using DBT YAML, GitHub, Unity Catalog, Fabric/OneLake, and Power BI
  • Utilize low-code/no-code tools (Trifacta, DBT, Power BI, Tableau, Fabric/OneLake, Copilot Studio)
  • Own AI deployment pipelines with containerized agents and automation using Kubernetes, n8n, LangChain, and Azure AI Foundry
  • Strengthen AI accuracy/governance via metadata, access controls, and grounding
  • Design modular, reusable data models for analytics, reporting, AI enablement, and agentic apps
  • Develop and monitor mapping tables, validation rules, lineage, error logging, and observability for ETL/ELT health
  • Collaborate with analysts, engineers, and stakeholders to transform raw data into governed datasets
  • Implement agentic AI and Copilot integrations to enhance data accessibility
  • Drive innovation on the Data Quality Suite roadmap
  • Contribute to medallion architecture (bronze/silver/gold) and best practices for reusable components
  • Manage Databricks Unity Catalog, Workflows, SQL Analytics, Notebooks, and Jobs for governed analytics
  • Develop pipelines and tools with Microsoft Fabric, Power BI, Power Apps, Azure Data Lake/Blob, and Copilot Studio

Requirements

  • 8+ years of experience as a Data Engineer or in a similar role
  • Bachelor's Degree in Computer Science, Engineering, Information Systems, or related field required
  • Advanced expertise in SQL, Python, DBT
  • Strong experience with PySpark, Databricks, and semantic layer tools like DBT YAML, Unity Catalog, and knowledge graphs required
  • Hands-on experience with ETL/ELT design tools such as Trifacta (Alteryx), Adverity, Azure Data Factory, Fabric/Power BI DAX, or similar
  • Proven experience building and extending semantic layers for AI applications
  • Deep experience in the Microsoft Tech Data Stack, including Power BI, Power Apps, Fabric/OneLake, Azure Data Lakes (ADLS Gen2), Azure Blob Storage
  • Experience with AI deployment and orchestration tools such as Kubernetes, n8n, LangChain
  • Strong experience in developing and managing API endpoints
  • Proficiency in Java or Scala for large-scale data processing
  • Experience supporting data observability and quality frameworks
  • Strong familiarity with Git-based development

Benefits

  • Medical, vision, and dental insurance
  • Life insurance
  • Short-term and long-term disability insurance
  • 401(k)
  • Flexible paid time off
  • At least 15 paid holidays per year
  • Paid sick and safe leave
  • Paid parental leave

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
SQL, Python, DBT, PySpark, Databricks, ETL, ELT, API development, Java, Scala
Soft skills
collaboration, innovation, data governance, problem-solving, communication