Data Engineer

TeamViewer

Full-time

Location Type: Hybrid

Location: Linz, Austria

Salary

€3,843 per month

Job Level

Mid-Level, Senior

Tech Stack

Apache, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, Hadoop, Kafka, NoSQL, Python, Spark, SQL

About the role

  • Design and implement effective data models and table structures across various storage systems, including relational databases, NoSQL stores, data warehouses, and data lakes
  • Build, maintain, and optimize robust data pipelines (ETL/ELT) to ingest, transform, and load data from production systems and external sources
  • Use workflow orchestration tools to schedule, automate, and monitor data pipelines, ensuring their reliability and performance
  • Define and implement data quality standards and processes (e.g., bronze, silver, gold tiering), including handling missing values and ensuring data integrity, accuracy, and completeness (see the sketch after this list)
  • Establish and enforce data governance policies and procedures, manage data lineage and metadata, implement access controls and encryption, and support compliance with data privacy regulations (e.g., GDPR, CCPA)
  • Implement and manage scalable data platforms (data warehouses, data lakes) to support efficient analytics, feature engineering, and model training for AI applications
  • Conduct statistical analyses and evaluations of datasets, and develop dashboards or monitoring systems to track pipeline health and data quality metrics
  • Collaborate closely with AI Engineers, AI Software Engineers, QA Engineers, and Data Analysts to understand data requirements and deliver reliable, high-quality data solutions
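
For illustration only (this sketch is not part of the posting): a minimal Python example of the bronze/silver/gold tiering mentioned above, using pandas. All paths, column names, and quality rules are hypothetical assumptions.

  # Hypothetical bronze -> silver -> gold tiering sketch with pandas.
  # Paths, column names, and quality rules are illustrative, not from the posting.
  import pandas as pd

  def to_silver(bronze_path: str) -> pd.DataFrame:
      # Bronze holds raw ingested data; silver is validated and deduplicated.
      df = pd.read_parquet(bronze_path)
      df = df.dropna(subset=["event_id"])        # integrity: key must be present
      df["amount"] = df["amount"].fillna(0.0)    # completeness: impute missing values
      return df.drop_duplicates(subset=["event_id"])

  def to_gold(silver: pd.DataFrame) -> pd.DataFrame:
      # Gold is the analytics-ready aggregate consumed by dashboards and models.
      return (silver.groupby("customer_id", as_index=False)
                    .agg(total_amount=("amount", "sum"),
                         events=("event_id", "count")))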

Requirements

  • Proven experience as a Data Engineer, including designing and building data pipelines and infrastructure
  • Strong proficiency in SQL and Python for data manipulation, automation, and pipeline development (a small illustration follows this list)
  • Hands-on experience with cloud platforms (e.g., GCP, Azure) and their respective data services (e.g., BigQuery, Azure Data Factory, Databricks)
  • Familiarity with big data tools and frameworks such as Apache Spark, Kafka, or Hadoop
  • Solid understanding of data modeling, ETL/ELT development, and data warehousing concepts
  • Experience with data quality management and data governance principles
  • Proficiency with version control systems (e.g., Git)
  • Excellent problem-solving skills and high attention to detail
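
As a hedged illustration of the SQL-plus-Python proficiency listed above (again, not part of the posting), the following self-contained sketch uses Python's standard-library sqlite3; the schema and values are invented.

  # Hypothetical SQL + Python sketch using only the standard library.
  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
  conn.executemany(
      "INSERT INTO orders VALUES (?, ?, ?)",
      [(1, "EU", 120.0), (2, "EU", None), (3, "US", 80.0)],
  )

  # A simple data quality check expressed in SQL: rows with missing amounts.
  missing = conn.execute(
      "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
  ).fetchone()[0]
  print(f"rows with missing amount: {missing}")

  # Aggregate per region, excluding incomplete rows.
  for region, total in conn.execute(
      "SELECT region, SUM(amount) FROM orders "
      "WHERE amount IS NOT NULL GROUP BY region"
  ):
      print(region, total)
  conn.close()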

Benefits

  • 28 days of well-deserved holidays
  • Onsite Onboarding in our HQ office for an optimal start
  • Great compensation and benefits packages including company achievement bonus and stock-based options
  • Regular salary reviews
  • Public transport friendly office
  • Special terms for local gyms
  • Access to Corporate Benefits platform with many discounts
  • Regular Team events and company-wide celebrations
  • Open-door policy, no dress code, frequent All Hands and Leadership Lunches
  • Hybrid and Flexible work time with up to 50% home office
  • Work From Abroad Program allowing up to 40 days of work outside your contracting country

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
SQL, Python, ETL, ELT, data modeling, data warehousing, data quality management, data governance, statistical analysis, data pipeline development
Soft skills
problem-solving, attention to detail, collaboration