Hultafors Group

Data Engineer

Full-time

Location Type: Hybrid

Location: Hultafors, Sweden

About the role

  • Build and run our central Data Foundation platform, the single source of truth for the organization
  • Work mainly in Azure with a lakehouse architecture (Azure Data Lake, Data Factory, Databricks) to collect, transform and consolidate data from ERPs, CRMs and other systems
  • Enable analytics and Power BI reporting for Finance, Sales, Logistics, Sustainability, Management and other functions
  • Design, develop and maintain data pipelines and models in our Data Foundation platform
  • Build and optimize ingestion in Azure Data Factory from ERPs, CRMs and internal/external sources
  • Develop and maintain transformation logic in Databricks (SQL and Python) and implement scalable lakehouse / dimensional models
  • Profile, map and validate data to ensure high data quality and consistency for downstream analytics
  • Monitor and operate pipelines and platform components (scheduling, performance, availability, error handling) according to SLAs
  • Troubleshoot and resolve incidents, perform root cause analysis and implement preventative improvements
  • Act as a 2nd/3rd line expert for data platform and pipeline issues
  • Contribute to architecture, standards, reusable components and best practices
  • Collaborate with business stakeholders, application owners, BI developers, data analysts, architects and vendors to translate requirements into data solutions
  • Create and maintain technical documentation for pipelines, datasets and models

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, Mathematics or a similar field, or equivalent practical experience
  • 3–5+ years of experience as a Data Engineer, BI/Data Warehouse Developer or in a similar role
  • Hands-on experience with Azure Data Lake, Azure Data Factory and Databricks
  • Strong skills in building and operating batch and/or streaming data pipelines in a lakehouse or data warehouse
  • Solid SQL and Python skills for data engineering, ideally in Databricks
  • Good understanding of relational/analytical databases and performance optimization
  • Knowledge of data governance, data quality, security and handling of sensitive data
  • Experience with monitoring, logging, alerting and ITSM processes (incident/problem/change management)
  • Familiarity with CI/CD, version control and automated testing for data pipelines

Benefits

  • Competitive salary
  • Flexible working hours
  • Professional development opportunities

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, data pipelines, SQL, Python, data modeling, data transformation, data quality, performance optimization, data governance, monitoring and logging
Soft Skills
collaboration, troubleshooting, root cause analysis, communication, problem-solving
Certifications
Bachelor’s degree in Computer Science, Bachelor’s degree in Information Systems, Bachelor’s degree in Engineering, Bachelor’s degree in Mathematics