Tech Stack
Airflow, Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, Kafka, Python, Spark, SQL
About the role
- Technical owner of data pipelines and infrastructure: design, build, and optimise robust systems to ingest, process, and deliver data at scale
- Build and maintain scalable pipelines that ingest, transform, and process automotive parts and vehicle data from multiple sources
- Partner with Product, Engineering, and Customer teams to understand data needs and ensure customer success
- Implement best practices: schema management/versioning, pipeline version control, automated validation, lineage tracking, orchestration, testing, and monitoring
- Optimise performance and cost of pipelines and infrastructure while maintaining reliability and scalability
- Develop visualisations and APIs to surface insights and support customer-facing features
- Collaborate cross-functionally to solve customer problems and meet business requirements
- Build reusable data models and transformation logic to generalise structures for new customers and markets
- Act as a technical voice for data within the organisation, shaping Partly's data platform
Requirements
- Experience in data engineering — designing, building, and scaling pipelines and ETL/ELT processes
- Proficiency in SQL and Python (or similar programming languages used for data engineering)
- Hands-on experience with modern data tools (e.g., Airflow, dbt, Spark, Kafka, Snowflake, BigQuery, Redshift)
- Strong understanding of databases and data modelling (relational and non-relational)
- Experience with cloud platforms (AWS, GCP, or Azure) and data infrastructure management
- Detail-oriented systems thinker — able to design for scale and reusability
- Strong communicator — able to explain complex data topics to technical and non-technical audiences
- (Preferred, but not essential) Experience working with e-commerce, automotive, or cataloguing data
- Ownership mindset — driven to solve problems end-to-end and continuously improve systems