Tech Stack
Amazon Redshift, Apache, AWS, EC2, ETL, Gradle, Hadoop, Java, Jenkins, JUnit, Linux, Maven, Mockito, MySQL, Oracle, Postgres, Python, Scala, Spark, SQL, Terraform
About the role
- This Principal Data Engineer role will work exclusively within our AdTech business unit whilst ensuring alignment with best practices, standards, and policies.
- Work within the AdTech squad, focusing on priorities that enable consistent, efficient, and effective use of the Data Lake, Enterprise Data Warehouse, and marketing tools.
- Oversee impact assessments, discovery, technical design, delivery, and release into production, often working cross-functionally and collaboratively.
- Technically manage AdTech priorities across various technologies, collaborating with architects, product owners, and delivery leads to ensure timely, efficient solution creation and maintenance.
- Interact with the core Data Engineering team, collaborating on and aligning with agreed solutions, policies, and delivery mechanisms.
- Lead daily stand-ups and help resolve development or support challenges, consulting with Architects, Product Owners, and Delivery Managers on prioritisation.
- Build and maintain strong business relationships with third-party vendors, ensuring cost-effective and performant technology use, and ensure new solutions comply with regulatory legislation such as GDPR.
- Provide out-of-hours application support to ensure the shop stays open and fully functional.
Requirements
- Proficient in Python or Scala.
- Familiarity with Java.
- Experience with a marketing technology stack and third-party tools.
- Broad experience working within AWS, from infrastructure (VPC, EC2, security groups, S3, etc.) to AWS data platforms (Redshift, RDS, Glue, etc.).
- Working knowledge of infrastructure automation with Terraform.
- Working knowledge of test tools and frameworks such as JUnit, Mockito, ScalaTest, and pytest.
- Working knowledge of build tools such as Maven, Gradle, SBT, or equivalent.
- Understanding of source control management and associated tools such as Git/Bitbucket.
- Experience using CI/CD tools such as Jenkins, or an understanding of their role.
- Experience with Apache Spark or Hadoop.
- Experience in building data pipelines.
- Experience in warehouse design, ETL pipelines, and data modelling.
- Good knowledge of designing, building, using, and maintaining REST APIs.
- Good SQL skills with any mainstream database, such as Teradata, Oracle, MySQL, or Postgres.
- Proficient Linux skills.
- Exposure to Agile/Scrum, using tools such as Jira and Confluence.
- Strong analytical, technical, organisational, and communication skills.
- Flexibility to learn and work with innovative technologies.
- A team player with line-management experience and strong soft skills.