
Principal Data Platform Engineer
Simple Machines
Full-time
Location Type: Hybrid
Location: Sydney • Australia
About the role
- Own the end-to-end architecture of modern, cloud-native data platforms
- Design scalable data ecosystems using **data mesh, data products, and data contracts** (a minimal contract sketch follows this list)
- Make high-impact architectural decisions across ingestion, storage, processing, and access layers
- Ensure platforms are secure, compliant, and production-grade by design
- Design and deliver cloud-native data platforms using **Databricks, Snowflake, AWS, and GCP**
- Integrate deeply with client systems to enable scalable, consumer-oriented data access
- Build and optimise **batch and real-time pipelines**
- Work with streaming and event-driven tech such as **Kafka, Flink, Kinesis, Pub/Sub** (a streaming ingestion sketch follows this list)
- Orchestrate workflows using **Airflow, Dataflow, Glue**
- Process and transform large datasets using **Spark and Flink**
- Work across relational, NoSQL, and analytical stores (Postgres, BigQuery, Snowflake, Cassandra, MongoDB)
- Optimise storage formats and access patterns (Parquet, Delta, ORC, Avro)
- Implement secure, compliant data solutions with **security by design**
- Embed governance without killing developer velocity
- Translate business needs into pragmatic engineering decisions
- Act as a trusted technical advisor, not just an order taker
- Set engineering standards, patterns, and best practices across teams
- Review designs and code, providing clear technical direction and mentorship
- Raise the bar on data quality, testing, observability, and operational excellence
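
To illustrate the data-contract pattern referenced above, here is a minimal, hypothetical sketch in Python: a producer-owned field specification plus a validation gate that consumers can rely on. The contract name, fields, and constraints are made up for illustration and are not part of the role description.

```python
from datetime import datetime
from typing import Any

# Hypothetical contract for an "orders" data product: the producing team
# commits to these fields and types; consumers depend on them not changing.
ORDERS_CONTRACT = {
    "order_id": str,
    "customer_id": str,
    "amount": float,
    "created_at": datetime,
}

def validate_record(record: dict[str, Any]) -> list[str]:
    """Return a list of contract violations for a single record."""
    errors = []
    for field, expected_type in ORDERS_CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    if isinstance(record.get("amount"), float) and record["amount"] < 0:
        errors.append("amount must be non-negative")
    return errors

# Records that break the contract are rejected (or quarantined) before they
# reach downstream consumers.
bad = {"order_id": "A1", "amount": -5.0, "created_at": datetime.now()}
print(validate_record(bad))  # ['missing field: customer_id', 'amount must be non-negative']
```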
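And as a rough sketch of the kind of real-time pipeline work described above, the following PySpark Structured Streaming job reads events from Kafka and lands them as Parquet. The broker address, topic, paths, and event schema are placeholders, not details from this role.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Placeholder event schema; in practice this would come from the data contract.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("created_at", TimestampType()),
])

# Read raw events from a Kafka topic and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

# Write to columnar storage with checkpointing so the stream can recover.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/orders/")                   # hypothetical path
    .option("checkpointLocation", "s3://example-lake/_chk/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```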
Requirements
- Strong **Python and SQL**
- Deep experience with **Spark** and modern data platforms (Databricks / Snowflake)
- Solid grasp of cloud data services (AWS or GCP)
- Demonstrated ownership of large-scale data platform architectures
- Strong data modelling skills and architectural decision-making ability
- Comfortable balancing trade-offs between performance, cost, and complexity
- Built and operated **large-scale data pipelines** in production
- Comfortable with multiple storage technologies and formats
- Infrastructure-as-code experience (**Terraform, Pulumi**); a short IaC sketch follows this list
- CI/CD pipelines using tools like **GitHub Actions, ArgoCD**
- Data testing and quality frameworks (**dbt, Great Expectations, Soda**); an expectation-style check is sketched after this list
- Experience in consulting or professional services environments
- Strong consulting instincts — able to challenge assumptions and guide clients toward better outcomes
- Comfortable mentoring senior engineers and influencing technical culture
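
On the infrastructure-as-code point above, a minimal Pulumi (Python) sketch provisioning a raw-landing bucket for a data lake might look like the following. The resource name, tags, and the choice of AWS are assumptions for illustration only.

```python
import pulumi
import pulumi_aws as aws

# Hypothetical raw-landing bucket for a data lake, tagged for ownership.
raw_bucket = aws.s3.Bucket(
    "raw-data-lake",
    tags={"team": "data-platform", "layer": "raw"},
)

# Expose the bucket name as a stack output for downstream pipelines to reference.
pulumi.export("raw_bucket_name", raw_bucket.id)
```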
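For the data testing and quality frameworks listed above, the snippet below sketches the expectation style those tools (dbt tests, Great Expectations, Soda) share, written as plain Python/pandas checks rather than any specific framework's API, so the function and column names are illustrative only.

```python
import pandas as pd

def check_orders(df: pd.DataFrame) -> dict[str, bool]:
    """Expectation-style checks, roughly analogous to dbt tests or a
    Great Expectations suite: each named check returns pass/fail."""
    return {
        "order_id_not_null": df["order_id"].notna().all(),
        "order_id_unique": df["order_id"].is_unique,
        "amount_non_negative": (df["amount"] >= 0).all(),
        "status_in_allowed_set": df["status"].isin(
            ["placed", "shipped", "returned"]
        ).all(),
    }

# Tiny example dataset with deliberate violations.
df = pd.DataFrame({
    "order_id": ["A1", "A2", "A2"],
    "amount": [10.0, -3.0, 7.5],
    "status": ["placed", "shipped", "unknown"],
})
failed = [name for name, ok in check_orders(df).items() if not ok]
print("failed checks:", failed)
```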
Benefits
- You’ll work on **interesting, high-impact problems**
- You’ll build **modern platforms**, not maintain legacy mess
- You’ll be surrounded by senior engineers who actually know their craft
- You’ll have autonomy, influence, and room to grow
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, SQL, Spark, data mesh, data products, data contracts, large-scale data pipelines, data modelling, infrastructure-as-code, cloud-native data platforms
Soft skills
architectural decision-making, consulting instincts, mentoring, technical advisory, balancing trade-offs, influencing technical culture, translating business needs, providing clear technical direction, raising data quality, operational excellence