Quantiphi

Data Engineer

Employment Type: Full-time

Location Type: Remote

Location: Canada

About the role

  • Lead and guide the design and implementation of scalable streaming data pipelines.
  • Engineer and optimize real-time data solutions using frameworks such as Apache Kafka, Flink, and Spark Streaming.
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset.
  • Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies.
  • Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads.
  • Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment.
  • Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response.
  • Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work.
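The data-quality responsibilities above (schema contracts, data assertions, and freshness/completeness checks on streaming paths) can be sketched in a few lines. This is a minimal, illustrative example in plain Python; the schema, field names, and thresholds are assumptions for the sketch, not part of any specific pipeline:

```python
import time

# Hypothetical schema contract for an event stream: field name -> expected type.
# All names and thresholds here are illustrative.
ORDER_SCHEMA = {"order_id": str, "amount": float, "event_ts": float}

def validate_record(record: dict, schema: dict, max_age_s: float = 60.0) -> list:
    """Return a list of data-quality violations for one streaming record."""
    violations = []
    # Completeness and type checks against the schema contract.
    for field, expected_type in schema.items():
        if field not in record:
            violations.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            violations.append(f"bad type for {field}: {type(record[field]).__name__}")
    # Freshness check: flag events older than the allowed lag.
    ts = record.get("event_ts")
    if isinstance(ts, float) and time.time() - ts > max_age_s:
        violations.append("stale event: exceeds freshness window")
    return violations

good = {"order_id": "A1", "amount": 9.99, "event_ts": time.time()}
bad = {"order_id": "A2", "amount": "9.99"}  # wrong type, missing timestamp

print(validate_record(good, ORDER_SCHEMA))  # []
print(validate_record(bad, ORDER_SCHEMA))
```

In production such checks would typically live in a validation framework (e.g. Great Expectations) or run as assertions inside the stream processor before records reach downstream consumers.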

Requirements

  • Principal Data Engineer with at least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems.
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor.
  • Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming.
  • Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines.
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads.
  • Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations).
  • Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools.
  • Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments.
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi.
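The observability requirement above asks for defined SLOs on latency and throughput with alerting on breaches. A minimal sketch of such an SLO evaluation, using the nearest-rank p95 and thresholds that are purely illustrative assumptions:

```python
# Illustrative SLO targets for a streaming job; values are assumptions
# for the sketch, not tied to any specific platform.
LATENCY_SLO_MS = 250.0   # target p95 end-to-end latency
THROUGHPUT_SLO = 1000    # minimum events processed per window

def evaluate_slos(latencies_ms: list, events_in_window: int) -> dict:
    """Compare observed metrics to SLO targets; breached flags feed alerting."""
    ranked = sorted(latencies_ms)
    # p95 via the nearest-rank method: the value at rank ceil(0.95 * n).
    p95 = ranked[max(0, int(0.95 * len(ranked)) - 1)]
    return {
        "p95_latency_ms": p95,
        "latency_ok": p95 <= LATENCY_SLO_MS,
        "throughput_ok": events_in_window >= THROUGHPUT_SLO,
    }

report = evaluate_slos([120.0] * 95 + [300.0] * 5, events_in_window=1500)
print(report)
```

In practice these metrics would be scraped from the streaming framework (e.g. Flink or Kafka consumer-lag metrics) into a monitoring stack, with the boolean breach flags wired to alerts and dashboards.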

Benefits
  • Make an impact at one of the world’s fastest-growing AI-first digital engineering companies.
  • Upskill and discover your potential as you solve complex challenges in cutting-edge areas of technology alongside passionate, talented colleagues.
  • Work where innovation happens: collaborate with disruptive innovators in a research-focused organization with 60+ patents filed across various disciplines.
  • Stay ahead of the curve: immerse yourself in breakthrough AI, ML, data, and cloud technologies and gain exposure working with Fortune 500 companies.

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
streaming data pipelines, real-time data solutions, Apache Kafka, Flink, Spark Streaming, event-driven architectures, cloud-native technologies, data quality practices, CI/CD integration, Lakehouse architectures
Soft skills
leadership, mentoring, collaboration, constructive feedback, peer reviews, communication, problem-solving, organizational skills, proactive monitoring, incident response