Tech Stack
AWS, Azure, Cloud, Docker, Google Cloud Platform, Grafana, Kafka, Kotlin, Kubernetes, Prometheus, Python, Spark, Terraform
About the role
- Define and own the product vision and roadmap for Kafka/Data Engineering capabilities; translate business needs into technical requirements and prioritized backlogs; ensure clear acceptance criteria and measurable KPIs
- Collaborate with Data, Analytics, Marketing, and Product teams to define event models, SLAs, and integration needs; align stakeholders on priorities, trade-offs, and delivery timelines
- Shape the architecture and evolution of Kafka-based pipelines (topics, partitions, retention, compaction, Connect/Debezium, Streams/ksqlDB); partner with engineers to ensure scalable, secure, and cost-efficient solutions (see the illustrative topic-configuration sketch after this list)
- Drive schema governance (Avro/Protobuf), data quality enforcement, and regulatory compliance (GDPR/CCPA, PII handling); ensure monitoring, observability, and incident management practices are in place
- Own backlog grooming, sprint planning, and delivery tracking; ensure throughput, latency, and consumer lag targets are met; manage risks, dependencies, and SLAs
- Continuously evaluate and drive improvements in reliability, cost-efficiency, and latency; assess new tools, frameworks, and best practices in data streaming and event-driven systems
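For illustration only, here is a minimal sketch of the kind of topic-level decisions referenced above (partition count, retention, compaction), using the confluent-kafka Python client's AdminClient. The broker address, topic names, and configuration values are hypothetical placeholders, not details of this role's actual platform.

```python
# Minimal sketch: provisioning Kafka topics with explicit partitioning,
# retention, and compaction settings via the confluent-kafka AdminClient.
# Broker address, topic names, and config values are illustrative assumptions.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker

topics = [
    # Event stream: sized for throughput, time-based retention (7 days), deletion cleanup.
    NewTopic(
        "web.events.pageviews",          # hypothetical topic name
        num_partitions=12,
        replication_factor=3,
        config={
            "retention.ms": str(7 * 24 * 60 * 60 * 1000),
            "cleanup.policy": "delete",
        },
    ),
    # Changelog-style topic: log compaction keeps only the latest value per key.
    NewTopic(
        "crm.customers.changelog",       # hypothetical topic name
        num_partitions=6,
        replication_factor=3,
        config={"cleanup.policy": "compact"},
    ),
]

# create_topics() is asynchronous; each future resolves when the broker responds.
for topic, future in admin.create_topics(topics).items():
    try:
        future.result()
        print(f"created {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")
```

In practice such settings would typically be managed declaratively (e.g., via Terraform or GitOps) rather than ad-hoc scripts; the sketch only illustrates the trade-offs (partitioning for throughput, retention vs. compaction) the role is expected to reason about with engineers.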
Requirements
- 4+ years of proven experience as a Product Owner or Technical Product Owner in data engineering or streaming domains
- Demonstrated ability to own a product vision and roadmap, align it with business goals, and communicate it effectively to technical and non-technical stakeholders
- Hands-on experience in backlog management, user story writing, prioritization (MoSCoW, WSJF, RICE), and defining acceptance criteria
- Strong experience with Agile/Scrum/Kanban frameworks, backlog grooming, and sprint ceremonies
- Skilled at gathering and refining requirements from diverse stakeholders (data, analytics, product, marketing)
- Ability to translate business outcomes into technical user stories for Kafka/data engineering teams
- Experience in balancing trade-offs (cost, reliability, time-to-market, compliance) and negotiating priorities
- Comfortable defining KPIs, SLAs, and success metrics for platform capabilities
- Strong understanding of event-driven architectures and the Kafka ecosystem (Confluent, Kafka Connect, Schema Registry, Streams/ksqlDB)
- Familiarity with data governance, compliance, and security requirements (GDPR/CCPA, PII handling, encryption, RBAC/ACLs)
- Working knowledge of cloud platforms (AWS/Azure/GCP), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform/Helm)
- Ability to engage in technical discussions on latency, throughput, consumer lag, schema evolution, and cost optimization (a consumer-lag sketch follows this list)
- Excellent written and verbal communication skills (English C1 or higher)
- Proven ability to present technical concepts in business language and influence decisions
- Experience in cross-functional leadership within engineering squads or consulting/client-facing roles
- Preferred: experience with platform product ownership, observability/reliability practices (Prometheus, Grafana, Datadog), exposure to Customer Data Platforms (mParticle) or tag management systems (Tealium), and background in data engineering/software development (Python, Kotlin, Spark/Flink)
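As a hedged illustration of the consumer-lag discussions mentioned above, the sketch below computes per-partition lag (log-end offset minus committed offset) with the confluent-kafka Python client. The broker, consumer group, and topic names are assumptions for the example, not details from this posting.

```python
# Minimal sketch: measuring per-partition consumer lag as
# (log-end offset) - (committed offset). Broker, group id, and topic
# are illustrative assumptions.
from confluent_kafka import Consumer, TopicPartition

conf = {
    "bootstrap.servers": "localhost:9092",  # assumed broker
    "group.id": "analytics-pipeline",       # hypothetical consumer group
    "enable.auto.commit": False,
}
consumer = Consumer(conf)

topic = "web.events.pageviews"               # hypothetical topic
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p) for p in metadata.topics[topic].partitions]

# committed() returns the group's last committed offset per partition.
committed = consumer.committed(partitions, timeout=10)

total_lag = 0
for tp in committed:
    # High watermark = offset of the next message to be written to the partition.
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    committed_offset = tp.offset if tp.offset >= 0 else low  # no commit yet
    lag = high - committed_offset
    total_lag += lag
    print(f"partition {tp.partition}: lag={lag}")

print(f"total lag for group '{conf['group.id']}': {total_lag}")
consumer.close()
```

In production this metric is usually exposed through monitoring tooling (e.g., Prometheus exporters feeding Grafana dashboards, as listed in the tech stack) rather than ad-hoc scripts; the sketch only shows what the lag KPI actually measures.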