F5

Principal Product Manager – AI Data Quality

Full-time

Location Type: Hybrid

Location: Seattle, Washington, United States

Salary

💰 $156,800 - $235,200 per year

About the role

Build the AI-Ready Data Quality Platform

  • Define and ship native data quality capabilities inside the Databricks Lakehouse
  • Productize policies and controls within Unity Catalog (lineage, access, schema enforcement)
  • Embed data contracts and validation logic directly into pipelines
  • Partner with data engineering to integrate dbt-based transformation layers into quality frameworks
  • Drive metadata, lineage, and semantic standardization as first-class platform features

Operationalize Data Quality in the AI Data Fabric

  • Design real-time anomaly detection systems (statistical + ML-driven)
  • Build upstream schema validation into CI/CD workflows (shift-left quality)
  • Define SLOs/SLAs for data products
  • Enable automated drift detection for training and inference datasets
  • Implement observability across streaming and batch architectures

Drive Data Ownership as a Product Discipline

  • Establish a data product ownership model across service teams
  • Define what “production-grade data” means for AI use cases
  • Build self-service tooling for teams to monitor and certify their data
  • Incentivize measurable quality accountability at the domain level
  • Define how governed datasets become AI-ready assets
  • Enable traceability from raw sources → curated feature sets → model inputs
  • Align catalog metadata with AI feature stores and inference pipelines
  • Partner with ML teams to support model reproducibility and dataset versioning

Requirements

  • 5+ years in Product Management for Data Platforms, Analytics, or AI Infrastructure
  • Deep working knowledge of:
      ◦ Databricks Lakehouse architecture
      ◦ Unity Catalog governance constructs
      ◦ dbt transformation workflows
      ◦ CI/CD patterns for data pipelines
      ◦ Data observability and monitoring patterns
  • Strong SQL fluency and comfort reading Python/Scala data pipeline code
  • Experience defining data contracts and schema evolution strategies
  • Understanding of streaming frameworks (Kafka, Spark Structured Streaming, etc.)
  • Experience supporting AI/ML workloads in production environments
  • Bonus:
      ◦ Experience with modern data observability platforms (Monte Carlo, Bigeye, etc.)
      ◦ Familiarity with feature stores and model lifecycle tooling
      ◦ Knowledge of domain-oriented data mesh architectures

Benefits

  • Incentive compensation
  • Bonus
  • Restricted stock units
  • Benefits

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data quality, data governance, SQL, Python, Scala, dbt, CI/CD, anomaly detection, data observability, streaming frameworks
Soft Skills
product management, data ownership, collaboration, accountability, communication