
Manager, Data Operations – Engineering
Pfizer
Full-time
Location Type: Hybrid
Location: Chennai, India
About the role
- Manage a hands‑on data engineering operations team responsible for supporting production data pipelines, databases, and AI data products.
- Ensure issues are investigated and resolved using strong engineering discipline, clear ownership, and consistent technical standards.
- Remain actively hands‑on in complex investigations involving Python code, SQL logic, data pipelines, transformations, and database behavior.
- Review code, debug data issues, validate fixes, and guide engineers toward durable solutions.
- Drive deep technical root cause analysis across ingestion, transformation, and consumption layers.
- Define, enforce, and evolve data engineering coding standards, including Python and SQL best practices, version control discipline, and code review expectations.
- Define, implement, and improve SLAs for data operations by reducing manual intervention, improving automation, and raising engineering quality.
- Serve as the front‑line technical leader for AI and data‑driven applications, supporting model outputs, data pipelines feeding AI solutions, feature/embedding generation, and downstream data consumers.
- Own operational reliability across data platforms and databases, including schema management, query performance, access patterns, and data correctness.
- Provide clear, technically grounded communication to stakeholders regarding data issues, impacts, and remediation actions.
Requirements
- Bachelor’s degree (Master’s preferred) in Computer Science, Data Engineering, or a related technical field.
- 5–10 years of hands‑on data engineering experience, including operating and supporting production data systems.
- Experience leading, or serving as a technical lead for, data engineering or data operations teams.
- Strong hands‑on programming experience with one or more general‑purpose languages, including Python, SQL, Java, Scala, PySpark, C, C++, C#, Swift/Objective‑C, or JavaScript.
- Proven experience with data preparation, ingestion, and ETL/ELT frameworks, such as Airflow, dbt, Fivetran, Kafka, Informatica, Talend, Alteryx, or equivalent technologies.
- Strong experience with software engineering best practices, including version control (Git, TFS, Subversion), CI/CD and build tooling (Jenkins, Maven, Gradle, or similar), automated unit testing, and DevOps practices.
- Hands‑on experience with cloud data platforms and storage technologies, such as Snowflake, Databricks, Amazon S3, Redshift, BigQuery, or equivalent platforms.
- Demonstrated experience architecting and operating end‑to‑end data pipelines, using cloud‑based and/or on‑premises stacks.
- Prior hands‑on experience as a data modeler is required, including dimensional modeling and analytical data model design.
- Strong understanding of database management fundamentals, including schemas, tables, views, permissions, query performance, and operational troubleshooting.
- Proven ability to diagnose and resolve data quality issues at the engineering level, including logic errors, transformation issues, and source‑to‑target alignment.
Benefits
- Up to 20% travel may be required based on delivery and project priorities.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, SQL, Java, Scala, PySpark, C, C++, C#, Swift, JavaScript
Soft Skills
leadership, communication, problem-solving, technical analysis, collaboration, ownership, attention to detail, stakeholder management, mentoring, quality assurance
Certifications
Bachelor's degree in Computer Science, Master's degree in Computer Science, Data Engineering certification