Tech Stack
Ansible, Cloud, Distributed Systems, Docker, Grafana, Hadoop, HDFS, Java, Kubernetes, MapReduce, Microservices, NoSQL, Prometheus, Python, SDLC, Spark
About the role
- Join us as we build the future of data at scale.
- We’re looking for a hands-on technical leader to guide our Data Quality Engineering team in designing cloud-native platforms, distributed pipelines, and scalable microservices that power mission-critical applications like WorldCat.
- In this role, you’ll shape technical strategy, drive DevOps best practices, and partner across teams to deliver high-impact solutions.
- You’ll also mentor engineers, foster a culture of innovation, and ensure our platforms remain resilient, modern, and world-class.
Requirements
- Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on software engineering experience in distributed systems, cloud infrastructure, or data transformation.
- 5+ years of technical team leadership, including mentoring engineers and leading cross-functional projects.
- Deep experience with data transformation, matching, and merging pipelines across diverse data formats.
- Proven experience implementing CI/CD pipelines, infrastructure as code, and containerization (Docker, Kubernetes).
- Advanced knowledge of NoSQL databases, big data ecosystems (Hadoop, HDFS, MapReduce, Spark), and data lake architecture.
- Strong programming skills in Java and Python.
- Experience with real-time, event-driven systems and RESTful API design.
- Familiarity with observability tools (Prometheus, Grafana, etc.) to monitor pipeline performance and reliability.
- Familiarity with Agile/Scrum methodologies, with the ability to serve as Scrum Master or Agile facilitator.
- Strong cross-functional communication skills and experience managing production operations and incident response.
- Understanding of data governance, privacy, and regulatory compliance requirements in large-scale data systems.
- Experience integrating AI/ML or LLM technologies into production workflows is a plus.