
Senior Data Engineer
hatch I.T.
Full-time
Location Type: Hybrid
Location: Somerville, Massachusetts, United States
Salary
💰 $120,000 - $160,000 per year
About the role
- Design and implement robust, cloud-based data storage solutions, optimizing schemas for multi-tenant environments while ensuring data accessibility, security, and a high standard of trust and transparency
- Develop, deploy, and maintain resilient ETL/ELT pipelines for both real-time streaming and batch processing, ensuring seamless data flow from raw ingestion to production-ready applications
- Build and manage data access layers, including REST APIs and streaming services, to empower downstream users
- Drive data governance and best practices: Contribute across teams to recommend tools, processes, and best practices for maintaining data health, integrity, and security
- Support AI operations (MLOps) by managing versioning, containerization, and deployment of AI models
- Build monitoring and alerting systems to track data health and system performance, proactively identifying and remediating bottlenecks
Requirements
- Bachelor’s degree or higher in Computer Science, Engineering, or Data Science
- 5+ years of professional experience in data engineering or a related role
- A strong foundation in Python (or equivalent), including testing frameworks (e.g., pytest) and ORMs (e.g., SQLAlchemy)
- You understand modularity and how to define clear scopes and responsibilities within a large codebase
- Proven experience architecting scalable relational and non-relational (SQL/NoSQL) schemas
- You manage the end-to-end database lifecycle, from initial design to production maintenance
- Expertise in maximizing system performance through advanced query tuning, strategic indexing, and execution plan analysis to eliminate technical bottlenecks
- Experience with one or more cloud-based databases (e.g., AWS RDS, Azure Database)
- You are comfortable configuring compute resources, backups, and geolocation requirements
- Experience building resilient pipelines using frameworks such as Dagster or Apache Airflow
- You have a track record of maintaining data health for both real-time streaming and batch processing
- A strong understanding of how data infrastructure integrates into the broader application architecture
- Experience with modern software development practices, including version control (Git), CI/CD pipelines, and a commitment to high-quality, maintainable code
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, ETL, ELT, REST APIs, MLOps, SQL, NoSQL, query tuning, strategic indexing, Apache Airflow
Soft Skills
data governance, collaboration, problem-solving, modularity, communication, proactive identification, responsibility definition, best practices, data integrity, trust and transparency
Certifications
Bachelor’s degree in Computer Science, Bachelor’s degree in Engineering, Bachelor’s degree in Data Science