
Software Engineer, Data Infrastructure
Docker, Inc.
Full-time
Location Type: Remote
Location: Remote • Washington • 🇺🇸 United States
Salary
💰 $132,000 - $181,500 per year
Job Level
Junior / Mid-Level
Tech Stack
Airflow, Amazon Redshift, Apache, AWS, Azure, BigQuery, Cloud, Docker, Go, Google Cloud Platform, Kotlin, Python, SQL
About the role
- Contribute to the design and implementation of highly scalable data infrastructure leveraging Snowflake, AWS, Airflow, DBT, and Sigma.
- Implement and maintain end-to-end data pipelines supporting batch and real-time analytics across Docker's product ecosystem.
- Follow and contribute to the technical standards for data quality, testing, monitoring, and operational excellence.
- Design, build, and maintain robust data processing systems, focusing on data volume and latency requirements.
- Implement data transformations and modeling using DBT for analytics and business intelligence use cases.
- Develop and maintain data orchestration workflows using Apache Airflow under the direction of senior engineers (see the illustrative sketch after this list).
- Assist with optimizing Snowflake performance and cost efficiency.
- Contribute to building data APIs and services to enable self-service analytics.
- Work with Product, Engineering, and Business teams to understand data requirements and translate them into technical tasks.
- Support Data Scientists and Analysts by providing access to reliable, high-quality data.
- Collaborate with business teams to deliver and maintain accurate reporting and operational dashboards.
- Engage with Security and Compliance teams to support data governance implementation.
- Assist with monitoring, alerting, and incident response for critical data systems.
- Support the implementation of data quality frameworks and automated testing in data pipelines.
- Participate in performance optimization and cost management initiatives.
- Contribute to troubleshooting and resolution of technical issues affecting data availability and accuracy.
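To make the orchestration and transformation bullets above more concrete, here is a minimal, hypothetical sketch of an Airflow DAG that stages raw data and then runs and tests DBT models. The DAG name, loader script, and DBT project path are illustrative assumptions, not Docker's actual pipeline.

```python
# Hypothetical sketch of an orchestration workflow like the ones described above.
# All task names, scripts, and paths are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_product_analytics",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Stage raw product events into the warehouse (placeholder loader script).
    load_raw_events = BashOperator(
        task_id="load_raw_events",
        bash_command="python scripts/load_raw_events.py",
    )

    # Build analytics models with DBT on top of the staged data.
    run_dbt_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    # Run DBT tests so data quality issues are caught before dashboards refresh.
    test_dbt_models = BashOperator(
        task_id="test_dbt_models",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    load_raw_events >> run_dbt_models >> test_dbt_models
```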
Requirements
- 2+ years of software engineering experience, preferably with a focus on data engineering or analytics systems.
- Experience with a major cloud platform (AWS, GCP, or Azure), including basic data services (S3, GCS, etc.).
- Proficiency with SQL and experience with a cloud data warehouse (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with data transformation tools (e.g., DBT) and modern BI platforms (e.g., Sigma).
- Familiarity with workflow orchestration tools (e.g., Apache Airflow, Dagster).
- Proficiency in Python, Go, Kotlin, or other programming languages used in data engineering.
- Familiarity with version control (Git) and modern software development practices (CI/CD).
- Basic understanding of data warehousing concepts (e.g., dimensional modeling) and analytics architectures.
- An eagerness to learn about distributed data systems and stream processing concepts.
- Foundational knowledge of data quality and testing principles.
- Strong communication and collaboration skills.
- Ability to take direction and work effectively as part of a team.
- A proactive attitude toward problem-solving and self-improvement.
Benefits
- Freedom & flexibility; fit your work around your life
- Designated quarterly Whaleness Days
- Home office setup; we want you comfortable while you work
- 16 weeks of paid parental leave
- Technology stipend equivalent to $100 net/month
- PTO plan that encourages you to take time to do the things you enjoy
- Quarterly, company-wide hackathons
- Training stipend for conferences, courses and classes
- Equity; we are a growing start-up and want all employees to have a share in the success of the company
- Docker Swag
- Medical benefits, retirement and holidays vary by country
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, data pipelines, data transformations, data modeling, SQL, Python, Go, Kotlin, data quality, automated testing
Soft skills
communication, collaboration, problem-solving, teamwork, proactive attitude, eagerness to learn, taking direction