Blue Coding

Data Architect

full-time

Location Type: Remote

Location: Remote • 🇺🇸 United States

Job Level

Mid-Level, Senior

Tech Stack

Amazon Redshift, AWS, ETL, Python, SQL

About the role

  • Design and implement end-to-end data integrations from files and source systems into analytical and operational data stores.
  • Evaluate and recommend tooling, frameworks, and platforms for ingestion, transformation, and orchestration.
  • Set up, build, and maintain ELT/ETL pipelines primarily using AWS services, with optional use of Microsoft tooling when appropriate.
  • Develop ingestion patterns for structured and semi-structured data (JSON, CSV, Parquet, Avro), including CDC and streaming integrations.
  • Create and manage data models, staging layers, and integration structures that support analytics and downstream applications.
  • Implement orchestration, scheduling, retries, and error handling to ensure scalable, stable, and cost-efficient pipelines (a minimal sketch of this pattern follows this list).
  • Partner with stakeholders to define data requirements, SLAs, and integration contracts.
  • Design and implement monitoring, alerting, and observability, including metrics, logging, tracing, lineage, and data-quality controls.
  • Apply data governance practices: metadata management, access controls, retention, and auditability.
  • Support CI/CD pipelines for data workloads and contribute to Infrastructure-as-Code when relevant.
  • Troubleshoot production issues and drive root-cause analyses with clear follow-through.
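
For illustration only, here is a minimal Python sketch of the retry and error-handling pattern referenced in the responsibilities above. The `extract_batch` and `load_to_warehouse` functions are hypothetical stand-ins for real extract and load steps, not part of any specific AWS service or Blue Coding system.

```python
"""Minimal sketch of a pipeline step with retries and error handling.

Illustrative only: extract_batch and load_to_warehouse are hypothetical
placeholders, not a real integration.
"""
import logging
import random
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")


def run_with_retries(step, max_attempts=3, base_delay=2.0):
    """Run a pipeline step, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:  # in practice, catch narrower exceptions
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise  # surface the failure to the orchestrator / alerting
            time.sleep(base_delay * 2 ** (attempt - 1))


def extract_batch():
    """Hypothetical extract step; stands in for reading a file from S3."""
    if random.random() < 0.3:
        raise RuntimeError("transient source error")
    return [{"id": 1, "value": 42}]


def load_to_warehouse(rows):
    """Hypothetical load step; stands in for a Redshift COPY or INSERT."""
    log.info("loaded %d rows", len(rows))


if __name__ == "__main__":
    rows = run_with_retries(extract_batch)
    run_with_retries(lambda: load_to_warehouse(rows))
```

In a managed setup the same retry and alerting behavior would typically come from the orchestrator (for example Step Functions retry policies with CloudWatch alarms) rather than hand-rolled code; the sketch only shows the pattern.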

Requirements

  • 3+ years of experience building and supporting production data pipelines in a Data Warehouse environment.
  • Deep understanding of Data Warehouse modeling (facts/dimensions, SCD Type 2, relational modeling, and performance tuning); an illustrative SCD Type 2 sketch follows this list.
  • Strong experience with AWS data services, ideally including S3, IAM, Lambda, Glue, Athena, Redshift, RDS/DMS, Kinesis, Step Functions, and CloudWatch.
  • Proficiency in SQL and a strong understanding of analytical and relational database concepts.
  • Knowledge of modern storage formats and patterns (delta tables, data lakes).
  • Proficiency in at least one programming language used for data engineering (Python preferred; C# also welcome).
  • Practical knowledge of batch, streaming, and CDC integration patterns.
  • Comfort with Git-based workflows, automated deployments, and CI/CD.
  • Experience building monitoring and observability for pipelines (metrics, alerts, logging, lineage, quality checks).
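
For illustration only, a minimal sketch of the SCD Type 2 pattern named in the requirements above, expressed as SQL embedded in Python. The table and column names (dim_customer, stg_customer, customer_segment, valid_from, valid_to, is_current) are assumptions chosen for the example, not a required schema.

```python
"""Minimal sketch of an SCD Type 2 update, as SQL wrapped in Python.

Illustrative only: table and column names are assumptions for the example.
"""

SCD2_CLOSE_AND_INSERT = """
-- Close out the current row when a tracked attribute has changed.
UPDATE dim_customer d
SET valid_to = CURRENT_DATE, is_current = FALSE
FROM stg_customer s
WHERE d.customer_id = s.customer_id
  AND d.is_current = TRUE
  AND d.customer_segment <> s.customer_segment;

-- Insert a new current row for changed customers (just closed above)
-- and for brand-new customers with no current row at all.
INSERT INTO dim_customer (customer_id, customer_segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.customer_segment, CURRENT_DATE, NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL;
"""

if __name__ == "__main__":
    # In a real pipeline these statements would run against the warehouse,
    # e.g. via a database driver or the Redshift Data API; here we only print them.
    print(SCD2_CLOSE_AND_INSERT)
```

The ordering matters: closing changed rows first means the insert's anti-join on the current row picks up both changed and brand-new customers in one pass.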

Benefits
  • Salary in USD
  • 100% Remote
  • Integration into a high-performing international engineering team
  • Opportunity to shape data integration standards and contribute to architectural decisions
  • Work on meaningful technology that impacts real people’s well-being

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data integration, ELT, ETL, data modeling, SQL, Python, CDC, streaming integrations, performance tuning, data governance
Soft skills
stakeholder partnership, troubleshooting, root-cause analysis, communication