NuView Treatment Center

Data Engineer

Full-time

Location Type: Remote

Location: United States

Salary

💰 $100,000 - $140,000 per year

About the role

  • Design, build, and maintain scalable data pipelines for clients across industries
  • Architect and optimize cloud data warehouse solutions, adapting to each client's stack, which may include Snowflake, BigQuery, Redshift, Microsoft Fabric, or similar platforms
  • Lead data integration projects from source system to analytical layer, including scoping, delivery, and handoff
  • Work fluidly across a range of modern data tools and platforms as client engagements demand, picking up new technologies quickly and applying best practices regardless of the toolset
  • Collaborate with analysts and data scientists to ensure data is clean, reliable, and well-modeled
  • Champion data quality, testing, and observability best practices across client engagements
  • Produce and maintain clear technical documentation including pipeline architecture, data dictionaries, lineage maps, and runbooks so clients can understand and own their infrastructure long-term
  • Document engineering decisions, standards, and workflows in a way that supports knowledge transfer to both clients and junior team members
  • Research and evaluate new technologies and advocate for tooling investments that benefit the firm
  • Train and mentor junior team members on engineering standards, pipeline design, and best practices
  • Participate in client-facing communication, including requirements gathering and progress updates
  • Provide flexible support when capacity allows: contribute to analyst-side deliverables such as Power BI dashboard development, ad-hoc reporting, or data visualization

Requirements

  • Bachelor's Degree in Computer Science, Engineering, Mathematics, or a related field
  • 2–5+ years of relevant data engineering or software engineering experience
  • SQL Expert: complex query authoring, query optimization, stored procedures
  • Python Required: pipeline scripting, automation, data processing
  • Transformation Tools: dbt required; Spark experience a plus
  • Ingestion Tools: Fivetran, Airbyte, Rivery, Microsoft Fabric Data Factory, or similar
  • Orchestration: Airflow, Prefect, Azure Data Factory, Microsoft Fabric, or equivalent
  • Cloud Platforms: Azure (preferred), AWS, or GCP experience
  • Data Warehouses: Snowflake, BigQuery, Redshift, Microsoft Fabric, Azure Synapse, or equivalent
  • Version Control: Git required; branching strategies, pull requests, and code review workflows
  • Strong communication skills with the ability to translate technical concepts for non-technical stakeholders
  • Self-starter who thrives in a remote environment and can manage multiple client workstreams
  • Player-coach mindset: capable of leading projects while growing junior teammates
  • Intellectually curious about evolving data tooling, architecture patterns, and AI-augmented engineering

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
SQL, Python, dbt, Spark, Fivetran, Airbyte, Rivery, Airflow, Azure Data Factory, Git
Soft Skills
strong communication skills, self-starter, player-coach mindset, intellectually curious
Education
Bachelor's Degree in Computer Science, Bachelor's Degree in Engineering, Bachelor's Degree in Mathematics