Nift

Data Engineer

Full-time

Location: Remote • 🇺🇸 United States

Job Level

Mid-Level / Senior

Tech Stack

Airflow, Amazon Redshift, AWS, Cloud, Docker, ETL, Kafka, Kubernetes, PySpark, Spark, SQL, Terraform

About the role

  • Architecture & storage: Design and implement our data storage strategy (warehouse, lake, transactional stores) with scalability, reliability, security, and cost in mind
  • Pipelines & ETL: Build and maintain robust data pipelines (batch/stream), including orchestration, testing, documentation, and SLAs
  • Reliability & cost control: Optimize compute/storage (e.g., spot, autoscaling, lifecycle policies) and reduce pipeline fragility
  • Engineering excellence: Refactor research code into reusable components, enforce repo structure, testing, logging, and reproducibility
  • Cross-functional collaboration: Work with data science, analytics, and engineering teams to turn prototypes into production systems; provide mentorship and technical guidance
  • Roadmap & standards: Drive the technical vision for data platform capabilities and establish architectural patterns that become team standards

Requirements

  • Experience: 5+ years in data engineering, including ownership of data infrastructure for large-scale systems
  • Software engineering strength: Strong coding, debugging, performance analysis, testing, and CI/CD discipline; reproducible builds
  • Cloud & containers: Production experience on AWS, Docker + Kubernetes (EKS/ECS or equivalent)
  • IaC: Terraform or CloudFormation for managed, reviewable environments
  • Data engineering: Expert SQL, data modeling, schema design, modern orchestration (Airflow/Step Functions) and ETL tools
  • Warehouses & data lakes: Databricks (required), Spark, Redshift, and data lake formats such as Parquet
  • Monitoring/observability: Data monitoring (quality, drift, performance) and pipeline alerting
  • Collaboration: Excellent communication, comfortable working with data scientists, analysts, and engineers in a fast-paced startup
  • PySpark/Glue/Dask/Kafka: Experience with large-scale batch/stream processing
  • Analytics platforms: Experience integrating third-party data
  • Experience building data products on medallion architectures
  • Be mission-oriented: Proactive and self-driven with a strong sense of initiative; takes ownership, goes beyond expectations, and does what's needed to get the job done

Benefits

  • Competitive compensation, flexible remote work
  • Unlimited Responsible PTO
  • Great opportunity to join a growing, cash-flow-positive company while having a direct impact on Nift's revenue, growth, scale, and future success

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data engineering, SQL, data modeling, schema design, ETL, modern orchestration, performance analysis, debugging, testing, CI/CD
Soft skills
cross-functional collaboration, excellent communication, mentorship, technical guidance, proactive, self-driven, ownership, initiative, team standards, engineering excellence