Northbeam

Senior Software Engineer, Python, Data & Infrastructure

Full-time

Location: 🇺🇸 United States

Salary

💰 $170,000 - $200,000 per year

Job Level

Senior

Tech Stack

Airflow, AWS, BigQuery, Cloud, Distributed Systems, Docker, ERP, ETL, Google Cloud Platform, GraphQL, Kubernetes, Python, SQL, Terraform

About the role

  • Northbeam is fundamentally a data product company. We don't sell shoes, or ads, or games. We sell data: quality integrations with a variety of platforms, fresh and reliable data pulls, robust data ingest APIs, correct aggregations, and algorithmic insights on top of that data, all packaged into a user-facing application.
  • High-quality, reliable data integration is at the core of what we do, and your work will directly shape the company’s success. But building great data products also means building great infrastructure: scalable systems, resilient pipelines, and reliable platforms that empower everything else.
  • We are looking for a Senior Software Engineer with experience in data integration, API-based ETL pipelines, and cloud-native architecture, plus a strong interest in infrastructure engineering. You’ll help us not just ship data pipelines, but also design the systems that make them observable, secure, and resilient at scale.
  • You’ll work with a small engineering team to create a platform that consolidates third-party data from advertising platforms, e-commerce systems, customer data warehouses, ERP, POS, and CRM systems. Along the way, you’ll tackle questions of scalability, multi-tenancy, system reliability, data validation, cost optimization, and developer ergonomics.
  • Curiosity, a willingness to do the hard thing, and enjoyment of a startup pace of development will be key to success in this role.
  • This is a startup. The only constant is change. Early on, you can expect to:
  • Design and implement scalable, high-performance data pipelines to ingest and transform data from a variety of sources, with reliability and observability baked in.
  • Engineer the infrastructure behind those pipelines, including containerized workloads, orchestration, monitoring, and CI/CD that enables the team to move quickly without breaking things.
  • Build and maintain APIs that enable flexible, secure, and tenant-aware data integrations with external systems.
  • Balance event-driven and batch processing architectures, ensuring data freshness, correctness, and cost efficiency.
  • Implement observability, monitoring, and alerting to track system health, failures, and performance issues—covering both data quality and infrastructure reliability.
  • Contribute to platform resilience by designing for fault tolerance, autoscaling, and graceful failure handling in a multi-tenant cloud environment.
  • Collaborate across data engineering, infrastructure, and product teams to ensure that the integration platform is flexible and extensible, and makes it easy to onboard new data sources.
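To give a concrete flavor of the pipeline work described above, here is a minimal, hypothetical sketch of an API-based ingest step with retries and basic observability baked in. The flaky source, field names, and metrics class are illustrative inventions, not Northbeam's actual code:

```python
import time
from dataclasses import dataclass, field

@dataclass
class PipelineMetrics:
    """Minimal observability: counts records fetched, retries, and errors seen."""
    fetched: int = 0
    retries: int = 0
    errors: list = field(default_factory=list)

def fetch_with_retry(fetch, metrics, max_attempts=3, backoff_s=0.0):
    """Call a fetch() callable, retrying transient errors with linear backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            records = fetch()
            metrics.fetched += len(records)
            return records
        except ConnectionError as exc:
            metrics.errors.append(str(exc))
            if attempt == max_attempts:
                raise  # surface the failure after exhausting retries
            metrics.retries += 1
            time.sleep(backoff_s * attempt)

# Simulated flaky upstream API: fails twice, then returns two rows.
_calls = {"n": 0}
def flaky_source():
    _calls["n"] += 1
    if _calls["n"] < 3:
        raise ConnectionError("transient upstream failure")
    return [{"spend": 12.5, "platform": "ads"}, {"spend": 7.5, "platform": "ads"}]

metrics = PipelineMetrics()
rows = fetch_with_retry(flaky_source, metrics)
total_spend = sum(r["spend"] for r in rows)  # a trivial aggregation step
print(total_spend, metrics.retries)          # 20.0 2
```

In a production system the fetch would hit a real REST or GraphQL endpoint and the metrics would flow to a monitoring backend, but the shape is the same: transient failures are expected and instrumented, not exceptional.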

Requirements

  • 5+ years of experience in data engineering, software engineering, or infrastructure-focused engineering, with a focus on ETL, APIs, and cloud-native orchestration.
  • Strong proficiency in Python.
  • Experience with API-based ETL, handling REST, GraphQL, and webhooks.
  • Experience implementing authentication flows.
  • Proficiency in SQL and BigQuery.
  • Experience with orchestration frameworks (e.g., Airflow, Prefect) to manage and monitor complex workflows.
  • Familiarity with containerization (Docker, Kubernetes) and cloud infrastructure (GCP/AWS) to deploy and scale workloads.
  • Strong grounding in infrastructure as code (Terraform, Pulumi, CloudFormation) for repeatable, auditable environments.
  • Ability to drive rapid development while ensuring maintainability, balancing short-term delivery with long-term platform stability.