Chainalysis

Staff Software Engineer, Data Solutions

Full-time

Location: 🇺🇸 United States • New York

Salary

💰 $165,000 - $220,000 per year

Job Level

Lead

Tech Stack

AWS • Cloud • Distributed Systems • Docker • Google Cloud Platform • Kafka • Kubernetes • Microservices • PySpark • Python • Solidity • Spark • SQL • Terraform

About the role

  • Design and lead delivery of new platform capabilities that serve mission‑critical investigations and monitoring workflows
  • Operate services that ingest, transform, and serve hundreds of terabytes of data with clear SLOs for latency, freshness, and availability
  • Improve the scalability, performance, and cost efficiency of our data plane and APIs
  • Raise the quality bar across reliability, security, and compliance for both cloud and on‑premises deployments
  • Mentor engineers across teams and influence technical strategy beyond your immediate group
  • Own and evolve backend services powering customer‑facing APIs, usage/billing, alerting, and data observability
  • Lead team and cross-team initiatives end‑to‑end: discovery, architecture, implementation, rollout, and post‑launch learning
  • Architect event‑driven and streaming workflows (e.g., Kafka) with strong data contracts and schema evolution (a minimal sketch follows this list)
  • Drive operational excellence: SLOs, runbooks, on‑call, incident reviews, and capacity plans for high‑QPS systems
  • Partner with product, data engineering/science, and security to translate customer requirements into durable systems
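
Below is a minimal sketch of the kind of backward-compatible event contract the streaming bullet above refers to. It is illustrative only: the event fields, topic name, broker address, and the choice of the kafka-python client are assumptions, not details from this posting.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

from kafka import KafkaProducer  # kafka-python; an assumed client choice

# v2 of an illustrative event contract: "chain" was added as an optional
# field, so older consumers that only know v1 can still parse the payload.
@dataclass
class TransferObserved:
    schema_version: int
    tx_hash: str
    amount_usd: float
    chain: Optional[str] = None  # added in v2; the default keeps v1 readers working

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = TransferObserved(
    schema_version=2,
    tx_hash="0xabc123",  # illustrative value
    amount_usd=1250.0,
    chain="ethereum",
)
producer.send("transfers.observed", asdict(event))  # topic name is illustrative
producer.flush()
```

In practice a schema registry (e.g., Confluent Schema Registry with Avro or Protobuf) would enforce these compatibility rules rather than hand-rolled JSON, but the principle is the same: evolve schemas additively with optional fields so existing consumers keep working.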

Requirements

  • Expert backend engineering experience building cloud‑hosted services and data pipelines on AWS or GCP (bonus: both)
  • Deep proficiency with APIs, streaming systems, and distributed systems (e.g., microservices on Kubernetes)
  • Strong SQL skills and experience with analytical data models (e.g., lakehouse/warehouse patterns; a minimal sketch follows this list)
  • Demonstrated ownership of systems operating at scale (hundreds to thousands of RPS; TB–PB data volumes)
  • High judgment on reliability, security, and cost, with a track record of measurable improvements
  • Ability to lead without authority—mentoring, design reviews, and cross‑org influence
  • Python experience
  • Kubernetes, Docker, Cloud Functions/Cloud Run, Terraform
  • Kafka and streaming architecture experience
  • Spark, Delta Lake, DLT (Delta Live Tables), and SQL databases
  • Experience with GCP and AWS
  • Legal authorization to work in the United States (application asks about authorization and sponsorship)
  • Nice to have: Blockchain domain knowledge (protocol fundamentals; smart contracts/Solidity)
  • Nice to have: Databricks experience (Spark, Delta Lake, Delta Live Tables) or PySpark at scale
  • Nice to have: Multi‑tenant, usage tracking, and billing systems experience
  • Nice to have: On‑premises or regulated/air‑gapped deployments
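
For reference, here is a minimal PySpark sketch of the bronze/silver/gold lakehouse pattern the requirements mention. It is illustrative only: the storage paths, column names, and the delta-spark session configuration are assumptions, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes the delta-spark package is installed; paths and columns are illustrative.
spark = (
    SparkSession.builder.appName("transfers-lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: raw JSON events landed by the streaming layer.
raw = spark.read.json("s3://example-bucket/bronze/transfers/")

# Silver: deduplicated records stamped with an ingestion timestamp for freshness SLOs.
silver = (
    raw.dropDuplicates(["tx_hash"])
       .withColumn("ingested_at", F.current_timestamp())
)
silver.write.format("delta").mode("append").save("s3://example-bucket/silver/transfers/")

# Gold: a daily analytical rollup that a warehouse-style SQL layer can serve.
daily = (
    silver.groupBy(F.to_date("ingested_at").alias("day"))
          .agg(F.sum("amount_usd").alias("total_usd"),
               F.count("*").alias("n_events"))
)
daily.write.format("delta").mode("overwrite").save("s3://example-bucket/gold/transfers_daily/")
```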