Chainalysis

Senior Software Engineer, Data Solutions

Full-time

Location: 🇺🇸 United States • New York


Salary

💰 $160,000 - $195,000 per year

Job Level

Senior

Tech Stack

AWS • Cloud • Distributed Systems • Docker • Google Cloud Platform • Kafka • Kubernetes • Microservices • PySpark • Python • Solidity • Spark • SQL • Terraform

About the role

  • Design and lead delivery of new platform capabilities that serve mission‑critical investigations and monitoring workflows.
  • Operate services that ingest, transform, and serve hundreds of terabytes of data with clear SLOs for latency, freshness, and availability.
  • Improve the scalability, performance, and cost efficiency of our data plane and APIs.
  • Raise the quality bar across reliability, security, and compliance for both cloud and on‑premises deployments.
  • Mentor engineers across teams and influence technical strategy beyond your immediate group.
  • Own and evolve backend services powering customer‑facing APIs, usage/billing, alerting, and data observability.
  • Lead team and cross-team initiatives end‑to‑end: discovery, architecture, implementation, rollout, and post‑launch learning.
  • Architect event‑driven and streaming workflows (e.g., Kafka) with strong data contracts and schema evolution.
  • Drive operational excellence: SLOs, runbooks, on‑call, incident reviews, and capacity plans for high‑QPS systems.
  • Partner with product, data engineering/science, and security to translate customer requirements into durable systems.

Requirements

  • Expert backend engineering experience building cloud‑hosted services and data pipelines on AWS or GCP (bonus: both).
  • Deep proficiency with APIs, streaming systems, and distributed systems (e.g., microservices on Kubernetes).
  • Strong SQL skills and experience with analytical data models (e.g., lakehouse/warehouse patterns).
  • Demonstrated ownership of systems operating at scale (hundreds to thousands of RPS; TB–PB data volumes).
  • High judgment on reliability, security, and cost, with a track record of measurable improvements.
  • Ability to lead without authority—mentoring, design reviews, and cross‑org influence.
  • Proficiency in Python.
  • Experience with Kubernetes, Docker, Cloud Functions/Cloud Run, Terraform.
  • Experience with Kafka and streaming workflows.
  • Experience with Spark, Delta Lake, Delta Live Tables (DLT), and SQL databases.
  • Experience with GCP and AWS.
  • (Nice to have) Blockchain domain knowledge (protocol fundamentals; smart contracts/Solidity).
  • (Nice to have) Databricks experience (Spark, Delta Lake, Delta Live Tables) or PySpark at scale.
  • (Nice to have) Multi‑tenant, usage tracking, and billing systems experience.
  • (Nice to have) On‑premises or regulated/air‑gapped deployments experience.
  • Application question: Are you legally authorized to work for any employer in the United States without restrictions?
  • Application question: Will you now or in the future require sponsorship to maintain that work authorization (e.g., H-1B status)?
  • Application question: Can you confirm that you have all necessary permits/authorization to work in the EU?