Salary
💰 $209,800 - $246,800 per year
Tech Stack
Airflow, AWS, Cloud, Distributed Systems, DynamoDB, Elasticsearch, Go, Kafka, Kubernetes, MySQL, Postgres, Python, Redis, Spark, Terraform
About the role
- Enterprises of all sizes trust Abnormal Security’s cloud products to stop cybercrime, and those products depend on reliable, scalable, and secure access to data. The Data Platform team builds and operates the core storage, streaming, and processing systems that power Abnormal’s AI-driven detection and prevention. Our mission is to provide robust, self-service data platforms that enable engineering and data science teams to innovate quickly, confidently, and at scale.
- We’re looking for a Staff Software Engineer to drive the next generation of Abnormal’s data platform. In this role, you’ll set technical direction, lead ambitious cross-team initiatives, and mentor engineers while shaping how data flows and scales across our systems.
- Define and drive the architecture and roadmap for Abnormal’s Data Platform, spanning storage, streaming, batch processing, and data infrastructure.
- Partner with engineers and data scientists to make pragmatic trade-offs, enabling a platform-first operating model and self-service data capabilities.
- Lead high-leverage technical initiatives such as scaling data systems across tenants and regions, improving resilience, and evolving our next-gen storage layer.
- Act as the technical lead for the team: shape quarterly plans, de-risk delivery, mentor engineers, and land impactful cross-org initiatives.
- Champion operational excellence across SLOs, availability, performance, incident response, and cost efficiency.
- Advocate for platform-as-a-product practices: crisp APIs, clear SLAs/SLOs, great docs, telemetry by default, and paved paths for developers.
- Guide Abnormal’s AI-native data workflows: data pipelines, feature storage, offline/online consistency, model evaluation, and data governance.
Requirements
- Proven experience building and scaling data-intensive, distributed systems in high-growth environments.
- 5+ years as a Senior+/Staff engineer building data platforms, infrastructure, or tools that materially increase engineering velocity and reliability.
- Depth in at least two of the following: Streaming systems (e.g., Kafka, Kinesis, SQS); Batch processing systems (e.g., Spark, Databricks, Airflow, DBT); Storage systems (e.g., PostgreSQL, MySQL, DynamoDB, RocksDB, Redis, OpenSearch, S3)
- Hands-on experience with our stack (or an equivalent one): Python, Golang, AWS, Databricks, Spark, Airflow, Kafka, Redis, RocksDB, PostgreSQL, Elasticsearch, Terraform, Kubernetes, etc.
- Strong fundamentals in distributed systems, observability, and reliability engineering (SLOs, incident management, capacity planning).
- A strong track record as a change agent, reshaping data platform strategy and delivering impactful, self-service offerings.
- A strong ability and desire to onboard and mentor other engineers.