Tech Stack
AWS, Azure, Cloud, Cyber Security, Google Cloud Platform, Grafana, Python, Splunk, Terraform
About the role
- Design, build, and operate high-scale data ingestion and normalization pipelines (see the sketch after this list)
- Work on company-critical projects for ingesting, normalizing, and exposing user data across data stores
- Use and deploy cloud and data-lake technologies such as AWS, Azure, GCP, Snowflake, Databricks, and Splunk
- Work with product management to map non-functional requirements and implement SLOs
- Deploy and monitor cloud and data lake resources
- Mentor colleagues and spread knowledge through documentation and brown-bag sessions
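To give a flavor of the ingestion and normalization work described above, here is a minimal, illustrative Python sketch. The NormalizedEvent schema, field names, and in-memory batch are hypothetical; the actual pipeline targets Snowflake and Databricks across AWS, Azure, and GCP, as noted in the requirements.

```python
# Illustrative sketch of a normalization step for raw event data.
# Schema and field names are hypothetical, not the company's actual model.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Iterable, Iterator
import json


@dataclass
class NormalizedEvent:
    event_id: str
    user_id: str
    source: str          # e.g. "endpoint", "network"
    event_type: str
    occurred_at: str     # ISO-8601 UTC timestamp


def normalize(raw_events: Iterable[dict], source: str) -> Iterator[NormalizedEvent]:
    """Map vendor-specific payloads onto a common schema, skipping malformed rows."""
    for raw in raw_events:
        try:
            ts = datetime.fromtimestamp(raw["timestamp"], tz=timezone.utc)
            yield NormalizedEvent(
                event_id=str(raw["id"]),
                user_id=str(raw["user"]),
                source=source,
                event_type=raw.get("type", "unknown"),
                occurred_at=ts.isoformat(),
            )
        except (KeyError, TypeError, ValueError):
            continue  # in practice: route to a dead-letter queue and count the drop


if __name__ == "__main__":
    batch = [{"id": 1, "user": "u-42", "type": "login", "timestamp": 1_700_000_000}]
    for event in normalize(batch, source="endpoint"):
        print(json.dumps(asdict(event)))
```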
Requirements
- 4+ years of software development experience
- Excellent written and verbal communication skills
- Experience with data lakes such as Snowflake or Databricks
- Experience with cloud providers such as AWS, GCP, or Azure
- Experience with defining non-functional requirements, measuring SLOs, and balancing technical-foundation work against product timelines
- Ability to quickly come up to speed on our data pipeline tech stack, which uses Python deployed to Snowflake and Databricks across AWS, Azure, and GCP
- Experience with ingesting large amounts of user data into Snowflake or Databricks
- Experience deploying services using infrastructure-as-code (Terraform, AWS SAM, CloudFormation, or CDK); see the sketch after this list
- Experience with observability technologies like Grafana and Sentry
- Some experience with LLMs, including implementing standard patterns (agents, RAG, tool use) with popular frameworks
- Familiarity with security data (e.g., endpoint and network logs)
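As a point of reference for the infrastructure-as-code requirement above, below is a minimal AWS CDK (Python) sketch of provisioning a landing bucket for raw telemetry. The stack and bucket names are illustrative assumptions, not part of this posting; it requires the aws-cdk-lib and constructs packages.

```python
# Illustrative infrastructure-as-code sketch using AWS CDK v2 in Python.
# Stack and resource names are hypothetical examples only.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct


class IngestionBucketStack(Stack):
    """Landing bucket for raw security telemetry before normalization."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "RawEventsBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
IngestionBucketStack(app, "IngestionBucketStack")
app.synth()
```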