Tech Stack
AWS, DynamoDB, EC2, Jenkins, Python, Spark, SQL, Terraform
About the role
- AWS and Databricks administration
- Configure and manage core AWS services
- Work with data scientists to manage the Databricks environment in conjunction with the data engineering AWS environment
- Implement security best practices, network segmentation, and secrets management
- Own relationships with external vendors
- Support the development and maintenance of data pipelines (Python/Spark/SQL/API Gateway/EC2/DynamoDB/Glue), data quality checks, and documentation
- Contribute to code reviews and coding standards
- Design, build, and maintain IaC with Terraform for AWS and Databricks (modules, workspaces, remote state, CI/CD)
- Establish tagging, budgets, and chargeback/showback; right-size compute, storage, and job configurations
- Manage Savings Plans/Reserved Instances, Spot strategies, and storage lifecycle policies
- Work closely with data engineers, data scientists, and stakeholders to unblock deliverables
- Document designs, standards, and operational practices to enable a small team to move quickly and safely
Requirements
- 3+ years in data/platform/DevOps engineering with significant AWS experience
- Proficient in Python and SQL
- Proficient in AWS Budgets and Cost Management, AWS Trusted Advisor, CloudWatch, and CloudTrail
- CI/CD for infra and data (GitHub Actions/Jenkins), code reviews, testing, and change management
- Security-first mindset: IAM/RBAC, least privilege, network controls, encryption, secrets
- FinOps knowledge: RI/Savings Plans, rightsizing, storage lifecycle, cost allocation/tagging, monitoring/alerts
- Excellent communication and ability to operate in a small, high-impact team
Benefits
- Health insurance
- 401(k) matching
- Flexible working hours
- Paid time off
- Remote work options
Applicant Tracking System Keywords
Tip: include these terms in your resume and cover letter to improve ATS matches.
Hard skills
AWS, Databricks, Python, SQL, Terraform, CI/CD, API Gateway, EC2, DynamoDB, Glue
Soft skills
communication, collaboration, problem-solving, teamwork, documentation