
Senior Engineer – DevOps, DataOps
FICO
Full-time
Location Type: Remote
Location: United States
Salary: 💰 $119,000 - $187,000 per year
About the role
- Design, build, and maintain scalable, resilient data and ML pipelines, infrastructure, and workflows using tools such as GitHub Actions, ArgoCD, Crossplane, Terraform, and Helm.
- Automate infrastructure provisioning and configuration management using cloud-native services (preferably AWS) with tools like Terraform, CloudFormation, or Crossplane (see the IaC sketch after this list).
- Design and manage Kubernetes (EKS) clusters and/or ECS environments in AWS, and containerize workloads to run on them.
- Collaborate with development teams to optimize performance, deployment, and cost.
- Partner with DevOps and SRE teams to ensure high availability, observability, scalability, and security of the data and ML infrastructure.
- Work closely with Data Scientists and ML Engineers to operationalize machine learning models, including building CI/CD pipelines for model training, validation, and deployment.
- Implement observability for data pipelines and ML services using tools like Prometheus, Grafana, Datadog, or similar.
- Develop and maintain automated pipelines for model retraining, drift monitoring, and versioning in production (a minimal drift-check sketch follows this list).
- Support experimentation and prototyping in areas such as Machine Learning and Generative AI, transitioning successful prototypes into production systems.
- Ensure cloud infrastructure is secure, compliant, and cost-efficient, following best practices in governance, identity, and access management.
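
To make the IaC bullet above concrete: the posting names Terraform, CloudFormation, and Crossplane; the sketch below uses AWS CDK in Python instead (CDK synthesizes CloudFormation), chosen only because Python is the posting's scripting language. Stack and bucket names are hypothetical.

```python
# Minimal AWS CDK (v2) sketch: declarative provisioning of an S3 bucket
# for ML artifacts. Names ("MlArtifactStack", "ArtifactBucket") are
# hypothetical; a real stack would also cover EKS/ECS, IAM, and networking.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class MlArtifactStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Versioned, encrypted, private bucket -- the kind of secure,
        # governed default the last bullet above calls for.
        s3.Bucket(
            self, "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


app = cdk.App()
MlArtifactStack(app, "MlArtifactStack")
app.synth()  # emits a CloudFormation template under cdk.out/
```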
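And for the retraining/drift bullet, a minimal Python sketch of the kind of drift gate such a pipeline might run. The z-score heuristic, threshold, and file names are illustrative assumptions, not FICO's actual method.

```python
# Minimal drift-check sketch for a retraining pipeline: compare the mean
# of a recent production feature window against the training baseline and
# exit nonzero when drift exceeds a threshold, so a scheduler step (e.g.
# GitHub Actions or Argo Workflows) can trigger retraining.
import json
import statistics
import sys

DRIFT_Z_THRESHOLD = 3.0  # hypothetical tolerance


def drifted(baseline_mean, baseline_stdev, window):
    """Flag drift when the window mean is far from the training mean."""
    if baseline_stdev == 0:
        return statistics.mean(window) != baseline_mean
    z = abs(statistics.mean(window) - baseline_mean) / baseline_stdev
    return z > DRIFT_Z_THRESHOLD


if __name__ == "__main__":
    with open("baseline_stats.json") as f:   # hypothetical training artifact
        baseline = json.load(f)
    with open("recent_window.json") as f:    # hypothetical batch export
        window = json.load(f)["feature_values"]
    if drifted(baseline["mean"], baseline["stdev"], window):
        sys.exit(1)  # nonzero exit -> pipeline kicks off retraining
    sys.exit(0)
```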
Requirements
- 7+ years of experience in DataOps, MLOps, or related fields, with at least 2 years focused on ML model operationalization and workflow automation.
- Proficient in AWS services including EC2, S3, IAM, ACM, Route 53, CloudWatch, EKS, and ECS.
- Experience with infrastructure as code (IaC) tools such as Terraform, CloudFormation, and Helm.
- Familiarity with CI/CD for ML pipelines, GitOps practices, and tools like GitHub Actions, Jenkins, or Argo Workflows.
- Strong scripting and automation skills using Python or GitHub Actions workflows.
- Understanding of observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or OpenTelemetry); see the instrumentation sketch after this list.
- Solid understanding of security best practices for cloud and Kubernetes environments, including secrets management, identity & access control, and policy enforcement.
- Familiarity with data governance, lineage, and metadata management is a plus.
- Excellent collaboration and communication skills, with a proven ability to work effectively in cross-functional, globally distributed teams.
- A bachelor’s degree in Computer Science, Engineering, or a related discipline, or equivalent hands-on industry experience.
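
As a concrete illustration of the observability requirement, here is a minimal sketch using the Python prometheus_client library to instrument a pipeline step; the metric names and the simulated workload are hypothetical.

```python
# Minimal Prometheus instrumentation sketch for a pipeline step, using
# the Python prometheus_client library. Metric names and the stand-in
# workload are hypothetical.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

ROWS_PROCESSED = Counter(
    "pipeline_rows_processed_total", "Rows processed by the step")
STEP_LATENCY = Histogram(
    "pipeline_step_seconds", "Wall-clock time per pipeline step")


@STEP_LATENCY.time()  # records each call's duration in the histogram
def run_step(batch_size):
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    ROWS_PROCESSED.inc(batch_size)


if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        run_step(batch_size=500)
```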
Benefits
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
DataOps, MLOps, ML model operationalization, workflow automation, infrastructure as code, scripting, automation, observability, monitoring, data governance
Soft Skills
collaboration, communication, cross-functional teamwork
Certifications
Bachelor's degree in Computer Science, Bachelor's degree in Engineering