Senior Staff IT Architect

Thermo Fisher Scientific

Full-time

Location Type: Office

Location: Carlsbad, California • North Carolina • Pennsylvania • 🇺🇸 United States


Salary

💰 $143,000 - $185,000 per year

Job Level

Senior

Tech Stack

Amazon Redshift • AWS • Azure • Cloud • ETL • Kafka • PySpark • Python • Spark • SQL • Terraform

About the role

  • Design, build, and implement secure, scalable, and high-performance data solutions using AWS, Microsoft Azure, Databricks, and Microsoft Fabric (Power BI)
  • Architect end-to-end multi-cloud data platforms, ensuring seamless integration across enterprise data sources, storage, processing, AI services, and consumption layers
  • Establish and enforce data governance, security, and compliance protocols within AWS and Azure cloud environments, ensuring appropriate access controls and data protection strategies
  • Lead the evaluation and implementation of cloud-based data architectures, including data lakes, data warehouses, real-time streaming, and advanced analytics
  • Collaborate with cross-functional teams, including Data Engineers, Data Scientists, AI/ML Engineers, Security Architects, and Business Analysts, to drive the adoption of modern data and AI architectures
  • Maintain guidelines for data modeling, cataloging, lineage tracking, and metadata management across Databricks, Redshift, Synapse, Fabric Lakehouse, and Power BI ecosystems
  • Drive innovation in data integration, AI-powered data mesh strategies, agentic AI workflows, and self-service analytics capabilities for enterprise users

Requirements

  • 8+ years of experience in data architecture, data engineering, or cloud-based data/AI solutions
  • BS/MS or equivalent experience in Computer Science, Information Systems, or a related field
  • Proven proficiency in multi-cloud solutions (AWS and Azure), including S3, Redshift, Glue, Lambda, Kinesis, Azure Data Lake, Synapse Analytics, and Fabric
  • 3+ years working with Databricks and/or Fabric Lakehouse, building scalable data pipelines, improving Delta Lake storage efficiency, and incorporating security measures (see the batch-ETL sketch after this list)
  • Demonstrated proficiency in PySpark, Python, SQL, and Power BI for data processing, visualization, and ETL/ELT transformation
  • Exposure to RAG architectures, vector databases, and agentic AI frameworks for enterprise-scale knowledge and retrieval systems
  • Expertise in cloud security guidelines, IAM roles, encryption strategies, and compliance frameworks (GDPR, HIPAA)
  • Proficiency in CI/CD practices for data solutions using GitHub, GitHub Actions, Terraform, and Azure DevOps or equivalent experience
  • Experience implementing real-time and batch data processing solutions using Kafka, Kinesis, Spark Streaming, or Event Hubs (a streaming sketch follows this list)
  • Strong problem-solving skills with debugging, performance tuning, and AI/ML model deployment in cloud environments
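
To make the Delta Lake requirement above concrete, here is a minimal PySpark batch-ETL sketch of the kind this role involves. It assumes a Databricks-style runtime where Delta support is preconfigured; the source path, table name, and column names are all hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks, `spark` already exists with Delta enabled; the builder
# below is only needed when running outside that runtime.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical raw landing zone (S3 here; could be ADLS in a multi-cloud setup).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Light transformation: normalize types and drop obvious bad records.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
)

# Write as a partitioned Delta table; compaction (OPTIMIZE) and Z-ordering
# would be the usual next steps for storage efficiency on Databricks.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```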
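
Likewise, for the streaming requirement, a minimal PySpark Structured Streaming sketch reading from Kafka into a Delta sink. The broker address, topic, schema, and checkpoint paths are hypothetical, and the cluster needs the spark-sql-kafka connector package on its classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical schema for the JSON payload carried in the Kafka value field.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read from Kafka; requires the spark-sql-kafka-0-10 connector package.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "orders")                      # hypothetical topic
         .load()
)

# Kafka delivers bytes; decode the value column and parse the JSON payload.
parsed = (
    stream.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("rec"))
          .select("rec.*")
)

# Append micro-batches to a Delta sink; the checkpoint gives exactly-once output.
query = (
    parsed.writeStream.format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/orders")  # hypothetical path
          .outputMode("append")
          .start("/tmp/delta/orders_stream")                        # hypothetical path
)
query.awaitTermination()
```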

Benefits

  • A choice of national medical and dental plans, and a national vision plan, including health incentive programs
  • Employee assistance and family support programs, including commuter benefits and tuition reimbursement
  • At least 120 hours paid time off (PTO), 10 paid holidays annually, paid parental leave (3 weeks for bonding and 8 weeks for caregiver leave), accident and life insurance, and short- and long-term disability in accordance with company policy
  • Retirement and savings programs, such as our competitive 401(k) U.S. retirement savings plan
  • Employees’ Stock Purchase Plan (ESPP) offers eligible colleagues the opportunity to purchase company stock at a discount

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data architecture • data engineering • cloud-based data solutions • multi-cloud solutions • PySpark • Python • SQL • ETL • real-time data processing • batch data processing
Soft skills
problem-solving • collaboration • innovation
Certifications
BS in Computer Science • MS in Information Systems