
Senior Software Engineer
S&P Global
Full-time
Location Type: Office
Location: Ahmedabad • India
About the role
- Design, develop, and maintain scalable data pipelines and data platforms across cloud environments using AWS and Google Cloud managed services.
- Build and optimize ETL/ELT workflows using AWS Glue, AWS Step Functions, Amazon MWAA (Managed Workflows for Apache Airflow), and Google Cloud Data Fusion.
- Work with large-scale structured and semi-structured data using formats such as Google Protocol Buffers and Apache Parquet (via cloud-managed engines/services that support Parquet storage/processing).
- Develop and manage cloud analytics and warehouse/lakehouse solutions leveraging Databricks on AWS/GCP, Google BigQuery, and Microsoft Fabric.
- Implement efficient data storage and retrieval solutions using managed databases such as Amazon DynamoDB, Amazon Aurora, Amazon RDS for SQL Server, Google Cloud SQL, and Azure SQL (as applicable).
- Collaborate with product, data, and business stakeholders to enable analytics, reporting, and data-driven decision-making.
- Create and maintain dashboards and visualizations using Microsoft Power BI, Tableau, and Amazon QuickSight.
- Ensure data quality, governance, lineage, and observability using platform-native and enterprise tooling such as Databricks Unity Catalog, AWS CloudWatch, Google Cloud Operations Suite, and Microsoft Purview.
- Write high-quality, maintainable code using Python and SQL/T-SQL, applying software engineering best practices (testing, code reviews, design patterns, performance tuning).
Requirements
- 6-10 years of strong experience with Databricks (e.g., workspaces, SQL Warehouses, governance capabilities such as Unity Catalog, and production-grade job orchestration).
- Hands-on experience with AWS services such as Amazon S3, Amazon EC2, Amazon EMR, AWS Lambda, Amazon Athena, AWS Glue, and AWS Step Functions.
- Experience with Google Cloud services including BigQuery, Cloud Storage, Compute Engine, Pub/Sub, Dataflow, and Cloud Data Fusion.
- Proficiency in Python and SQL/T-SQL for data processing, transformation, and automation.
- Strong knowledge of cloud-managed relational and NoSQL databases such as Amazon DynamoDB, Amazon Aurora, Amazon RDS for SQL Server, and Google Cloud SQL.
- Familiarity with modern BI and analytics tools including Power BI, Tableau, and/or Amazon QuickSight.
- Experience with modern data interchange/serialization formats such as Google Protocol Buffers and columnar storage formats such as Parquet commonly used in cloud analytics platforms.
- Strong experience with GitHub for version control and engineering workflows.
- Experience with Terraform for infrastructure provisioning and environment management.
- Experience designing secure, scalable systems with strong operational rigor (SLAs/SLOs, monitoring, incident response, cost optimization).
Benefits
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
ETL, ELT, data pipelines, data platforms, data processing, data transformation, data storage, data retrieval, cloud analytics, data governance
Soft Skills
collaboration, communication, decision-making, problem-solving, organizational skills