
Senior Data Systems Engineer
Qorvo, Inc.
full-time
Location Type: Office
Location: Bangalore • 🇮🇳 India
Job Level
Senior
Tech Stack
Amazon Redshift, AWS, Azure, Cloud, ETL, Jenkins, PySpark, Python, SQL, Unity Catalog, Data Vault
About the role
- Establish and grow a data engineering framework to ensure the reliability, scalability, quality, and efficiency of data pipelines, storage, processing, and integration
- Establish data pipelines to ingest and curate data containing SAP business content from S/4 to Databricks
- Improve, maintain and execute the data strategy at Qorvo including governance, project prioritization, resourcing, and value delivery
- Follow the Medallion Architecture (Bronze, Silver, Gold) to logically organize data in a lakehouse, incrementally and progressively improving the structure and quality of data as it flows through the layers (see the sketch after this list)
- Work effectively in an Agile Scrum environment
- Create technical, functional, and operational documentation for data pipelines and applications
- Use business requirements to drive the design of data solutions/applications and technical architecture
- Work with other developers, designers, and architects to ensure data applications meet requirements and performance, data security, and analytics goals
- Work with the test team to efficiently and effectively structure requirements, define test scenarios, and validate changes
- Anticipate, identify, track, and resolve issues and risks affecting delivery
- Coordinate and participate in structured peer reviews, walkthroughs, and code reviews
- Provide application/technical support
- Maintain and/or update technical and/or industry knowledge and skills through continuous learning activities
- Adhere to lean principles and standard processes to ensure continuous improvement
- Communicate clearly and effectively
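To illustrate the Medallion item above, here is a minimal, hypothetical PySpark sketch of a Bronze → Silver → Gold flow on Databricks. The paths, table names, and columns (e.g. `bronze.sap_s4_orders`, `order_id`, `net_amount`) are placeholder assumptions rather than Qorvo's actual objects, and `spark` is the SparkSession Databricks provides in a notebook or job.

```python
# Minimal Medallion (Bronze -> Silver -> Gold) sketch on Databricks with PySpark + Delta.
# All paths, schemas, and column names below are illustrative placeholders.
from pyspark.sql import functions as F

# Bronze: land raw SAP S/4 extracts as-is, tagging each record with ingestion metadata.
bronze = (
    spark.read.format("json")
    .load("/Volumes/raw/sap_s4/orders/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.sap_s4_orders")

# Silver: cleanse, deduplicate, and conform types.
silver = (
    spark.read.table("bronze.sap_s4_orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate ready for analytics and reporting (e.g. Power BI).
gold = silver.groupBy("customer_id").agg(F.sum("net_amount").alias("total_net_amount"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_order_totals")
```

Each layer is persisted as its own Delta table, so data quality improves progressively while the raw Bronze copy remains available for replay.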
Requirements
- B.S. in Computer Science/Engineering or relevant field; Master's degree preferred
- 5+ years of experience in the IT industry
- 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure and functions
- Expert understanding of data warehousing concepts (dimensional/star schema, SCD2, Data Vault, denormalized models) and experience implementing highly performant data ingestion pipelines from multiple sources (an SCD2 sketch follows this list)
- Expert-level skills with Python/PySpark and SQL
- Experience with CI/CD on Databricks using tools such as Unity Catalog, Jenkins, GitHub Actions, and Databricks CLI
- Experience integrating end-to-end Databricks pipelines that move data from source systems to target data repositories, ensuring data quality and consistency are maintained throughout
- Strong understanding of Data Management principles (quality, governance, security, privacy, life cycle management, cataloging)
- Experience evaluating the performance and applicability of multiple tools against customer requirements
- Experience working within an Agile delivery/DevOps methodology to deliver proofs of concept and production implementations in iterative sprints
- Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT)
- Hands-on experience developing batch and streaming data pipelines (see the DLT sketch after this list)
- Able to work independently
- Energetic and self-motivated, with a willingness to learn and openness to change
- Ability to work in a fast-paced, changing environment, collaborate with all levels of the organization, and cope with rapidly changing information
- Experience with SAP ECC or S/4, AWS Redshift, Power BI
- Experience consuming CDS views from SAP S/4
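For the SCD2 requirement above, a common pattern on Databricks is a two-step Delta Lake MERGE: expire the current dimension row when tracked attributes change, then append the new version. The sketch below is a simplified illustration under assumed table and column names (`silver.dim_customer`, `customer_id`, `customer_name`), not a production implementation.

```python
# Simplified SCD2 sketch using Delta Lake MERGE on Databricks.
# Table names, keys, and tracked columns are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

updates = spark.read.table("bronze.sap_s4_customers")  # latest extract
dim = DeltaTable.forName(spark, "silver.dim_customer")

# Step 1: expire current rows whose tracked attributes changed.
(
    dim.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.customer_name <> s.customer_name",
        set={"is_current": "false", "valid_to": "current_timestamp()"},
    )
    .execute()
)

# Step 2: append new versions for changed keys and brand-new keys
# (anything without a surviving current row).
current = spark.read.table("silver.dim_customer").where("is_current = true")
new_rows = (
    updates.join(current, "customer_id", "left_anti")
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
    .withColumn("is_current", F.lit(True))
)
new_rows.write.format("delta").mode("append").saveAsTable("silver.dim_customer")
```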
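For the Delta Live Tables and streaming items, below is a minimal DLT sketch of a streaming Bronze-to-Silver flow using Auto Loader. It runs only inside a DLT pipeline, and the landing path, table names, and data-quality expectation are illustrative assumptions.

```python
# Minimal Delta Live Tables (DLT) sketch for a streaming bronze -> silver flow.
# Paths, table names, and expectations are illustrative placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw SAP S/4 change feed landed as-is (bronze).")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")        # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/Volumes/raw/sap_s4/orders/")         # hypothetical landing path
    )

@dlt.table(comment="Cleansed, typed orders (silver).")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def silver_orders():
    return (
        dlt.read_stream("bronze_orders")
        .withColumn("order_date", F.to_date("order_date"))
        .dropDuplicates(["order_id"])
    )
```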
Benefits
- Health insurance
- Professional development opportunities
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, ETL, Databricks, AWS, Azure, Python, PySpark, SQL, CI/CD, data warehousing
Soft skills
communication, problem-solving, independence, self-motivation, adaptability, collaboration, continuous learning, agility, organizational skills, attention to detail
Certifications
B.S. in Computer Science, Master's degree in relevant field