
Staff Data Engineer
NPS Prism
Employment Type: Full-time
Location Type: Remote
Location: India
About the role
- Design and own scalable data architectures for ingestion, transformation, and analytics on Databricks
- Build robust ETL/ELT pipelines using PySpark, SQL, and Databricks Workflows
- Lead performance tuning, partitioning, and data optimization across large distributed systems
- Mentor junior data engineers and enforce best practices for code quality, testing, and version control
- Develop and maintain data lakes and data warehouses on cloud platforms
- Utilize Azure Data Factory, AWS Glue, or similar orchestration tools to manage large-scale data workflows
- Ensure compliance with Bain’s data security and privacy standards
- Implement CI/CD pipelines for data code deployments using Git, Azure DevOps, or Jenkins
Requirements
- 6–8 years of data engineering experience
- Advanced proficiency in Databricks
- Strong command of Python, SQL, and PySpark for big data processing
- Experience with Delta Lake, Spark optimization, and cluster management
- Hands-on with ETL/ELT design, data lake and warehouse architecture
- Cloud expertise in Azure, AWS, or GCP (Azure preferred)
- Proven ability to design end-to-end data solutions and influence engineering best practices
- Strong mentorship and stakeholder management skills
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field
Benefits
- Work From Home
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data architecture, ETL, ELT, PySpark, SQL, data optimization, Delta Lake, data lake architecture, data warehouse architecture, CI/CD
Soft skills
mentorship, stakeholder management, influencing engineering best practices