Tech Stack
AWS, Azure, Cloud, ETL, Google Cloud Platform, Hadoop, Java, Kafka, MySQL, Postgres, Python, Scala, Spark, SQL
About the role
- Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
- Architect and optimize data storage solutions (e.g., data warehouses, data lakes) for performance and reliability.
- Implement data quality, data governance, and data security best practices.
- Collaborate with cross-functional teams to identify and deliver data solutions that drive business value.
- Monitor, troubleshoot, and improve existing data pipelines and infrastructure.
- Contribute to the selection and implementation of new technologies and tools.
- Document data systems, pipelines, and processes for maintainability and transparency.
- Mentor junior engineers and share best practices within the team.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
- 5+ years of experience in data engineering or related roles.
- Expertise in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, MS SQL).
- Proficiency with big data technologies (e.g., Spark, Hadoop, Kafka).
- Strong programming skills in Python, Scala, or Java.
- Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions.
- Familiarity with data modeling, data warehousing concepts, and BI tools.
- Knowledge of data governance, data privacy, and compliance regulations.
- Excellent problem-solving, communication, and collaboration skills.
Benefits
- Paid Time Off (PTO)
- Work From Home
- Professional development opportunities
- Collaborative and inclusive company culture
- Training & Development Programs
- Competitive salary and performance-based bonuses
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
ETL, ELT, SQL, PostgreSQL, MySQL, MS SQL, Spark, Hadoop, Kafka, Python
Soft skills
problem-solving, communication, collaboration, mentoring