Tech Stack
Amazon Redshift, AWS, Azure, Cloud, ETL, Google Cloud Platform, GraphQL, gRPC, Hadoop, Java, Jenkins, Python, SQL, Tableau
About the role
- Design, build, and maintain scalable and high-quality data solutions
- Optimize data pipelines and ensure data quality
- Enable advanced analytics and AI/ML initiatives through reliable data infrastructure
- Shape data strategy and contribute to architectural decisions
- Mentor peers and support the growth of other engineers
- Collaborate with cross-functional teams to deliver data-driven solutions
Requirements
- Bachelor’s degree in Computer Science, Data Engineering, or a related field
- 5+ years of experience working with data engineering technologies in production environments
- Advanced English proficiency (written and spoken)
- Strong communication and collaboration abilities
- Excellent time management and organizational skills
- Ability to mentor and support other engineers
- Strong understanding of data fundamentals (ETL, transformations, data cleaning, EDA, compliance)
- Proficiency with SQL and experience across database systems
- Hands-on experience with Big Data technologies such as Hadoop, Snowflake, Apache Storm, or Amazon Redshift
- Knowledge of data storage solutions (Data Lakes, Data Warehouses, Data Lakehouses, Blob Storage)
- Proficiency in Python, R, Java, or C# for data engineering tasks
- Familiarity with APIs (REST, gRPC, GraphQL) for data integration
- Experience with CI/CD tools (GitHub Actions, Jenkins) and version control systems (Git, GitHub, GitLab)
- Skilled in data testing (querying, validation, integrity checks)
- Experience with data visualization tools such as Power BI or Tableau
- Strong background in cloud platforms (AWS, Azure, GCP) and their data services
- Understanding of security and compliance practices (e.g., OWASP, OAuth)
- Familiarity with Agile methodologies (Scrum, Kanban)
- Openness to continuous learning and to adopting emerging technologies