Tech Stack
Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, RDBMS, SQL
About the role
- Design, build, and maintain data pipelines and robust data architectures, including data warehouses and large-scale processing systems
- Implement and manage ETL or ELT processes for data ingestion from various data sources
- Ensure data quality, security, and compliance by implementing best practices in data governance
- Continuously monitor and optimize data pipelines and infrastructure for performance and efficiency
- Use BI services to deliver strategic reports and support self-service reporting models
- Follow internal standards for coding, project tracking, and documentation
- Partner with data analysts to understand their data needs and provide clean, accessible data
- Work closely with product owners, data architects, and software engineers to ensure reliable and efficient data infrastructure
Requirements
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- Minimum of 5 years of relevant experience in Data Engineering, Data Warehousing, or Database Management
- Solid understanding of data warehousing concepts and experience with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake)
- Extensive experience connecting to a variety of data sources and structures, such as RDBMSs, data lakes, and SQL databases, on Azure or other cloud infrastructure
- Hands-on experience with at least one major cloud platform (e.g., AWS, Azure, GCP)
- Proficiency with SQL
- Experience with Azure, Microsoft Fabric, and Power BI
- Knowledge of ETL/ELT processes and data pipeline development
- Experience with data governance, data quality, and security best practices
- Ability to monitor and optimize data pipelines and infrastructure for performance and efficiency
- Experience partnering with data analysts, product owners, data architects, and software engineers
- Preferred: Working experience with Azure services
- Preferred: Knowledge of Agile and/or Scrum delivery methods