
Senior Data Engineer
EXL
full-time
Location Type: Remote
Location: Remote • 🇨🇦 Canada
Job Level
Senior
Tech Stack
Amazon Redshift, AWS, PySpark, Python, SQL
About the role
- Design and build robust, scalable data transformation pipelines using SQL, DBT, and Jinja templating (a minimal model sketch follows this list)
- Develop and maintain data architecture and standards for Data Integration and Data Warehousing projects using DBT and Amazon Redshift
- Monitor and ensure the smooth operation of data pipelines between the OneTrust (OT) application and external data platforms (ESPs, CRMs, etc.)
- Collaborate with cross-functional teams to gather requirements and deliver dimensional data models that serve as a single source of truth
- Assist with ongoing data integration tasks (e.g., connectors, APIs) between OT and various business systems
- Own the full stack of data modeling in DBT to empower analysts, data scientists, and BI engineers
- Enhance and maintain the analytics codebase, including DBT models, SQL scripts, and ERD documentation.
- Familiarize yourself with regulatory compliance guidelines across different geographical areas (CCPA, GDPR) and integrate them into current and new consent collection pipelines
- Ensure data quality, governance alignment, and operational readiness of data pipelines
- Apply software engineering best practices such as version control, CI/CD, and code reviews
- Optimize SQL queries for performance, scalability, and maintainability across large datasets
- Implement best practices for SQL performance tuning, including partitioning, clustering, and materialized views (see the Redshift sketch after this list)
- Build and manage infrastructure as code using AWS CDK for scalable and repeatable deployments
- Integrate and automate deployment workflows using AWS CodeCommit, CodePipeline, and related DevOps tools
- Support Agile development processes and collaborate with offshore teams
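
The DBT-with-Jinja responsibilities above can be pictured with a short model sketch. This is a minimal illustration, not EXL's actual codebase: the model name, staging ref, columns, and consent channels are all hypothetical, and it assumes dbt-redshift, which supports the dist and sort configs shown.

```sql
-- models/marts/fct_customer_consent.sql (hypothetical model)
-- Pivots raw consent events into one row per customer with one flag per channel.
{{
  config(
    materialized='table',
    dist='customer_id',   -- Redshift distribution key (dbt-redshift config)
    sort='customer_id'    -- Redshift sort key
  )
}}

{%- set channels = ['email', 'sms', 'phone'] -%}  {# hypothetical channel list #}

select
    customer_id,
    {%- for channel in channels %}
    max(case when consent_channel = '{{ channel }}' then consent_flag end)
        as {{ channel }}_opt_in{{ "," if not loop.last }}
    {%- endfor %}
from {{ ref('stg_onetrust__consent_events') }}  -- hypothetical staging model
group by customer_id
```

At compile time the loop expands into one max(case ...) column per channel, which is exactly the repetition Jinja templating is meant to eliminate.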
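For the performance-tuning bullet, Redshift expresses partitioning and clustering mainly through distribution styles and sort keys, and precomputed aggregates through materialized views. A rough sketch, with hypothetical schema, table, and column names:

```sql
-- The distribution key colocates rows that join on customer_id; the sort key
-- lets Redshift skip blocks when queries filter on event_timestamp.
create table analytics.fct_consent_events (
    consent_event_id bigint,
    customer_id      bigint,
    consent_channel  varchar(32),
    consent_flag     boolean,
    event_timestamp  timestamp
)
diststyle key
distkey (customer_id)
sortkey (event_timestamp);

-- A materialized view precomputes a common daily rollup; refresh it after loads.
create materialized view analytics.mv_daily_consent as
select
    trunc(event_timestamp) as event_date,
    consent_channel,
    count(*)               as consent_events
from analytics.fct_consent_events
group by 1, 2;

refresh materialized view analytics.mv_daily_consent;
```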
Requirements
- Bachelor’s or Master’s (preferred) degree in a quantitative or technical field such as Statistics, Mathematics, Computer Science, Information Technology, Computer Engineering or equivalent
- 5+ years of experience in data engineering and analytics on modern data platforms
- 3+ years of extensive experience with DBT or similar data transformation tools, including building complex, maintainable DBT models and developing DBT packages/macros (a macro sketch follows this list)
- Deep familiarity with dimensional modeling/data warehousing concepts and expertise in designing, implementing, operating, and extending enterprise dimensional models
- Understanding of change data capture (CDC) concepts
- Experience working with AWS Services (Lambda, Step Functions, MWAA, Glue, Redshift)
- Hands-on experience with AWS CDK, CodeCommit, and CodePipeline for infrastructure automation and CI/CD
- Proficiency in Python, or general knowledge of Jinja templating in Python and/or PySpark
- Agile experience and willingness to work with extended offshore teams and assist with design and code reviews with the customer
- Familiarity with OneTrust or similar consent management platforms is a plus.
- A great teammate and self-starter; strong detail orientation is critical in this role
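
As a small illustration of the DBT packages/macros requirement, a reusable Jinja macro can gate row volume by dbt target so development builds stay cheap. The macro name and default limit are hypothetical, not part of the posting:

```sql
-- macros/limit_in_dev.sql (hypothetical macro)
-- Appends a LIMIT clause only when dbt runs against the 'dev' target.
{% macro limit_in_dev(row_limit=1000) %}
    {% if target.name == 'dev' %}
        limit {{ row_limit }}
    {% endif %}
{% endmacro %}
```

A model would then end with `{{ limit_in_dev() }}` after its final from/where clause, compiling to nothing outside dev.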
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
SQL, DBT, Jinja templating, data transformation, data architecture, data warehousing, data modeling, data quality, performance tuning, Python
Soft skills
collaboration, detail orientation, self-starter, communication, teamwork, Agile experience, problem-solving, leadership, organizational skills, adaptability
Certifications
Bachelor's degree, Master's degree