Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and design data models and schemas that facilitate data analysis and reporting
Design, develop, and maintain scalable and efficient data pipelines and ETL processes to ingest, process, and transform large volumes of data from various sources into usable formats
Build and optimize data storage and processing systems, including data warehouses, data lakes, and big data platforms, using AWS services such as Amazon Redshift, AWS Glue, Amazon EMR, Amazon S3, and AWS Lambda, to enable efficient data retrieval and analysis
Implement and manage real-time data streaming architectures using AWS services like Amazon Kinesis or Apache Kafka to enable real-time data processing and analytics
Perform data profiling, data cleansing, and data transformation tasks to prepare data for analysis and reporting
Implement data security and privacy measures to protect sensitive and confidential data using AWS security services and features
Design and implement data architectures following Data Mesh principles within the AWS environment, including domain-oriented data ownership, self-serve data infrastructure, and federated data governance
Provide technical guidance and mentorship to junior data engineers, reviewing their work and ensuring adherence to best practices and standards
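As a minimal illustration of the data cleansing and transformation work described above, the sketch below drops incomplete records and normalizes field types. The record schema (`user_id`, `amount`, `event_date`) is hypothetical, not part of this role's actual data model:

```python
from datetime import datetime

def cleanse_records(raw_records):
    """Drop incomplete rows and normalize types -- a minimal sketch of a
    cleansing/transformation step, assuming a hypothetical record schema."""
    cleaned = []
    for rec in raw_records:
        # Skip records missing required fields (hypothetical rule)
        if not rec.get("user_id") or rec.get("amount") is None:
            continue
        cleaned.append({
            "user_id": str(rec["user_id"]).strip(),
            "amount": round(float(rec["amount"]), 2),
            "event_date": datetime.strptime(rec["event_date"], "%Y-%m-%d").date(),
        })
    return cleaned
```

In a production pipeline, logic like this would typically run inside an AWS Glue job or a Lambda consumer rather than as a standalone function.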
Requirements
Strong programming skills in languages like Python, Java, or Scala, with experience in data manipulation and transformation frameworks
Proven experience as a data engineer, including designing and building large-scale data processing systems
Strong understanding of data modeling concepts and data management principles
In-depth knowledge of SQL and experience working with relational and non-relational databases
Knowledge of Data Mesh principles and experience designing and implementing data architectures following Data Mesh concepts within the AWS ecosystem
Experience with real-time data streaming architectures using AWS services like Amazon Kinesis or Apache Kafka
Familiarity with AWS cloud services, such as AWS Glue, AWS Lambda, Amazon EMR, Amazon S3, and Amazon Redshift, and their data-related features and functionality
Familiarity with AWS security services and features for data security and privacy
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
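To illustrate the kind of relational SQL skills the requirements above call for, here is a self-contained sketch using Python's built-in sqlite3 module: a join plus aggregation over two hypothetical tables (the schema and data are invented for the example):

```python
import sqlite3

# In-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
    INSERT INTO orders VALUES (1, 10.0), (1, 15.0), (2, 7.5);
""")

# Join orders to customers and total revenue per region.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()
```

The same join-and-aggregate pattern carries over directly to warehouse engines such as Amazon Redshift.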
Benefits
Remote-First Culture – Flexibility to work from home in your country of hire
“Leave Your Way” PTO – Take the time you need, when you need it
401(k) with Generous Employer Match – Invest in your future
Comprehensive Benefits – Medical, dental, vision, & mental health
Global Tuition and Gym Reimbursement – Learn and grow on us
Standby Flight Program – Explore the world
Inclusive, Collaborative Culture – Be seen, heard, and valued
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.