
Mid-level Data Engineer – Python, PySpark
Avanade
Full-time
Location Type: Hybrid
Location: São Paulo, Brazil
About the role
- Develop and maintain data solutions using Python and PySpark.
- Build, evolve, and maintain ETL processes, ensuring data quality and consistency.
- Perform transactional and multidimensional data modeling.
- Create, optimize, and maintain SQL queries.
- Contribute to the definition and implementation of software architecture in data solutions.
- Use GitHub for code versioning and team collaboration.
- Plan, develop, and execute tests, ensuring delivery quality.
- Work collaboratively with technical teams and business stakeholders.
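To illustrate the kind of ETL work the role describes, here is a minimal sketch of the extract-transform-load shape in plain Python using only the standard library (a PySpark job would follow the same pattern at larger scale; the table and column names are hypothetical, not taken from the posting):

```python
import sqlite3

# Hypothetical ETL sketch: read raw order rows, apply a basic
# data-quality filter, aggregate per customer, and load the result
# into a summary table.

def run_etl(conn: sqlite3.Connection) -> None:
    cur = conn.cursor()

    # Extract: raw orders, possibly containing nulls and bad values.
    rows = cur.execute("SELECT customer, amount FROM raw_orders").fetchall()

    # Transform: drop invalid rows (data-quality step), sum per customer.
    totals: dict[str, float] = {}
    for customer, amount in rows:
        if customer is None or amount is None or amount < 0:
            continue  # skip rows that fail basic quality checks
        totals[customer] = totals.get(customer, 0.0) + amount

    # Load: write the cleaned, aggregated result to a summary table.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS customer_totals "
        "(customer TEXT PRIMARY KEY, total REAL)"
    )
    cur.executemany(
        "INSERT OR REPLACE INTO customer_totals VALUES (?, ?)",
        sorted(totals.items()),
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?)",
        [("alice", 10.0), ("alice", 5.0), ("bob", -3.0), (None, 7.0), ("bob", 2.5)],
    )
    conn.commit()
    run_etl(conn)
    print(conn.execute("SELECT * FROM customer_totals ORDER BY customer").fetchall())
    # -> [('alice', 15.0), ('bob', 2.5)]
```

In an interview or portfolio project, the same three stages would typically map onto `spark.read`, DataFrame transformations, and `DataFrame.write` in PySpark.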
Requirements
- Experience with Python and PySpark.
- Strong knowledge of ETL processes.
- Experience in data modeling (transactional and multidimensional).
- Proficient in SQL for creating and optimizing queries.
- Experience with code versioning using GitHub.
- Knowledge of data solutions architecture.
- Prior experience with cloud data environments (advantageous).
- Experience in data projects focused on scalability and quality (advantageous).
Benefits
- Meal or food allowance;
- Multibenefits card (up to Senior Consultant level);
- Health and dental insurance;
- Certifications and training;
- Life insurance;
- Private pension plan;
- Avababy: pregnancy support and a starter kit for new Avanade parents;
- Profit-sharing (company results participation);
- Wellhub;
- Childcare assistance;
- Career counselor (dedicated career mentoring);
- Birthday off;
- Well-being sessions;
- For managerial positions and above: company vehicle, parking, and fuel allowance.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, PySpark, ETL processes, data modeling, SQL, data solutions architecture, cloud data environments, scalability, data quality
Soft Skills
collaboration, communication