Work on automating and optimizing data flows and processes
Build technical solutions to ingest, process, and store data from multiple sources and in various formats (structured and unstructured data, files, XML, JSON, Parquet, APIs); see the ingestion sketch after this list
Perform tasks related to the analysis, development, and maintenance of data processes and data structures
Provide data architecture perspectives, data mapping, and data modeling
Maintain an enterprise-level perspective aligned with market best practices, solution blueprints, and solution designs
Nice-to-have: experience with Artificial Intelligence solutions and Google Cloud Platform (GCP)
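
To make the ingestion responsibility above concrete, here is a minimal sketch, assuming PySpark on Databricks with Delta Lake (both named in the requirements below); the storage paths, table names, row tag, and the spark-xml dependency are illustrative assumptions, not part of the role description.

    # A minimal sketch, assuming PySpark on Databricks with Delta Lake.
    # Storage paths and table names are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("multi_format_ingest").getOrCreate()

    RAW = "abfss://raw@examplestorage.dfs.core.windows.net"  # hypothetical ADLS Gen2 container

    # Structured data: Parquet files landed by an upstream process
    sales = spark.read.parquet(f"{RAW}/sales/")

    # Semi-structured data: newline-delimited JSON, e.g. an API export
    events = spark.read.json(f"{RAW}/events/")

    # XML requires the spark-xml library (com.databricks:spark-xml); "record" is an assumed row tag
    legacy = (spark.read.format("xml")
              .option("rowTag", "record")
              .load(f"{RAW}/legacy/"))

    # Store everything as Delta tables for downstream processing
    sales.write.format("delta").mode("append").saveAsTable("bronze.sales")
    events.write.format("delta").mode("append").saveAsTable("bronze.events")
    legacy.write.format("delta").mode("append").saveAsTable("bronze.legacy")
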
Requirements
Experience in data projects
Experience with the Databricks platform
Experience with Azure Data Factory
Programming experience — Python, SQL, PySpark
Experience automating information processes (ETL/ELT)
Experience with version control using Git
Experience with relational databases and NoSQL databases
Experience with Azure Cloud services: Data Factory, Synapse, ADLS Gen2, Delta Lake
Experience in data engineering and integration (e.g., ETL, APIs, microservices)
Experience in Linux environments, basic commands, and shell scripting
Knowledge of streaming processes with Event Hub (see the streaming sketch after this list)
Knowledge of data representation and serialization formats such as JSON, XML, and YAML
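
As a rough illustration of the Event Hub streaming requirement, the sketch below consumes an Event Hub through its Kafka-compatible endpoint with Spark Structured Streaming and lands the stream in Delta; the namespace, hub name, connection string, and checkpoint path are hypothetical placeholders. The dedicated azure-event-hubs-spark connector is an equally valid alternative to the Kafka endpoint.

    # A minimal sketch: read an Event Hub via its Kafka-compatible endpoint
    # with Spark Structured Streaming and write the stream to a Delta table.
    # Namespace, hub name, secret, and checkpoint path are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("eventhub_stream").getOrCreate()

    BOOTSTRAP = "example-ns.servicebus.windows.net:9093"               # hypothetical namespace
    CONN_STR = "Endpoint=sb://example-ns.servicebus.windows.net/;..."  # elided secret

    # Event Hubs accepts Kafka clients using SASL/PLAIN with the literal
    # username "$ConnectionString"; on Databricks the login module class is
    # shaded as kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule.
    jaas = ('org.apache.kafka.common.security.plain.PlainLoginModule required '
            f'username="$ConnectionString" password="{CONN_STR}";')

    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", BOOTSTRAP)
              .option("subscribe", "telemetry")                        # hypothetical hub name
              .option("kafka.security.protocol", "SASL_SSL")
              .option("kafka.sasl.mechanism", "PLAIN")
              .option("kafka.sasl.jaas.config", jaas)
              .load())

    # Payloads arrive as bytes; cast to string before any JSON parsing
    query = (stream.selectExpr("CAST(value AS STRING) AS body")
             .writeStream
             .format("delta")
             .option("checkpointLocation", "/mnt/checkpoints/telemetry")
             .toTable("bronze.telemetry"))
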
Benefits
Meal/food allowance: R$38/day (flexible card)
Flexible allowance: R$210/month (flexible card)
TotalPass
Health and dental insurance
Life insurance
Profit-sharing (PLR)
Birthday day off
Training and certifications covered by Aggrandize
Employee referral bonus program
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, SQL, PySpark, ETL, ELT, Git, Databricks, Azure Data Factory, NoSQL databases, Linux