
Job Level
Senior
Tech Stack
Airflow, Cloud, Kafka
About the role
- Develop and maintain data models and structures that enable efficient querying and analysis
- Design, develop, and maintain data pipelines to transform and process large volumes of data from various sources, while adding business context and semantics to the data
- Implement automated data quality checks to ensure accuracy and consistency across the whole data life cycle (see the pipeline sketch after this list)
- Collect user data from various sources and in multiple formats (photo, video, text, audio), then process and analyze it to improve user experience, recommendations, business insights, and platform quality
- Use data and software engineering principles to solve large-scale production challenges
- Collaborate with the CARE team to build AI-powered solutions, engage with engineering and product teams, and present solutions to management
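For illustration only, here is a minimal sketch of the kind of pipeline with an automated quality check described above. It assumes Apache Airflow 2.4+; the DAG name, task names, and check logic are hypothetical placeholders, not this role's actual stack.

```python
# Minimal sketch (hypothetical names): a daily pipeline that transforms data,
# then runs an automated data-quality gate before the data is published.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_events(**context):
    # Placeholder: transform raw events and attach business context/semantics.
    pass


def check_row_counts(**context):
    # Placeholder quality check: fail the run if the day's load looks empty.
    rows_loaded = 1  # stub; in practice, query the warehouse for today's count
    if rows_loaded <= 0:
        raise ValueError("Data-quality check failed: no rows loaded")


with DAG(
    dag_id="events_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    quality_check = PythonOperator(task_id="quality_check", python_callable=check_row_counts)
    transform >> quality_check  # the quality gate runs after the transform step
```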
Requirements
- Experience creating robust and scalable data models from business requirements
- Experience working collaboratively to collect, prepare, and analyze complex business data
- Ability to engage engineering and product teams to elicit requirements and present solutions to top management
- Knowledge of data pipeline tools and technologies (e.g., Airflow, EMR, Kafka); a minimal consumer sketch follows this list
- Ability to collaborate with different teams to understand data needs and develop effective data pipelines
- Comfortable with big data concepts for ingesting, processing, and making data available to data scientists, business analysts, and product teams
- Comfortable maintaining data consistency across the entire data ecosystem
- Motivated to contribute to a data-driven culture and see the impact of your work reflected company-wide
- Prior experience in data modeling and data pipelines is mandatory
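Since the stack lists Kafka, here is a minimal consumer sketch for the streaming-ingestion side, assuming the kafka-python client; the topic name, broker address, and event fields are hypothetical.

```python
# Minimal sketch (hypothetical topic/broker): consume JSON user events from
# Kafka and hand each one to downstream processing.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                         # hypothetical topic name
    bootstrap_servers=["localhost:9092"],  # hypothetical broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",          # start from the oldest retained message
)

for message in consumer:
    event = message.value
    # Route the event to transformation / warehouse loading here.
    print(event.get("event_type"), event.get("user_id"))  # hypothetical fields
```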