MARGO

Data Engineer

Full-time
Location Type: Remote

Location: Poland

Salary

💰 PLN 190 - PLN 200 per hour

About the role

  • Own and architect the end-to-end reporting lifecycle in the diamond matching project
  • Inherit a replicated production environment
  • Transform raw Python application data into a high-performance analytics layer
  • Bridge the gap between backend data structures and business-ready visualizations

Requirements

Required experience

  • Building and maintaining a robust, end-to-end data and analytics platform
  • Developing and optimizing data pipelines that move data from online environments to the data warehouse and reporting layers
  • Writing high-quality, production-ready code in SQL and Python, using libraries such as pandas and SQLAlchemy
  • Troubleshooting and resolving issues related to cloud data configuration, synchronization, and data latency
  • Managing and processing diverse data types, both structured and unstructured (e.g. JSONB)
  • Designing, deploying, and maintaining automated business intelligence dashboards and internal monitoring tools

Required skills

  • Advanced SQL (window functions, CTEs, query optimization)
  • Advanced knowledge of dbt (Data Build Tool) or a similar transformation framework
  • Working knowledge of code versioning and the peer code review process
  • Working knowledge of deduplication and normalization strategies
  • Working knowledge of cloud infrastructure
  • Working knowledge of Python coding principles and data libraries
  • Knowledge of ORMs such as SQLAlchemy

Extra points for

  • Knowledge of ML tooling such as MLflow and SageMaker
  • Experience with AWS
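To give a concrete sense of the SQL skills listed above (CTEs, window functions, deduplication), here is a minimal illustrative sketch. The `orders` table, its columns, and the data are invented for this example and are not taken from the posting; the pattern shown (a CTE wrapping `ROW_NUMBER()` to keep the latest row per key) is a common deduplication strategy in warehouse and dbt models.

```python
import sqlite3

# Hypothetical "orders" table, used purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, loaded_at TEXT);
INSERT INTO orders VALUES
  (1, 'alice', 10.0, '2024-01-01'),
  (1, 'alice', 10.0, '2024-01-02'),  -- later duplicate of order 1
  (2, 'bob',   25.0, '2024-01-01');
""")

# CTE + ROW_NUMBER() window function: keep only the most recently
# loaded row for each order id, discarding earlier duplicates.
dedup_sql = """
WITH ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY id
               ORDER BY loaded_at DESC
           ) AS rn
    FROM orders
)
SELECT id, customer, amount, loaded_at
FROM ranked
WHERE rn = 1
ORDER BY id;
"""
rows = conn.execute(dedup_sql).fetchall()
print(rows)  # one row per order id, the latest load wins
```

In a dbt project the same `WITH ranked AS (...)` query would typically live in a staging model rather than application code.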
Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
Python, SQL, pandas, SQLAlchemy, Data Build Tool (dbt), data pipelines, data warehouse, data latency, deduplication, normalization