Point Wild (Formerly Pango Group)

Data Engineer

full-time

Location: 🇵🇱 Poland

Job Level

Mid-Level · Senior

Tech Stack

Airflow · AWS · Azure · Cloud · Cyber Security · Google Cloud Platform · Kafka · PySpark · Python · Spark · SQL · Unity

About the role

  • Build and optimize data ingestion pipelines on Databricks (batch and streaming) to process structured, semi-structured, and unstructured data.
  • Implement scalable data models and transformations leveraging Delta Lake and open data formats (Parquet, Delta).
  • Design and manage workflows with Databricks Workflows, Airflow, or equivalent orchestration tools.
  • Implement automated testing, lineage, and monitoring frameworks using tools like Great Expectations and Unity Catalog.
  • Build integrations with enterprise and third-party systems via cloud APIs, Kafka/Kinesis, and connectors into Databricks.
  • Partner with AI/ML teams to provision feature stores, integrate vector databases (Pinecone, Milvus, Weaviate), and support RAG-style architectures.
  • Optimize Spark and SQL workloads for speed and cost efficiency across multi-cloud environments (AWS, Azure, GCP).
  • Apply secure-by-design data engineering practices aligned with Point Wild’s cybersecurity standards and evolving post-quantum cryptographic frameworks.

Requirements

  • At least 5 years in Data Engineering with strong experience building production data systems on Databricks.
  • Expertise in PySpark, SQL, and Python.
  • Strong hands-on expertise across AWS services.
  • Strong knowledge of Delta Lake, Parquet, and lakehouse architectures.
  • Experience with streaming frameworks (Structured Streaming, Kafka, Kinesis, or Pub/Sub).
  • Familiarity with dbt for transformation and analytics workflows.
  • Strong understanding of data governance and security controls (Unity Catalog, IAM).
  • Exposure to AI/ML data workflows (feature stores, embeddings, vector databases).
  • Detail-oriented, collaborative, and comfortable working in a fast-paced innovation-driven environment.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (a plus).