
Senior Data Engineer – Real Time Data Processing
Minor Hotels Europe and Americas
Full-time
Location Type: Office
Location: Atlanta • 🇺🇸 United States
Salary
💰 $110,000 - $115,000 per year
Job Level
Senior
Tech Stack
Apache, AWS, Azure, Cloud, Docker, ETL, Google Cloud Platform, gRPC, Java, Jenkins, Kafka, Kubernetes, Microservices, Python, Redis
About the role
- Build reliable streaming applications using Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, and Schema Registry.
- Develop ETL/ELT pipelines for real-time ingestion, transformation, and distribution; implement windowing, joins, and stateful processing.
- Implement distributed caching and in-memory data grid integrations to reduce latency and improve throughput.
- Contribute to event gateway / event grid routing, schemas, topic design, ACLs, and dead-letter strategies.
- Write clean, testable code for microservices (Java/Python), focusing on reliability, idempotency, and observability.
- Automate CI/CD pipelines, containerization (Docker), and deployments to Kubernetes.
- Participate in data governance: tagging, metadata updates, lineage capture, schema evolution, and data quality checks.
- Monitor production systems, perform performance tuning, troubleshoot backpressure/lag, and improve SLO attainment.
- Collaborate on design docs, code reviews, and cross-team integrations.
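To make the windowing and stateful-processing responsibility above concrete, here is a minimal, framework-free Python sketch of event-time tumbling windows, the kind of aggregation Flink or Kafka Streams performs at scale. The function name, the `room-*` keys, and the event shape are all hypothetical illustrations, not part of the actual role or any library API.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_ms):
    """Group (key, event_time_ms) pairs into fixed, non-overlapping
    event-time windows and count events per key per window."""
    windows = defaultdict(int)  # (window_start_ms, key) -> count
    for key, event_time_ms in events:
        # Assign each event to the window containing its event time.
        window_start = (event_time_ms // window_size_ms) * window_size_ms
        windows[(window_start, key)] += 1
    return dict(windows)

# Hypothetical booking events keyed by room, timestamped in ms.
events = [
    ("room-101", 1_000),
    ("room-101", 4_000),
    ("room-202", 6_000),
    ("room-101", 11_000),
]
print(tumbling_window_counts(events, window_size_ms=10_000))
# {(0, 'room-101'): 2, (0, 'room-202'): 1, (10000, 'room-101'): 1}
```

Note the windows are keyed by *event time* (the timestamp carried on the record), not by when the record arrives; a production engine adds watermarks and checkpointed state on top of this same bucketing idea.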
Requirements
- 7+ years in software engineering, with 3+ years focused on real-time streaming or event-driven systems.
- Strong hands-on experience with Kafka (topics, partitions, consumer groups), Schema Registry, Kafka Connect, and at least one of Flink, Kafka Streams, or Hazelcast.
- Solid understanding of ETL/ELT concepts, event time vs. processing time, checkpointing, state management, and exactly-once/at-least-once semantics.
- Proficiency with microservices (Java/Python), APIs (REST/gRPC), Avro/JSON/Protobuf, and contract testing.
- Experience with Docker, Kubernetes, and CI/CD tools (GitHub Actions/Azure DevOps/Jenkins or similar).
- Familiarity with distributed caching (Redis, Hazelcast) and in-memory data grids.
- Experience with at least one cloud platform (Azure/AWS/GCP).
- Knowledge of observability (metrics, logs, traces) and resilience (retries, timeouts, DLQs, circuit breakers).
- Exposure to data governance, metadata catalogs, and lineage tooling; schema evolution and compatibility (backward/forward/full).
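The resilience and idempotency items above fit together: at-least-once delivery means duplicates and transient failures are normal, so consumers deduplicate by record ID, retry, and route poison messages to a dead-letter queue. A minimal Python sketch, assuming an in-memory idempotency store and a caller-supplied `handler` (both hypothetical stand-ins for durable infrastructure):

```python
def process_with_retries(records, handler, max_retries=3):
    """At-least-once consumer loop: skip already-seen record IDs
    (idempotency), retry transient failures, and send records that
    keep failing to a dead-letter queue instead of blocking the stream."""
    seen_ids = set()          # stand-in for a durable idempotency store
    dead_letter_queue = []    # stand-in for a real DLQ topic
    for record_id, payload in records:
        if record_id in seen_ids:
            continue          # duplicate delivery -> safe no-op
        for attempt in range(1, max_retries + 1):
            try:
                handler(payload)
                seen_ids.add(record_id)
                break
            except Exception:
                if attempt == max_retries:
                    dead_letter_queue.append((record_id, payload))
    return dead_letter_queue

def flaky(payload):
    if payload == "bad":
        raise ValueError("unprocessable")

dlq = process_with_retries([("r1", "ok"), ("r1", "ok"), ("r2", "bad")], flaky)
print(dlq)  # [('r2', 'bad')]
```

The duplicate `r1` is skipped rather than reprocessed, and `r2` lands in the DLQ after exhausting retries; production variants add backoff, timeouts, and a circuit breaker around the handler.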
Benefits
- Flexible work
- Healthcare including dental, vision, mental health, and well-being programs
- Financial well-being programs such as 401(k) and Employee Share Ownership Plan
- Paid time off and paid holidays
- Paid parental leave
- Family building benefits like adoption assistance, surrogacy, and cryopreservation
- Social well-being benefits like subsidized back-up child/elder care and tutoring
- Mentoring, coaching and learning programs
- Employee Resource Groups
- Disaster Relief
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Confluent Kafka, Apache Flink, Hazelcast, Kafka Streams, Kafka Connect, ETL, ELT, microservices, Java, Python
Soft skills
collaboration, code reviews, design documentation, troubleshooting, performance tuning