qode.world

Infrastructure Engineer, Kafka and GenAI

full-time

Origin: 🇺🇸 United States

Job Level

Mid-Level, Senior

Tech Stack

Apache, AWS, Azure, Cloud, Docker, Go, Google Cloud Platform, Grafana, Jenkins, Kafka, Kubernetes, Prometheus, Python, Spark, Splunk, Terraform

About the role

  • Incedo seeks a Kafka and GenAI Infrastructure Engineer to design, build, and maintain cloud-native streaming and GenAI infrastructure.
  • Location: Fort Mill, South Carolina, and Jersey City, New Jersey (Hybrid).
  • Design, deploy, and manage scalable Apache Kafka / Confluent Kafka clusters, topics, brokers, schema registry, connectors, and stream processing (Kafka Streams/KSQL).
  • Implement monitoring, alerting, and log aggregation for the Kafka ecosystem (Prometheus, Grafana, Splunk, etc.).
  • Ensure high availability, fault tolerance, and disaster recovery of Kafka clusters.
  • Work with LLMs and vector databases to support GenAI use cases and integrate Kafka with AI/ML pipelines for real-time inference.
  • Deploy and manage GenAI services (OpenAI, Hugging Face, Vertex AI, Amazon Bedrock) within secure, compliant infrastructure.
  • Provision infrastructure using Terraform / CloudFormation / Pulumi; manage deployments in AWS, Azure, or GCP using Kubernetes / ECS / Lambda.
  • Build CI/CD pipelines (GitLab/GitHub Actions/Jenkins) for Kafka and AI/ML components.
  • Ensure encryption, RBAC, and data protection policies across Kafka and GenAI workloads; collaborate with InfoSec and Governance teams.
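The "integrate Kafka with AI/ML pipelines for real-time inference" responsibility above typically means a consume → infer → produce loop. The sketch below illustrates that pattern only; it uses in-memory deques as stand-ins for Kafka topics (a real deployment would use a client such as confluent-kafka), and `fake_llm_inference` is a hypothetical placeholder for a call to a hosted model endpoint (OpenAI, Bedrock, Vertex AI).

```python
from collections import deque

# Stand-ins for Kafka topics; illustrative only. In production these
# would be consumed/produced via a Kafka client library.
events_topic = deque([{"user": "a", "text": "hello"},
                      {"user": "b", "text": "kafka streams"}])
predictions_topic = deque()

def fake_llm_inference(text: str) -> dict:
    # Placeholder for a hosted-model call; returns a mock score so the
    # sketch is runnable without network access.
    return {"text": text, "score": len(text) % 10}

def process_stream(source: deque, sink: deque) -> None:
    # Consume each event, run inference, publish the enriched record.
    while source:
        event = source.popleft()
        result = fake_llm_inference(event["text"])
        sink.append({"user": event["user"], **result})

process_stream(events_topic, predictions_topic)
```

In a real pipeline the same loop would also handle schema validation (via the schema registry mentioned above), retries, and dead-letter topics for failed inference calls.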

Requirements

  • 5+ years of experience with Apache Kafka / Confluent Platform
  • 3+ years of experience with cloud infrastructure (AWS, Azure, or GCP)
  • Familiarity with GenAI / LLM integration (OpenAI, LangChain, vector stores)
  • Strong understanding of stream processing and event-driven architecture
  • Experience with Infrastructure as Code (IaC) and CI/CD pipelines
  • Experience with containerization tools: Docker, Kubernetes
  • Proficiency in scripting: Python, Bash, or Go
  • (Preferred) Experience integrating Kafka with AI/ML platforms or MLOps tools
  • (Preferred) Familiarity with Databricks, Apache Flink, or Spark Structured Streaming
  • (Preferred) Exposure to data governance and lineage tools (e.g., Apache Atlas, Collibra)
  • (Preferred) Financial industry experience