Tech Stack
Airflow, Apache, AWS, Azure, Cloud, ETL, Google Cloud Platform, Grafana, Java, Kafka, Pandas, PySpark, Python, Spark, SQL, Tableau, TCP/IP
About the role
- Own and maintain critical parts of ClickHouse's data engineering ecosystem, including drivers, SDKs, and connectors.
- Own the full lifecycle of data framework integrations, from the core database driver to SDKs and connectors.
- Build tools enabling Data Engineers to harness ClickHouse's speed and scale for real-time analytical workloads.
- Collaborate closely with the open-source community, internal teams, and enterprise users to ensure JVM integrations meet performance and reliability standards.
- Impact the processing of massive datasets for real-time analytics and observability systems.
Requirements
- 6+ years of software development experience focused on building and delivering high-quality, data-intensive solutions.
- Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam.
- Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect.
- Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases.
- Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling.
- Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns.
- Outstanding written and verbal communication skills.
- Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire (a brief illustrative sketch follows this list).
- Passion for open-source development.
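To give a flavor of the JDBC and concurrency skills listed above, here is a minimal, purely illustrative sketch, not ClickHouse's actual driver or connector code. It assumes a ClickHouse server reachable at the hypothetical URL jdbc:clickhouse://localhost:8123/default, a JDBC driver for it on the classpath, and a hypothetical events(id, type) table; it batches inserts per partition and fans them out across a small thread pool.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BatchInsertSketch {

    // Hypothetical connection string; adjust to your own ClickHouse deployment.
    private static final String JDBC_URL = "jdbc:clickhouse://localhost:8123/default";

    // Hypothetical row shape matching an events(id, type) table.
    record Event(long id, String type) {}

    public static void main(String[] args) throws InterruptedException {
        // Two partitions of rows, each flushed by its own writer task.
        List<List<Event>> partitions = List.of(
                List.of(new Event(1, "page_view"), new Event(2, "click")),
                List.of(new Event(3, "page_view")));

        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (List<Event> partition : partitions) {
            pool.submit(() -> writeBatch(partition));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static void writeBatch(List<Event> rows) {
        String sql = "INSERT INTO events (id, type) VALUES (?, ?)";
        try (Connection conn = DriverManager.getConnection(JDBC_URL);
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            for (Event row : rows) {
                stmt.setLong(1, row.id());
                stmt.setString(2, row.type());
                stmt.addBatch();        // buffer rows client-side
            }
            stmt.executeBatch();        // one round trip per batch, not per row
        } catch (SQLException e) {
            throw new RuntimeException("Batch insert failed", e);
        }
    }
}
```

The design point is simply that batching amortizes network round trips while a bounded executor keeps concurrency (and connection count) under control, the kind of throughput trade-off this role deals with at much larger scale.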