Tech Stack
AWS · Cloud · Cyber Security · Distributed Systems · Java · Kubernetes · Python · Scala · Spark
About the role
- Collaborate with a senior Agile Scrum team to design, develop, and maintain large-scale, cloud-based data processing pipelines and backend components.
- Work with cutting-edge technologies including Spark, Kubernetes, AWS, and modern data platforms such as Databricks and Snowflake.
- Design and implement scalable, cost-effective solutions that deliver high performance and are easy to maintain.
- Tackle complex, high-scale problems and drive performance optimization and cost-efficiency across the data pipeline.
- Partner with engineers across Hunters' R&D and Product teams to enhance our platform and provide capabilities for internal and external users to build data transformations and detection pipelines at scale.
- Build robust monitoring and observability solutions to ensure full visibility across all stages of data processing.
- Stay current with trends in big data processing and distributed computing.
- Contribute to code quality through regular reviews and adherence to best practices.
Requirements
- 4+ years of experience as a Backend Engineer
- 3+ years of hands-on experience with Scala/Python/Java and cloud architecture (EMR/K8s)
- Deep technical expertise in distributed systems, stream processing, and data modeling of large data sets
- Proven track record of delivering scalable, secure systems in a fast-paced environment
- Experience with data governance practices, data security, and performance & cost optimization
- Experience with containers and AWS services such as S3 and EKS
- Strong problem-solving skills and ability to work independently
- A team player with excellent communication skills
- B.Sc. in Computer Science or an equivalent degree
Advantages
- Experience with Big Data frameworks such as Spark
- Experience with modern data lakes/warehouses such as Snowflake and Databricks
- Production experience working with SaaS environments
- Experience in data modeling