Tech Stack
Apache, Cloud, Docker, ETL, Google Cloud Platform, Kubernetes, Pandas, Python, SQL
About the role
- Collaborate with cross-functional teams to define requirements and deliver reliable solutions.
- Design, develop, and maintain data pipelines and ETL/ELT processes using Python.
- Build and optimize scalable, high-performance data applications.
- Develop and manage real-time streaming pipelines using Pub/Sub or Apache Beam.
- Participate in code reviews, architecture discussions, and continuous improvement initiatives.
- Monitor, troubleshoot, and optimize production data systems for reliability and performance.
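For candidates unfamiliar with the day-to-day work, the ETL/ELT responsibility above might look something like this minimal Pandas sketch (column names and data are hypothetical, for illustration only):

```python
import pandas as pd

def run_etl(records):
    """Illustrative extract-transform-load step using Pandas."""
    # Extract: build a DataFrame from raw records
    df = pd.DataFrame(records)
    # Transform: drop rows with missing amounts, then aggregate per user
    df = df.dropna(subset=["amount"])
    summary = df.groupby("user_id", as_index=False)["amount"].sum()
    # Load: a real pipeline would write this to a warehouse table
    return summary

result = run_etl([
    {"user_id": 1, "amount": 10.0},
    {"user_id": 1, "amount": 5.0},
    {"user_id": 2, "amount": None},
])
```

In production this logic would typically run inside an orchestrated pipeline with tests, monitoring, and CI/CD, per the requirements below.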
Requirements
- 5+ years of professional experience in software or data engineering using Python.
- Strong understanding of software engineering best practices (testing, version control, CI/CD).
- Proven experience building and optimizing ETL/ELT pipelines and data workflows.
- Proficiency in SQL and database concepts.
- Experience with data processing frameworks (e.g., Pandas).
- Understanding of software design patterns and scalable architecture principles.
- Experience with cloud platforms (GCP preferred).
- Knowledge of CI/CD pipelines and Infrastructure as Code tools.
- Familiarity with containerization (Docker, Kubernetes).
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Excellent problem-solving, analytical, and communication skills.
Benefits
- Competitive salary and a strong insurance package.
- Extensive learning and development resources.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, ETL, ELT, SQL, Pandas, data engineering, data pipelines, streaming pipelines, software engineering best practices, scalable architecture
Soft skills
problem-solving, analytical skills, communication skills