Tech Stack
BigQuery, Cloud, Java, Postgres, Python, SQL
About the role
- Design, build, and optimize robust, scalable data pipelines, leading the migration from legacy systems to our modern, AI-centric platform.
- Evolve our data models and schemas to better support complex analytics, AI training, and fine-tuning workloads.
- Collaborate with AI/ML teams to productionize models, streamline training data delivery, and support the development of sophisticated agentic systems.
- Empower the organization by partnering with BI developers and analysts to design highly efficient queries and unlock new insights.
- Champion data governance and compliance, ensuring our data handling practices remain secure and trustworthy as we innovate.
Requirements
- Proven experience (4+ years) in a data engineering role, with a track record of building and managing complex data systems.
- Deep expertise in SQL and query optimization.
- Hands-on experience with cloud data warehouses and databases, specifically Google BigQuery and CloudSQL (PostgreSQL).
- Programming experience with Python or Java.
- A proactive, self-motivated, and self-managed mindset, well suited to a fully remote environment with a high degree of autonomy.
- Excellent communication and documentation skills; you can clearly articulate complex technical concepts to diverse audiences.
- The ability to work a flexible schedule and the readiness to respond to occasional off-hours emergencies.
Benefits
- Competitive pay rates
- Fully remote work environment
- Self-managed time off
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, SQL, query optimization, cloud data warehouses, Google BigQuery, CloudSQL, PostgreSQL, Python, Java, data pipelines
Soft skills
proactive mindset, self-motivated, communication skills, documentation skills, collaboration, flexibility, autonomy