
Staff Data Engineer
PayJoy
full-time
Location Type: Hybrid
Location: San Francisco • California • 🇺🇸 United States
Salary
💰 $270,932 - $304,288 per year
Job Level
Lead
Tech Stack
Airflow, Apache, AWS, Azure, Cloud, ETL, Google Cloud Platform, Kafka, MySQL, Postgres, PySpark, Python, Spark, SQL, Terraform, Unity
About the role
- Architect and Build Data Pipelines: Build, optimize, and maintain reliable, scalable, and efficient data pipelines for both batch and real-time data processing.
- Data Strategy: Develop and maintain a data strategy aligned with business objectives, ensuring data infrastructure supports current and future needs.
- Streaming Expertise: Lead the development of real-time ingestion pipelines using Kafka/Kinesis, and design data models optimized for streaming workloads (a minimal sketch of this pattern follows this list).
- Data Quality & Governance: Implement data quality checks, schema evolution, lineage tracking, and compliance using tools such as Unity Catalog and Delta Lake.
- Tool & Technology Selection: Evaluate and implement the latest data engineering tools and technologies that will best serve our needs, balancing innovation with practicality.
- Automation and CI/CD: Drive automation of pipeline deployments, testing, and monitoring using Terraform, CircleCI, or similar tools.
- Performance Tuning: Regularly review, refine, and optimize SQL queries across different systems to maintain peak performance. Identify and address bottlenecks, query performance issues, and resource utilization problems. Set up best practices and educate developers on what they should be doing in the software development lifecycle to ensure optimal performance.
- Database Administration: Manage and maintain production AWS RDS MySQL, Aurora, and PostgreSQL databases. Perform routine database operations, including backups, restores, and disaster recovery planning. Monitor database health, and diagnose and resolve issues in a timely manner.
- Knowledge and Training: Serve as the primary point of contact for database performance and usage related knowledge, providing guidance, training, and expertise to other teams and stakeholders.
- Monitoring & Troubleshooting: Implement monitoring solutions to ensure high availability and troubleshoot data pipeline issues in real-time.
- Documentation: Maintain comprehensive documentation of systems, pipelines, and processes for easy onboarding and collaboration.
- Mentorship & Leadership: Mentor other engineers, review PRs, and establish best practices in data engineering.
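
For context on the streaming and data-quality responsibilities above, here is a minimal, hypothetical PySpark Structured Streaming sketch of Kafka-to-Delta ingestion with a basic quality filter. The broker, topic, schema, and paths are illustrative placeholders, not PayJoy's actual systems.

```python
# Hypothetical sketch: real-time ingestion from Kafka into a Delta table.
# All names (topic, schema, paths) are placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("payments-ingest").getOrCreate()

event_schema = StructType([
    StructField("payment_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "payments")                     # placeholder topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*")
          .filter(col("payment_id").isNotNull()))           # basic data quality check

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/chk/payments")             # placeholder checkpoint path
 .outputMode("append")
 .start("/delta/payments"))                                 # placeholder table path
```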
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- 12+ years of experience in data engineering, with at least 3 years working in Databricks.
- Deep hands-on experience with Apache Spark (PySpark/SQL), Delta Lake, and Structured Streaming.
- Technical Expertise: Deep understanding of data engineering concepts, including ETL/ELT processes, data warehousing, big data technologies, and cloud platforms (e.g., AWS, Azure, GCP).
- Strong proficiency in Python, SQL, and data modeling for both OLTP and OLAP systems.
- Architectural Knowledge: Strong experience in designing and implementing data architectures, including real-time data processing, data lakes, and data warehouses.
- Tool Proficiency: Hands-on experience with data engineering tools such as Apache Spark, Kafka, Databricks, Airflow, and modern data orchestration frameworks.
- Innovation Mindset: A track record of implementing innovative solutions and reimagining data engineering practices.
- Experience with Databricks Workflows, Delta Live Tables (DLT), and Unity Catalog.
- Familiarity with stream processing patterns (exactly-once semantics, watermarking, checkpointing), illustrated in the sketch below.
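
As a brief illustration of those stream-processing patterns, the hypothetical sketch below applies an event-time watermark and writes through a checkpointed Delta sink; the source path and column names are assumed for the example.

```python
# Hypothetical sketch of the stream-processing patterns named above:
# event-time watermarking, windowed aggregation, and checkpoint-based recovery.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("watermark-demo").getOrCreate()

# Assume a streaming Delta source with `event_time` and `amount` columns (placeholder path).
events = (spark.readStream
          .format("delta")
          .load("/delta/payments"))

per_minute = (events
              .withWatermark("event_time", "10 minutes")    # tolerate 10 minutes of late data
              .groupBy(window(col("event_time"), "1 minute"))
              .sum("amount"))

# The checkpoint location lets Spark replay offsets and state after a failure,
# which is what enables end-to-end exactly-once delivery to the Delta sink.
(per_minute.writeStream
 .format("delta")
 .outputMode("append")
 .option("checkpointLocation", "/chk/per_minute")           # placeholder checkpoint path
 .start("/delta/per_minute_totals"))                        # placeholder output path
```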
Benefits
- 100% Company-funded health, dental, and vision insurance for employee and immediate family
- Company-funded employee life and disability insurance
- 3% employer 401k contribution
- Company holidays; 20 days of vacation; flexible sick leave
- Headphones, home office equipment, and wellness perks
- $2,000 USD annual Co-working Travel perk
- $2,000 USD annual Professional Development perk
- Commuter benefit
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data pipelines, real-time data processing, data strategy, data quality checks, SQL, ETL, data warehousing, data modeling, Apache Spark, Python
Soft skills
mentorship, leadership, communication, collaboration, innovation mindset