
Staff Data Engineer
Databricks
full-time
Location Type: Remote
Location: Remote • California • 🇺🇸 United States
Salary
💰 $158,400 - $221,775 per year
Job Level
Lead
Tech Stack
PySpark • Python • SQL
About the role
- As a Staff Data Engineer, you will be a critical partner to the Global GTM Strategy & Operations teams.
- You will design, build, and maintain scalable data models, curated reporting tables, and forecasts.
- You will work closely with cross-functional stakeholders, including Engineering, IT, Finance, Marketing, and Legal.
- You will lead the design and implementation of mission-critical data pipelines and robust infrastructure.
- You will tackle complex data challenges, from high-throughput ingestion to orchestration of the underlying systems.
- You will play a crucial role in advancing our data security, governance, and architectural standards.
Requirements
- 8+ years of experience as a Data Engineer working with B2B sales, marketing, or finance data (GTM experience highly preferred).
- You are technical and fluent in data, with advanced knowledge of Python and SQL.
- You have experience with GitHub workflows and working knowledge of PySpark.
- You can identify and troubleshoot data issues down to their root cause with minimal outside guidance or upfront knowledge of the pipeline.
- You have built for scale and have experience designing production-ready data models (see the sketch after this list).
- You are passionate about applying AI and can structure data models and tables optimized for AI readiness.
- You thrive in partnership with the business and understand how it works.
- You excel in collaborative environments and possess a service-oriented mindset.
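For illustration only: a minimal PySpark sketch of the kind of curated-reporting work described in the requirements above. It is not part of the job description, and every table and column name used here (raw_opportunities, curated_pipeline_summary, opportunity_id, amount_usd) is hypothetical.

# Minimal sketch, assuming a hypothetical raw_opportunities table exists in the metastore.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gtm_curated_reporting").getOrCreate()

# Ingest the raw B2B opportunity data (hypothetical schema).
raw = spark.table("raw_opportunities")

# Curate a reporting table: one row per region and fiscal quarter,
# with de-duplicated opportunities and summed pipeline value.
curated = (
    raw.dropDuplicates(["opportunity_id"])
       .withColumn("fiscal_quarter", F.date_trunc("quarter", F.col("close_date")))
       .groupBy("region", "fiscal_quarter")
       .agg(
           F.countDistinct("opportunity_id").alias("opportunity_count"),
           F.sum("amount_usd").alias("pipeline_amount_usd"),
       )
)

# Persist the curated table for downstream reporting and forecasting.
curated.write.mode("overwrite").saveAsTable("curated_pipeline_summary")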
Benefits
- Databricks is committed to fair and equitable compensation practices.
- The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed above.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, SQL, data modeling, data pipelines, data governance, data security, PySpark, data ingestion, AI readiness, troubleshooting
Soft skills
collaboration, service-oriented mindset, partnership, problem-solving, communication