Develop and maintain foundational data models that serve as the single source of truth for analytics across the organization.
Translate business requirements into scalable data models, dashboards, and tools to empower stakeholders.
Partner with engineering, data science, product, and business teams to ensure alignment on priorities and data solutions.
Build frameworks, tools, and workflows that maximize efficiency for data users while maintaining high standards of data quality and performance.
Understand data flows end to end, from creation through ingestion, transformation, and delivery; take accountability for fixing issues anywhere in the stack.
Deliver business value by building data models, pipelines, and insights to support experimentation, ad hoc analysis, and product optimization.
Develop abstractions (UDFs, Python packages, dashboards) and internal frameworks for scalable data workflows and data apps.
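As a concrete (and deliberately simplified) illustration, here is a minimal sketch of the kind of small Python helper such an internal package might export; all function names and logic are hypothetical, not an actual internal API:

```python
# Hypothetical helpers of the kind an internal analytics package might export.
# All names and logic here are illustrative.


def normalize_revenue(amount_cents: int, fx_rate: float = 1.0) -> float:
    """Convert integer cents into a currency-adjusted dollar amount."""
    return (amount_cents / 100.0) * fx_rate


def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
    """Division that returns a default instead of raising on a zero denominator,
    a common guard when computing rates and ratios in metrics code."""
    return numerator / denominator if denominator else default


if __name__ == "__main__":
    print(safe_divide(42, 0))          # -> 0.0 (no ZeroDivisionError)
    print(normalize_revenue(123_456))  # -> 1234.56
```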
Requirements
Data Modeling Expertise: Strong understanding of best practices for designing modular and reusable data models (e.g., star schemas, snowflake schemas); see the star-schema sketch after this list.
Prompt Design and Engineering: Expertise in designing, refining, and optimizing prompts for LLMs (e.g., GPT) to improve response accuracy, relevance, and performance for internal tools and use cases; see the prompt-templating sketch after this list.
Advanced SQL: Proficiency in advanced SQL techniques for data transformation, querying, and optimization; the star-schema sketch after this list also demonstrates a CTE and a window function.
Intermediate to Advanced Python: Proficiency in scripting and automation, with experience in object-oriented programming (OOP) and building scalable frameworks; see the OOP sketch after this list.
Collaboration and Communication: Strong ability to translate technical concepts into business value for cross-functional stakeholders. Proven ability to manage projects and communicate effectively across teams.
Data Pipeline Development: Experience building, maintaining, and optimizing ETL/ELT pipelines using modern tools like dbt, Airflow, or similar; see the DAG sketch after this list.
Data Visualization: Proficiency in building polished dashboards using tools like Looker, Tableau, Superset, or Python visualization libraries (Matplotlib, Plotly); see the charting sketch after this list.
Development Tools: Familiarity with version control (Git/GitHub), CI/CD, and modern development workflows.
Data Architecture: Knowledge of modern data lake/warehouse architectures (e.g., Snowflake, Databricks) and transformation frameworks.
Business Acumen: Ability to understand and address business challenges through analytics engineering.
Data Savvy: Familiarity with statistics and probability.
Bonus Skills: Experience with cloud platforms (e.g., AWS, GCP). Familiarity with Docker or Kubernetes.
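The sketches below illustrate several of the skills above in miniature. First, data modeling and advanced SQL: a self-contained star-schema example using SQLite (table names and data are invented for illustration; a production warehouse such as Snowflake would differ in dialect and scale):

```python
# Minimal star schema in SQLite: one dimension, one fact, and an analytical
# query using a CTE and a window function. Requires SQLite >= 3.25 for
# window-function support (bundled with any recent Python).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
# Fact table: one row per sale, keyed to the dimension.
cur.execute(
    "CREATE TABLE fct_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL)"
)

cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "widget"), (2, "gadget")])
cur.executemany(
    "INSERT INTO fct_sales VALUES (?, ?, ?)",
    [(1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0)],
)

# The CTE aggregates the star-schema join; the window function computes each
# product's share of total revenue without a second pass over the data.
query = """
WITH product_revenue AS (
    SELECT p.name AS name, SUM(s.amount) AS revenue
    FROM fct_sales s
    JOIN dim_product p USING (product_id)
    GROUP BY p.name
)
SELECT name, revenue, revenue / SUM(revenue) OVER () AS revenue_share
FROM product_revenue
ORDER BY revenue DESC
"""
for row in cur.execute(query):
    print(row)  # ('widget', 30.0, 0.857...), ('gadget', 5.0, 0.142...)
conn.close()
```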
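Next, prompt design: a hedged sketch of prompt templating for an internal LLM tool. The template text, variable names, and truncation guard are assumptions for illustration, not a specific product's API:

```python
# Illustrative prompt template for a grounded internal Q&A tool. No model
# call is made here; the sketch only shows prompt construction.
PROMPT_TEMPLATE = """You are a data assistant for internal analytics.
Answer using ONLY the context below. If the context is insufficient, say so.

Context:
{context}

Question: {question}
Answer:"""


def build_prompt(context: str, question: str, max_context_chars: int = 4000) -> str:
    """Assemble a grounded prompt, truncating context to bound token usage."""
    return PROMPT_TEMPLATE.format(
        context=context[:max_context_chars],
        question=question.strip(),
    )


if __name__ == "__main__":
    print(build_prompt("Monthly revenue: Jan=10k, Feb=12k.", "How did Feb compare to Jan?"))
```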
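For object-oriented scripting, a minimal sketch of a template-method job framework; the class and job names are hypothetical:

```python
# Base class standardizes logging and error handling; concrete jobs supply
# only their own run() logic. All names here are illustrative.
import logging
from abc import ABC, abstractmethod

logging.basicConfig(level=logging.INFO)


class BaseJob(ABC):
    """Template method: shared orchestration around per-job logic."""

    def execute(self) -> None:
        logging.info("starting %s", type(self).__name__)
        try:
            self.run()
        except Exception:
            logging.exception("job %s failed", type(self).__name__)
            raise
        logging.info("finished %s", type(self).__name__)

    @abstractmethod
    def run(self) -> None: ...


class RefreshDashboardExtracts(BaseJob):
    def run(self) -> None:
        logging.info("refreshing extracts...")  # placeholder for real logic


if __name__ == "__main__":
    RefreshDashboardExtracts().execute()
```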
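For pipeline orchestration, a minimal Airflow-style DAG sketch (assumes apache-airflow 2.x is installed; the task names and callables are placeholders, not a real production pipeline):

```python
# Two-step ELT skeleton: extract then transform. The `schedule` argument is
# Airflow 2.4+ syntax; older 2.x releases use `schedule_interval` instead.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    print("pulling raw events...")  # placeholder extract step


def transform() -> None:
    print("building staging models...")  # placeholder transform step


with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform
```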
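Finally, visualization: a short Matplotlib sketch of the kind of chart a dashboard might embed; the data values are invented:

```python
# Bar chart of illustrative monthly revenue, saved as a static image.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar"]
revenue = [10_000, 12_000, 11_500]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(months, revenue)
ax.set_title("Monthly revenue (illustrative data)")
ax.set_ylabel("USD")
fig.tight_layout()
fig.savefig("revenue.png")  # or plt.show() in an interactive session
```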
Benefits
Target bonus
Medical, dental, and vision insurance
401(k)
Team and company-wide offsites (attendance expected and fully supported)
Reasonable accommodations for individuals with disabilities
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data modeling, advanced SQL, Python, ETL, ELT, data visualization, prompt engineering, data pipeline development, object-oriented programming, data architecture