Market One

Senior Database Engineer

Full-time

Location Type: Remote

Location: Colorado, United States

Salary

💰 $150,000 - $180,000 per year

About the role

  • Analyze and optimize production data solutions to identify and resolve issues related to performance, locking, and scalability
  • Write and optimize complex SQL across the full stack — Spark SQL for distributed transformations, PostgreSQL for transactional and analytical workloads, and Delta Lake for versioned, ACID-compliant table management
  • Design and build large-scale Spark and Azure Data Factory pipelines for batch and streaming ingestion, transformation, and feature engineering — leveraging both PySpark and Spark SQL for distributed processing at large scale
  • Design and develop data lake architectures including Medallion Architecture to support advanced analytics and large-scale data ingestion
  • Build and maintain production-grade data pipelines for efficient data movement and transformation across systems; manage Delta table lifecycles, schema evolution, Z-ordering, time travel, and Change Data Feed to support reliable, performant analytics, OLTP, and ML workloads
  • Design and operate hybrid storage patterns combining PostgreSQL for transactional workloads — with optimized schemas, indexes, CTEs, window functions, and partitioning — alongside Delta Lake Lakehouse layers for analytical and ML workloads
  • Design and implement reporting solutions that deliver actionable insights to business stakeholders
  • Communicate database and data architecture designs to business and technical audiences, including business users, program sponsors, database administrators, ETL and BI developers
  • Evaluate potential technology/tool solutions that meet business needs and facilitate internal and external discussions towards desirable outcomes
  • Collaborate with solution architects and project resources on systems integration and compatibility, while acting as a leader in coaching, training, and providing guidance
  • Create functional and technical documentation related to data architecture and business intelligence solutions
  • Provide technical consulting to application development teams during application design and development for highly complex or critical projects
  • Design data governance procedures to ensure compliance with internal and external regulations
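
Several of the responsibilities above center on Delta Lake table lifecycle management. As a rough illustration only (the table and column names below are hypothetical, not taken from the posting), the recurring maintenance work might look like the following on a Databricks cluster:

```sql
-- Compact small files and co-locate rows on a frequently filtered column:
OPTIMIZE events ZORDER BY (customer_id);

-- Reclaim storage from files no longer referenced by the transaction log
-- (subject to the default 7-day retention threshold):
VACUUM events;

-- Time travel: query an earlier version of the table by version number:
SELECT COUNT(*) FROM events VERSION AS OF 42;

-- Change Data Feed: read row-level changes between two versions
-- (requires delta.enableChangeDataFeed = true on the table):
SELECT * FROM table_changes('events', 42, 45);
```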

Requirements

  • 7+ years in data engineering, big data platforms, or a related discipline with hands-on production experience at scale
  • Located in Eastern or Central time zone; you will work extensively with a team member in the UK
  • Proven experience analyzing and tuning production database systems for performance and reliability
  • Expert-level SQL skills — complex joins, CTEs, window functions, query plan analysis, and optimization across both OLTP (PostgreSQL) and distributed engines (Spark SQL, Databricks SQL, Delta Lake)
  • Hands-on experience with data lake technologies and data pipeline frameworks (e.g., Azure Data Lake, Azure Data Factory, Databricks)
  • Deep expertise in Apache Spark — DataFrames, Spark SQL, UDFs, partitioning, broadcast strategies, and hands-on performance tuning experience
  • Data warehouse and visualization experience, including strong logical, physical, and dimensional modeling skills
  • Solid command of Delta Lake internals — transaction log, schema enforcement, schema evolution, time travel, CDF, Z-ordering, liquid clustering, and OPTIMIZE/VACUUM operations
  • Strong PostgreSQL experience — schema design, indexing strategies, partitioning, EXPLAIN/ANALYZE tuning, and extensions including pgvector for similarity search
  • Strong Python skills — PySpark, pandas, async programming, building production data utilities
  • Strong understanding of reporting and BI solution design, including Power BI or similar tools
  • Experience designing technology roadmaps and planning the transition from the current architecture to a target future-state architecture
  • Excellent verbal and written communication skills to document and present data models, strategies, standards, and concepts to both business and IT audiences
  • Experience with the following tech & tools:
      – SQL Engines: Azure SQL Server, Spark, Cosmos DB, PostgreSQL, Elastic
      – Azure Technologies: Cosmos DB, SQL Database, Analytics, Azure Databricks, Data Factory, Fabric, Power BI, Azure Data Lake
      – ETL Tools: Azure Data Factory, Azure Databricks, Azure Stream Analytics
      – Lakehouse: Delta Lake, Delta Live Tables
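
The PostgreSQL requirements above (EXPLAIN/ANALYZE tuning, indexing strategies, CTEs, window functions) might look like the following in practice; this is a sketch against a hypothetical `orders` table, with all names invented for illustration:

```sql
-- Inspect the actual execution plan, timings, and buffer usage of a query:
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, SUM(total) AS spend
FROM orders
WHERE placed_at >= '2024-01-01'
GROUP BY customer_id;

-- A partial index can target only the hot date range the query filters on:
CREATE INDEX idx_orders_recent ON orders (placed_at, customer_id)
  WHERE placed_at >= '2024-01-01';

-- CTE + window function: latest order per customer without a self-join:
WITH ranked AS (
  SELECT o.*,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY placed_at DESC) AS rn
  FROM orders o
)
SELECT * FROM ranked WHERE rn = 1;
```
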

Benefits

  • Competitive base salary: $150,000-$180,000/year
  • Flexible vacation policy – take the time you need to recharge
  • Comprehensive health, vision & dental insurance
  • 401k with company contribution
  • Opportunity for career progression with plenty of room for personal growth

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
SQL, Spark SQL, PostgreSQL, Delta Lake, PySpark, Data Lake, Data Pipeline, Data Governance, Data Modeling, Performance Tuning
Soft Skills
Communication, Collaboration, Leadership, Documentation, Coaching, Training, Analytical Thinking, Problem Solving, Interpersonal Skills, Stakeholder Engagement