Vulcury

Data Engineer – Pipelines, Structured Markup

Full-time

Location Type: Remote

Location: United States

About the role

  • Build and maintain ingestion pipelines (Python-based ETL/ELT)
  • Design structured transformation workflows using dbt, SQLMesh, or equivalent
  • Convert unstructured transcripts and documents into normalized database records
  • Maintain PostgreSQL architecture (structured tables, JSONB, indexing strategy)
  • Develop attribute extraction frameworks for technical, commercial, and risk signals
  • Ensure data quality, consistency, and lineage from raw interaction to structured output
  • Collaborate with AI/ML engineers to ensure clean model inputs
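
As a hedged illustration of the "transcripts into normalized records" and "attribute extraction" responsibilities above, here is a minimal sketch in Python. All names (`InteractionRecord`, `parse_transcript`, the signal keyword sets) are invented for the example and are not part of any stated Vulcury system.

```python
import re
from dataclasses import dataclass, field

# Hypothetical keyword sets for attribute extraction; a real framework
# would likely use richer models, not bare word matching.
TECH_SIGNALS = {"api", "latency", "schema"}
RISK_SIGNALS = {"churn", "cancel", "outage"}

@dataclass
class InteractionRecord:
    speaker: str
    utterances: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)

def parse_transcript(raw: str) -> list[InteractionRecord]:
    """Turn 'Speaker: text' transcript lines into per-speaker records
    tagged with extracted signal attributes."""
    records: dict[str, InteractionRecord] = {}
    for line in raw.strip().splitlines():
        m = re.match(r"^(\w+):\s*(.+)$", line.strip())
        if not m:
            continue  # skip malformed lines rather than failing the batch
        speaker, text = m.group(1), m.group(2)
        rec = records.setdefault(speaker, InteractionRecord(speaker))
        rec.utterances.append(text)
        words = {w.lower().strip(".,!?") for w in text.split()}
        if words & TECH_SIGNALS:
            rec.attributes["technical"] = True
        if words & RISK_SIGNALS:
            rec.attributes["risk"] = True
    return list(records.values())
```

The design point: the parser never raises on a garbled line, so one bad utterance cannot sink an entire ingestion batch, which matters for the data-quality and lineage goals above.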

Requirements

  • Strong Python (data pipelines, orchestration)
  • Advanced SQL (PostgreSQL preferred)
  • Experience with ETL/ELT frameworks (dbt, Airflow, SQLMesh, etc.)
  • Experience handling semi-structured data (JSON, transcripts, document parsing)
  • Strong schema design and normalization skills
  • Familiarity with cloud storage systems (S3 or equivalent)
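
To make the schema-design and semi-structured-data requirements concrete, here is a small hedged sketch: splitting a nested JSON payload into flat rows for two hypothetical tables, with unmodelled fields preserved as a JSON string (the PostgreSQL JSONB-column pattern). The payload shape, table names, and `normalize_payload` helper are all invented for this example.

```python
import json

def normalize_payload(payload: dict) -> dict[str, list[dict]]:
    """Split one nested customer payload into rows for a 'customers'
    table and a child 'contacts' table, keyed by the customer id."""
    customer_id = payload["id"]
    contacts = payload.pop("contacts", [])  # nested list -> child table
    known = {"id", "name"}
    # Fields without a dedicated column land in a catch-all JSON string,
    # mirroring a JSONB column that keeps the raw remainder queryable.
    extra = {k: v for k, v in payload.items() if k not in known}
    return {
        "customers": [{
            "id": customer_id,
            "name": payload.get("name"),
            "extra_jsonb": json.dumps(extra, sort_keys=True),
        }],
        "contacts": [
            {"customer_id": customer_id, "email": c.get("email")}
            for c in contacts
        ],
    }
```

Keeping a typed column for every attribute the pipeline actually filters or joins on, and a JSONB remainder for everything else, is the usual compromise between strict normalization and schema churn.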
Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
Python, SQL, ETL, ELT, dbt, SQLMesh, PostgreSQL, data normalization, schema design, data quality