Higher Logic

Data Solutions Engineer

full-time

Location: 🇺🇸 United States

Job Level

Mid-Level, Senior

Tech Stack

ETL, Python, SQL

About the role

  • Support Higher Logic’s strategic data initiatives by designing, developing, and scaling data pipelines and models that enable analytics, business intelligence, and AI workflows using both structured and unstructured data.
  • Build and maintain scalable ETL/ELT pipelines to ingest structured and unstructured data into the data warehouse.
  • Develop and optimize data models to support reporting, dashboards, and operational insights.
  • Write clean, efficient, and reusable SQL and Python code to transform raw data into business-ready outputs.
  • Collaborate with analytics and business stakeholders to understand data requirements and deliver practical solutions.
  • Monitor pipeline performance and ensure reliability, accuracy, and timeliness of data flows.
  • Design frameworks for triggering AI workflows based on business rules and data events.
  • Serve as a subject matter expert on data best practices, including schema design, data governance, and data security.
  • Partner with cross-functional teams to align data assets with enterprise goals, AI enablement, and automation strategies.
  • Drive adoption of new data tools and methodologies, such as DBT, Iceberg, or similar modern data stack technologies.
  • Report directly to the Chief Data Officer and work closely with internal stakeholders across departments to ensure data is reliable, accessible, and actionable.

Requirements

  • Proficiency in SQL and Python for data processing and transformation.
  • Familiarity with modern data stack tooling (e.g., dbt, Apache Iceberg, Databricks, or similar).
  • Experience working with both structured and unstructured data sources.
  • Understanding of data quality, observability, and monitoring principles.
  • Strong communication and collaboration skills to support cross-functional work.
  • Ability to work independently and adapt to evolving data needs.