Data Engineer II

Honeywell

full-time

Location Type: Hybrid

Location: Charlotte • North Carolina • 🇺🇸 United States

Job Level

Mid-Level, Senior

Tech Stack

Azure, Cassandra, ETL, Hadoop, HBase, HDFS, MongoDB, NoSQL, PySpark, Python, SFDC, Spark, SQL, Unity

About the role

  • Design & build pipelines to ingest, transform, and publish structured/unstructured data from SFDC, EDW, ADLS, Event Hub, and APIs into Databricks/Snowflake, following Delta Lake and Unity Catalog standards
  • Model data (star/snowflake, CDC, SCD, dimensional views) to support analytics (e.g., commercial pipeline metrics, quote/discount modeling)
  • Operationalize ML/analytics pipelines including bronze→silver→gold processing, joins with model/market indicators, and serving outputs to applications/APIs (see the sketch after this list)
  • Harden platforms: CI/CD with Azure DevOps; monitor jobs/clusters; optimize PySpark/SQL performance; enforce data governance (quality, privacy, lineage, access)
  • Partner & document: collaborate with product owners and data science; write runbooks and technical specs; contribute to weekly updates and stewardship forums
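
As a quick illustration of the bronze-to-silver Delta Lake processing mentioned above, here is a minimal PySpark sketch; the table and column names (bronze.sfdc_opportunities, opportunity_id, ingested_at, amount) are hypothetical placeholders, not Honeywell's actual schema, and it assumes a Databricks workspace with Unity Catalog.

    # Minimal bronze -> silver step in PySpark with Delta Lake; all names are hypothetical.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("bronze_to_silver_sketch").getOrCreate()

    # Raw SFDC opportunity records previously landed in a bronze Delta table.
    bronze = spark.table("bronze.sfdc_opportunities")

    # Silver-layer cleansing: cast types, then keep only the latest record per
    # opportunity (a simple CDC-style de-duplication).
    latest = Window.partitionBy("opportunity_id").orderBy(F.col("ingested_at").desc())
    silver = (
        bronze
        .withColumn("amount", F.col("amount").cast("double"))
        .withColumn("_rn", F.row_number().over(latest))
        .filter(F.col("_rn") == 1)
        .drop("_rn")
    )

    # Publish to a governed silver table registered in Unity Catalog.
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.sfdc_opportunities_clean")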

Requirements

  • Minimum of 4 years of experience in data engineering, ETL, or database development/administration
  • Hands-on experience with Azure Databricks, CI/CD & DevOps, and Snowflake
  • Strong Python, SQL, PySpark; comfort with both structured and unstructured data
  • Experience with Agile delivery
  • Bachelor’s degree in a technical discipline such as science, technology, engineering, or mathematics
  • Experience with at least one NoSQL store (e.g., HBase/Cassandra/MongoDB)
  • Familiarity with Hadoop ecosystem (HDFS, Spark), and data integration/ETL tools
  • Exposure to ML ops tooling (MLflow), AKS‑backed API services, and integration patterns between Databricks, Snowflake, and application layers
  • Demonstrated contributions to data quality/stewardship initiatives (lineage, metadata, GDM frameworks)
  • Clear communication and ability to present technical trade‑offs to stakeholders
  • Working knowledge of SFDC data model and commercial processes (opportunities, quotes, quote line items)

Benefits

  • Comprehensive benefits package including employer-subsidized Medical, Dental, Vision, and Life Insurance
  • Short-Term and Long-Term Disability
  • 401(k) match
  • Flexible Spending Accounts
  • Health Savings Accounts
  • EAP
  • Educational Assistance
  • Parental Leave
  • Paid Time Off (for vacation, personal business, sick time, and parental leave)
  • 12 Paid Holidays

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data engineering, ETL, database development, Python, SQL, PySpark, NoSQL, Hadoop, ML ops, data modeling
Soft skills
clear communication, collaboration, technical documentation, stakeholder presentation, data stewardship