L3Harris Technologies

Specialist, Data Engineer

Full-time

Location Type: Hybrid

Location: Melbourne, Florida, United States

Salary

$79,000 - $146,500 per year

About the role

  • Design, build, and maintain robust data pipelines to ensure reliable data flow across the enterprise.
  • Maintain data pipeline schedules, orchestrate workflows, and monitor the overall health of data pipelines to ensure continuous data availability.
  • Create, update, and optimize data connections, datasets, and transformations to align with business needs.
  • Troubleshoot and resolve data sync issues, ensuring consistent and correct data flow from source systems.
  • Collaborate with cross-functional teams to uphold data quality standards and ensure accurate data is available for use.
  • Utilize Palantir Foundry to establish data connections to source applications, extract and load data, and design complex logical data models that meet functional and technical specifications.
  • Develop and manage data cleansing, consolidation, and integration mechanisms to support big data analytics at scale.
  • Build visualizations using Palantir Foundry tools and assist business users with testing, troubleshooting, and documentation creation, including data maintenance guides.

Requirements

  • Bachelor's Degree and a minimum of 4 years of prior relevant experience
  • Graduate Degree and a minimum of 2 years of prior related experience
  • In lieu of a degree, a minimum of 8 years of prior related experience
  • 2+ years of experience designing and developing data pipelines in PySpark, Spark SQL, SQL, or Code Build.
  • 2+ years of experience building and deploying data synchronization schedules and maintaining data pipelines using Palantir Foundry.
  • 2+ years of experience with Data Pipeline development or ETL tools such as Palantir Foundry, Azure Data Factory, SSIS, or Python.
  • 2+ years of experience in Data Integration.
  • Strong understanding of Business Intelligence (BI) and Data Warehouse (DW) development methodologies.
  • Hands-on experience with the Snowflake Cloud Data Platform, including data architecture, query optimization, and performance tuning.
  • Proficiency in Python, PySpark, Pandas, Databricks, JavaScript, or other scripting languages for data processing and automation.
  • Experience with other ETL tools such as Azure Data Factory (ADF), SSIS, Informatica, or Talend is highly desirable.
  • Familiarity with connecting and extracting data from various ERP applications, including Oracle EBS, SAP ECC/S4, Deltek Costpoint, and more.
  • Experience with AI tools such as OpenAI, Palantir AIP, Snowflake Cortex or similar.

Benefits

  • health and disability insurance
  • 401(k) match
  • flexible spending accounts
  • EAP
  • education assistance
  • parental leave
  • paid time off
  • company-paid holidays

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data pipeline development, PySpark, Spark SQL, SQL, data synchronization, ETL tools, data integration, data architecture, query optimization, data processing

Soft Skills
collaboration, troubleshooting, documentation creation, data quality standards

Certifications
Bachelor's Degree, Graduate Degree