Purpose Financial

Senior Data Engineer

Full-time

Location Type: Office

Location: Greenville, South Carolina, United States

About the role

  • Design, develop, and maintain scalable and reliable data pipelines to ingest, transform, and load structured, semi-structured, and unstructured data from various sources into our data lake and warehouse environments.
  • Implement data integration solutions to consolidate data from disparate sources, including databases, APIs, streaming platforms, and third-party services (e.g., Snowpipe, Snowpark, Dynamo, Kafka).
  • Optimize data processing workflows for performance, efficiency, and scalability using distributed computing or parallel processing frameworks such as Fivetran, dbt, Snowpark, and Snowpipe.
  • Collaborate cross-functionally with IT and business stakeholders to understand data requirements, define data models, and develop solutions to support data services, reporting, and software development.
  • Partner with data, IT, and business teams to improve the design and development of metrics that enhance our analytic capabilities.
  • Implement data quality checks, data validation processes, and error handling mechanisms to ensure the accuracy, completeness, and reliability of data across all stages of the data lifecycle.
  • Support the design and maintenance of data schemas and metadata repositories to document data lineage, definitions, and dependencies for governance.
  • Support the development and maintenance of data governance policies, standards, and best practices to ensure compliance with data privacy regulations and industry standards.
  • Apply best practices for AWS and Snowflake architectures, data pipelines, and data models.
  • Monitor, troubleshoot, and optimize the performance and availability of data systems and infrastructure using monitoring and logging tools such as Prometheus.
  • Stay current with emerging technologies, tools, and trends in data engineering, cloud architectures, and cloud computing to evaluate their potential impact and relevance to our data platforms.
  • Coach and mentor junior data engineers.

Requirements

  • 10+ years of experience in data engineering, data pipelines, and data services required.
  • Familiarity with Agile/Scrum based development and methodology.
  • Strong proficiency in programming languages and frameworks such as Python, SQL, Spark, or Java, with experience in data manipulation, transformation, and analysis.
  • Expert-level experience with cloud-based data platforms and services such as AWS, Snowflake, and dbt.
  • Experience with distributed computing frameworks such as Apache Spark, Kafka, etc.
  • Proficiency in database systems, data warehousing, data patterns/architectures, and SQL query optimizations.
  • Familiarity with containerization and orchestration technologies such as Docker or Kubernetes.
  • Excellent problem-solving skills, attention to detail, and ability to work effectively in a fast-paced and collaborative environment.
  • Strong communication, interpersonal, and teamwork skills, with the ability to interact with stakeholders at all levels of the data team.

Benefits

  • Competitive Wages
  • Health/Life Benefits
  • Health Savings Account plus Employer Seed
  • 401(k) Savings Plan with Company Match
  • Paid Parental Leave
  • Company Paid Holidays
  • Paid Time Off including Volunteer Time
  • Tuition Reimbursement
  • Business Casual Environment
  • Rewards & Recognition Program
  • Employee Assistance Program
  • Office in downtown Greenville that offers free parking, onsite gym, free snacks/drinks

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, data pipelines, data services, Python, SQL, Spark, Java, AWS, Snowflake, dbt
Soft Skills
problem-solving, attention to detail, collaboration, communication, interpersonal skills, teamwork