
Data Engineer
Broadridge
Contract
Location Type: Remote
Location: Remote • New Jersey • 🇺🇸 United States
Salary
💰 $70 - $85 per hour
Job Level
Mid-Level • Senior
Tech Stack
Airflow • Amazon Redshift • AWS • Cloud • ETL • Greenplum • Informatica • Jenkins • Kafka • Kubernetes • .NET • Postgres • PySpark • Python • Splunk • SQL
About the role
- Own complex SQL at scale: design, refactor, and optimize Greenplum SQL functions and complex queries (see the first sketch after this list).
- Performance & concurrency: analyze plans, eliminate skew and lock contention, tune resource queues, and improve runtimes/SLOs across multi-tenant loads.
- Troubleshooting & continuous improvement: participate in issue resolution and ongoing optimization.
- SQL DevOps / CI/CD: implement versioning, automated testing, and deployments for schemas, stored procedures, and SQL artifacts (GitLab/Jenkins).
- Client onboarding pipelines: design high-throughput, observable Informatica ETL to land and validate onboarded client data files.
- Expand and analyze existing workflows.
- Streaming ingestion: build and operate real-time and micro-batch Kafka pipelines (Connect, schema registry, DLQ patterns) that feed Greenplum (see the Kafka sketch after this list).
- Platform integration: integrate with the BR Integrated platform by ingesting data from streaming services and using AWS orchestration and ETL tools.
- Data quality & controls: reconciliation, validation rules, auditability, and error handling to meet regulatory and compliance requirements (see the reconciliation sketch after this list).
- Observability & support: dashboards and alerts (Datadog/Splunk/CloudWatch), on-call participation, RCAs/postmortems, hardened runbooks.
- Partner & deliver: collaborate with Product, QA/SDET, and App Dev teams to deliver end-to-end outcomes.
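A minimal sketch of the plan-and-skew analysis the first two bullets describe, in Python with psycopg2. The table `client_txn`, its columns, and all connection details are illustrative assumptions, not details from this posting.

```python
"""Inspect a Greenplum query plan and check for distribution skew."""
import psycopg2

# Placeholder connection details; real code would pull these from config.
conn = psycopg2.connect(host="gp-master.example.com", dbname="dw", user="etl")
conn.autocommit = True

with conn.cursor() as cur:
    # EXPLAIN ANALYZE shows per-slice timings, motion nodes, and row
    # estimates: the usual starting point for runtime/SLO tuning.
    cur.execute("EXPLAIN ANALYZE "
                "SELECT client_id, sum(amount) FROM client_txn GROUP BY 1;")
    for (line,) in cur.fetchall():
        print(line)

    # Rows per segment reveal distribution skew: one hot segment means
    # the distribution key hashes unevenly across the cluster.
    cur.execute("SELECT gp_segment_id, count(*) FROM client_txn "
                "GROUP BY 1 ORDER BY 2 DESC;")
    counts = [n for _, n in cur.fetchall()]
    if counts:
        print(f"skew ratio (max/min rows per segment): "
              f"{counts[0] / max(counts[-1], 1):.1f}")

conn.close()
```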
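A sketch of the streaming-ingestion pattern, assuming confluent-kafka and psycopg2; the topic, table, and batch size are placeholders. Committing offsets only after the database write gives at-least-once delivery, and poison messages go to a dead-letter topic rather than blocking the partition.

```python
"""Micro-batch Kafka-to-Greenplum loader with a DLQ."""
import json
import psycopg2
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "gp-loader",
    "enable.auto.commit": False,   # commit only after a durable write
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["client-events"])              # hypothetical topic
dlq = Producer({"bootstrap.servers": "kafka:9092"})
conn = psycopg2.connect(host="gp-master", dbname="dw", user="etl")

BATCH = 500
buffer = []
while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        continue                    # real code would log and alert here
    try:
        buffer.append(json.loads(msg.value()))
    except json.JSONDecodeError:
        # Poison message: route it to the dead-letter topic.
        dlq.produce("client-events-dlq", msg.value())
        dlq.flush()
        continue
    if len(buffer) >= BATCH:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO client_events (id, payload) VALUES (%s, %s)",
                [(r["id"], json.dumps(r)) for r in buffer])
        conn.commit()
        # At-least-once: offsets advance only after the DB commit.
        consumer.commit(asynchronous=False)
        buffer.clear()
```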
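And a sketch of a post-load reconciliation control, assuming each client feed ships with a control file carrying expected totals; the file layout, table, and column names are hypothetical.

```python
"""Reconcile a loaded table against the feed's control totals."""
import csv
import psycopg2

def reconcile(feed_file: str, table: str, load_id: str) -> None:
    # Assumed control record: "<row_count>,<amount_total>" per feed.
    with open(feed_file + ".ctl") as f:
        expected_rows, expected_amount = next(csv.reader(f))

    conn = psycopg2.connect(host="gp-master", dbname="dw", user="etl")
    with conn.cursor() as cur:
        # `table` comes from trusted internal config, not user input.
        cur.execute(
            f"SELECT count(*), coalesce(sum(amount), 0) FROM {table} "
            "WHERE load_id = %s", (load_id,))
        actual_rows, actual_amount = cur.fetchone()
    conn.close()

    if (int(expected_rows), float(expected_amount)) != \
            (actual_rows, float(actual_amount)):
        # Fail loudly so bad loads never reach downstream reporting.
        raise ValueError(
            f"reconciliation failed for {load_id}: expected "
            f"{expected_rows}/{expected_amount}, "
            f"got {actual_rows}/{actual_amount}")
```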
Requirements
- Extensive experience in Data Engineering with advanced Greenplum/PostgreSQL in an MPP/DW environment.
- Proven query optimization (EXPLAIN/ANALYZE), distribution/partitioning, skew mitigation, and lock/contention reduction.
- Hands-on CI/CD for SQL (Git workflow, code review, automated deploys via Liquibase/Flyway or equivalent); see the test sketch following the Nice to Have list.
- Strong Informatica experience for file-based ingestion (performance tuning, error handling, observability).
- Practical Kafka experience (producers/consumers, Connect sinks, partitioning/keys, schema management, at-least-once semantics).
- Proficiency with Python/Shell for tooling/automation.
- Working knowledge of AWS data patterns (S3 staging, Glue/PD orchestration, containerized services).
- Data quality frameworks, reconciliation, and production support with an SLA/SLO mindset.
Nice to Have
- Snowflake/Redshift; PySpark/EMR/Databricks.
- Airflow/Step Functions/TWS/Control-M orchestration (see the DAG sketch below).
- Service integration with API/.NET teams; basic Kubernetes/ECS know-how.
- Experience in the transactional financial services / tax domain (very desirable).
- Understanding of enterprise architecture and cloud migration for on-prem workloads.
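A minimal sketch of the kind of automated SQL test a GitLab/Jenkins pipeline could run before deploying schema changes, using pytest and psycopg2; the function `fn_net_amount`, the CI database, and the connection details are hypothetical.

```python
"""Pin a Greenplum SQL function's behavior in CI before deploy."""
import psycopg2
import pytest

@pytest.fixture
def cur():
    conn = psycopg2.connect(host="gp-test", dbname="dw_ci", user="ci")
    with conn.cursor() as c:
        yield c
    conn.rollback()   # leave the CI database untouched
    conn.close()

def test_fn_net_amount(cur):
    # Refactors must not silently change financial calculations:
    # 10% fee on 100.00 should net 90.00.
    cur.execute("SELECT fn_net_amount(%s, %s)", (100.00, 0.10))
    assert float(cur.fetchone()[0]) == 90.00
```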
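And a sketch of the orchestration piece as a minimal Airflow DAG (assuming Airflow 2.4+ for the `schedule` argument); the DAG id, task names, and no-op callables are placeholders for the ingest, validate, and load steps described above.

```python
"""Wire ingest -> validate -> load into a daily Airflow DAG."""
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="client_onboarding",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Real tasks would call the ingestion, reconciliation, and load
    # routines; lambdas keep the sketch self-contained.
    ingest = PythonOperator(task_id="ingest_files", python_callable=lambda: None)
    validate = PythonOperator(task_id="validate_feed", python_callable=lambda: None)
    load = PythonOperator(task_id="load_greenplum", python_callable=lambda: None)
    ingest >> validate >> load
```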
Benefits
- Health insurance
- 401(k) matching
- Flexible work hours
- Paid time off
- Professional development opportunities
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
SQL • Greenplum • PostgreSQL • Informatica • Kafka • Python • Shell • CI/CD • Data quality frameworks • Performance tuning
Soft skills
collaboration • troubleshooting • continuous improvement • observability • support