Develop big data applications for Synchrony in Hadoop ecosystem
Participate in the agile development process including backlog grooming, coding, code reviews, testing and deployment
Work with team members to achieve business results in a fast-paced and quickly changing environment
Work independently to develop analytic applications leveraging technologies such as Hadoop, NoSQL, In-memory Data Grids, Kafka, Spark, and Ab Initio
Provide data analysis for Synchrony’s data ingestion, standardization and curation efforts, ensuring all data is understood in a business context
Identify enablers and level of effort required to properly ingest and transform data for the data lake
Profile data to assist with defining the data elements, propose business term mappings, and define data quality rules
Work with the Data Office to ensure that data dictionaries for all ingested and created data sets are properly documented in the data dictionary repository
Ensure the lineage of all data assets is properly documented in the appropriate enterprise metadata repositories
Assist with the creation and implementation of data quality rules
Ensure the proper identification of sensitive data elements and critical data elements
Create source-to-target data mapping documents
Test current processes and identify deficiencies
Investigate program quality and make improvements to achieve better data accuracy
Understand functional and non-functional requirements and prepare test data accordingly
Plan, create and manage test cases and test scripts
Identify process bottlenecks and suggest actions for improvement
Execute test scripts and collect test results
Present test cases, test results, reports and metrics as required by the Office of Agile
Perform other duties as needed to ensure the success of the team and application, and ensure the team’s compliance with the applicable Data Sourcing, Data Quality, and Data Governance standards
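The profiling and data-quality responsibilities above might look like the following in practice. This is a minimal illustrative sketch, not code from the role itself: the dataset, column names, and the single quality rule are all hypothetical, and a production version would typically run in Spark or Hive rather than plain Python.

```python
# Hypothetical sketch of data profiling and a data quality rule check.
# Dataset, column names, and the rule are illustrative assumptions.

records = [
    {"account_id": "A-100", "balance": "250.00", "state": "CT"},
    {"account_id": "A-101", "balance": None,     "state": "ct"},
    {"account_id": "A-102", "balance": "75.50",  "state": "NY"},
]

def profile(rows):
    """Per-column null counts and distinct value counts."""
    columns = rows[0].keys()
    return {
        col: {
            "nulls": sum(1 for r in rows if r[col] is None),
            "distinct": len({r[col] for r in rows}),
        }
        for col in columns
    }

def rule_state_uppercase(rows):
    """Example data quality rule: state codes must be two uppercase letters."""
    return [
        r["account_id"]
        for r in rows
        if r["state"] is None
        or not (len(r["state"]) == 2 and r["state"].isupper())
    ]

stats = profile(records)
violations = rule_state_uppercase(records)
print(stats["balance"]["nulls"])  # 1
print(violations)                 # ['A-101']
```

Profiling output like this feeds directly into the business-term mapping and data quality rule definitions described above.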
Requirements
Bachelor's degree OR, in lieu of a Bachelor's degree, a High School Diploma/GED and a minimum of 2 years of Information Technology experience
Minimum of 1 year of hands-on experience with shell scripts, complex SQL queries, Hive scripts, Hadoop commands and Git
Ability to write abstracted, reusable code components
Programming experience in at least one of the following languages: Scala, Java or Python
Analytical mindset
Willingness and aptitude to learn new technologies quickly
Superior oral and written communication skills; ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff
Legal authorization to work in the U.S.; Employer will not sponsor employment visas
Must be 18 years or older
Willingness to take a drug test, submit to a background investigation and submit fingerprints as part of onboarding
Ability to satisfy the requirements of Section 19 of the Federal Deposit Insurance Act
Desired: Bachelor's degree in a quantitative field (Engineering, Computer Science, Statistics, Econometrics)
Desired: Performance tuning experience
Desired: Exposure to Ab Initio tools (GDE, Co>Operating System, Control Center, Metadata Hub, Enterprise Meta>Environment, Portal, Acquire>It, Express>It, Conduct>It, Data Quality Environment, Query>It)
Desired: Familiarity with Hortonworks/Cloudera, Zookeeper, Oozie and Kafka
Desired: Familiar with Public Cloud (AWS, GCP, Azure) data engineering services
Desired: Familiar with data management tools (Collibra)
Desired: Background in ETL, data warehousing or data lake
Desired: Financial industry or credit processing experience
Desired: Experience managing onshore/offshore teams and client-facing experience
Desired: Proficiency in maintaining data dictionaries and Collibra
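The "complex SQL queries" called for above often mean analytic queries with window functions. The sketch below is a hypothetical example using Python's built-in sqlite3 module for self-containment (HiveQL syntax is similar for this query, though the two dialects differ elsewhere); the table and column names are assumptions.

```python
# Hypothetical example of an analytic SQL query: find the most recent
# transaction per account using a ROW_NUMBER() window function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txn (account_id TEXT, txn_date TEXT, amount REAL);
    INSERT INTO txn VALUES
        ('A-100', '2024-01-05', 120.0),
        ('A-100', '2024-01-09', 45.0),
        ('A-101', '2024-01-07', 300.0);
""")

# Window functions require SQLite >= 3.25.
query = """
    SELECT account_id, txn_date, amount
    FROM (
        SELECT account_id, txn_date, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id ORDER BY txn_date DESC
               ) AS rn
        FROM txn
    )
    WHERE rn = 1
    ORDER BY account_id;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('A-100', '2024-01-09', 45.0), ('A-101', '2024-01-07', 300.0)]
```

The same pattern, written in HiveQL against partitioned tables, is a routine part of curation and standardization work in a data lake.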
Benefits
Choice and flexibility to work from home, near a Hub, or in an office
Option to work from home with occasional required in-person engagement activities
Inclusive culture with Employee Resource Groups (ERGs)
Award-winning culture
Support, encouragement, and tools/technology to grow your career
Reasonable accommodation for applicants with disabilities
Career Support Line for application and accommodation assistance
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Hadoop, NoSQL, In-memory Data Grids, Kafka, Spark, Ab Initio, SQL, Shell scripting, Scala, Java
Soft skills
analytical mindset, superior oral communication, superior written communication, collaboration, problem-solving, adaptability, attention to detail, teamwork, time management, critical thinking