The Data Engineer II designs, develops, and delivers data pipelines and value-added data assets across the OU Health data ecosystem
Work closely with data scientists, business intelligence developers, and other colleagues to build or enhance robust systems
Mentor junior ETS colleagues and emphasize agility, partnership, and cross-functional teamwork
Incorporate new business and system data into the Data Warehouse while maintaining enterprise best practices and adhering to data governance standards
Apply business rules to our data as we migrate from source to target using Informatica or a scripting language (see the first sketch after this list)
Validate data to ensure quality
Collaborate with colleagues across the enterprise to scope requests
Extract data from various data sources, validate results, create relevant data visualizations, and share them with the requester (see the second sketch after this list)
Develop dashboards and automate refreshes as appropriate
Manage initiatives & projects of average complexity and risk
Provide leadership to assigned delivery teams
Provide input to prioritized roadmaps, develop work estimates, and ensure successful delivery to support strategic planning and initiatives, improve organizational performance, and advance progress towards OU Health goals
Work closely with different application and operational teams to understand business needs and align data engineering initiatives accordingly
Investigate highly technical issues identified across multiple data sources and determine the appropriate course of action for the organization
Guide, mentor, and train Data Engineer I and Business Intelligence Developer colleagues on technical skills and best practices
Work independently and within teams
Develop and advise on data asset use to provide solutions to organizational needs
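To make the source-to-target responsibility above concrete, here is a minimal, illustrative sketch of applying one business rule during a migration, written in Python with pandas and SQLAlchemy (a scripting-language alternative to Informatica). Every connection string, table, and column here is a hypothetical stand-in, not OU Health's actual schema.

    # Illustrative sketch only -- all connections, tables, and columns are hypothetical
    import pandas as pd
    from sqlalchemy import create_engine

    source = create_engine("postgresql://user:pass@source-host/clinical")  # hypothetical source system
    target = create_engine("postgresql://user:pass@edw-host/warehouse")    # hypothetical warehouse

    # Extract: pull encounters from the source system
    df = pd.read_sql(
        "SELECT encounter_id, patient_id, admit_dt, discharge_dt FROM encounters",
        source,
        parse_dates=["admit_dt", "discharge_dt"],
    )

    # Transform: business rule -- derive length of stay and flag same-day encounters
    df["los_days"] = (df["discharge_dt"] - df["admit_dt"]).dt.days
    df["same_day_flag"] = df["los_days"] == 0

    # Validate: no encounter may be discharged before it was admitted
    bad = df[df["los_days"] < 0]
    if not bad.empty:
        raise ValueError(f"{len(bad)} encounters discharged before admission")

    # Load: land the result in a warehouse staging table
    df.to_sql("stg_encounters", target, if_exists="append", index=False)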
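And for the extract-validate-visualize-share responsibility, a second minimal sketch, again with hypothetical names throughout: it pulls an aggregate from the warehouse, runs a basic sanity check, and saves a chart that a scheduled refresh could overwrite.

    # Illustrative sketch only -- hypothetical table and columns
    import pandas as pd
    import matplotlib.pyplot as plt
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:pass@edw-host/warehouse")  # hypothetical

    # Extract the aggregate the requester asked for
    daily = pd.read_sql(
        """
        SELECT admit_dt::date AS admit_date, COUNT(*) AS encounters
        FROM stg_encounters
        GROUP BY admit_dt::date
        ORDER BY admit_date
        """,
        engine,
    )

    # Validate: the result must be non-empty with non-negative counts
    assert not daily.empty and daily["encounters"].ge(0).all()

    # Visualize and save a shareable artifact
    ax = daily.plot(x="admit_date", y="encounters", kind="line", legend=False)
    ax.set_ylabel("Daily encounters")
    plt.tight_layout()
    plt.savefig("daily_encounters.png")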
Requirements
Bachelor's Degree required
3-5 years in analytics (Business Intelligence, Data Engineering, Data Science, etc.), including 2 years in a Data Engineering or Data Warehousing role, required
Epic certification/accreditation required within 6 months of hire or within 3 months of class completion, including:
- Cogito Fundamentals
- Clarity Data Model
- Caboodle Data Model
- Access Data Model
- Revenue Data Model
- Clinical Data Model
- Caboodle Development
Strong analytic skills related to working with structured and unstructured datasets
Must possess critical thinking and creative problem-solving skills
Effective communication, project management, and organizational skills
Experience supporting and working with cross-functional teams in a dynamic environment
Working knowledge of stream processing and highly scalable data stores
Previous experience manipulating, processing, and extracting value from large, disconnected datasets
Advanced SQL and data manipulation skills (see the SQL sketch after this list)
Exposure to big data tools: dbt, Snowpark, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Snowflake, MS SQL Server, and PostgreSQL
Exposure to data integration tools: Fivetran, Matillion, SSIS, dbt, SnowSQL
Exposure to stream-processing systems: IBM Streams, Flume, Storm, Spark Streaming, etc. (see the streaming sketch after this list)
Exposure to consuming and building APIs
Exposure to object-oriented/object function programming languages: Python, Java, C++, Scala, etc.
Exposure to statistical data analysis tools: R, SAS, SPSS, etc.
Experience with visual analytics tools: QlikView, Tableau, Power BI, etc.
Familiarity with Agile development methodology
Familiarity with electronic health record and/or financial systems (e.g., Epic Systems, Workday, Strata)
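As a hedged illustration of the Advanced SQL requirement, the sketch below deduplicates a hypothetical patient feed with a window function, keeping the latest record per MRN; the connection string and table name are assumptions, not a real OU Health system.

    # Illustrative sketch only -- hypothetical connection and table
    import sqlalchemy

    engine = sqlalchemy.create_engine("postgresql://user:pass@edw-host/warehouse")

    # Window-function dedup: keep the most recent record per medical record number
    DEDUP_SQL = sqlalchemy.text("""
        SELECT *
        FROM (
            SELECT p.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY mrn
                       ORDER BY updated_at DESC
                   ) AS rn
            FROM raw_patient_feed p
        ) ranked
        WHERE rn = 1
    """)

    with engine.connect() as conn:
        latest = conn.execute(DEDUP_SQL).fetchall()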
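And as one possible shape of the stream-processing exposure listed above, a minimal Kafka consumer using the kafka-python package; the topic, broker, and message fields are hypothetical.

    # Illustrative sketch only -- hypothetical topic, broker, and message fields
    import json
    from kafka import KafkaConsumer  # kafka-python package

    consumer = KafkaConsumer(
        "adt-events",                                    # hypothetical topic
        bootstrap_servers=["broker-host:9092"],          # hypothetical broker
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # A real pipeline would validate, enrich, and land each event in the warehouse
        print(event.get("event_type"), event.get("patient_id"))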
Benefits
PTO
401(k)
Medical and dental plans
Comprehensive benefits package
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, data warehousing, SQL, data manipulation, stream processing, big data tools, data integration tools, APIs, object-oriented programming, statistical data analysis
Credentials
Bachelor's Degree, Epic certification, Cogito Fundamentals, Clarity Data Model, Caboodle Data Model, Access Data Model, Revenue Data Model, Clinical Data Model, Caboodle Development