
Senior Analyst, Data Design and Engineering
Johnson & Johnson
Employment Type: Full-time
Location Type: Hybrid
Location: Parañaque, Philippines
About the role
- Develop and maintain workflows, transformation designs, and integration activities for the GS Enterprise Data Lake & Analytics Platform (GSIE).
- Establish and enforce industry standards and best practices for data engineering across new and existing projects and datasets.
- Work independently and collaboratively with data professionals, analytics specialists, and technical and business leaders to gather functional and non-functional requirements, assemble large, sophisticated datasets, and ensure system functionality.
- Create and maintain standardized governance for data modeling and transformation by defining and refining data relationships and structures (conceptual, logical, and physical models).
- Understand and maintain data lineage, metadata, and relationships.
- Ensure the Entity Relationship Diagram (ERD) is kept up to date.
- Identify, design, and implement internal process improvements — for example, automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Build infrastructure for optimal extraction, transformation, and loading (ETL) from diverse data sources.
- Build analytics tools that leverage the data pipeline to deliver insights into customer experience, operational efficiency, and process effectiveness.
- Create data-modeling assets for visualization analysts and data scientists to support the development of effective and innovative data products.
- Contribute creative and innovative ideas to user-experience and technical design discussions.
- Develop and provide documentation, training, and support as requirements emerge.
- Perform other work-related duties as assigned.
Requirements
- Bachelor's degree (4-year) in a STEM field, preferably Information Technology, Computer Science, or Management Information Systems/Technology.
- Intermediate to advanced SQL scripting skills and experience working with relational databases and/or data lakes such as Databricks (preferred), Snowflake, PostgreSQL, and Oracle.
- Advanced experience in schema design and a deep understanding of star schema, entity-attribute modeling, dimensional modeling, metadata design, data dictionaries, and overall data governance.
- Intermediate to advanced Python skills (PySpark preferred) for building and optimizing big-data pipelines, architectures, and datasets.
- Experience performing root-cause analysis on internal and external data to identify gaps, answer business questions, and find opportunities for improvement.
- Strong analytical and problem-solving skills, with experience working with structured and unstructured datasets.
- Experience building systems to support data transformation, data structures, metadata, dependency and workload management, and technical documentation.
- Experience manipulating, processing, and extracting value from large, disparate datasets.
- Experience working in a global matrix organization and supporting cross-functional teams is highly preferred.
- Project-management experience as both a leader and a team member.
- Experience with cloud platforms such as Azure and GCP; AWS experience preferred.
- At least 2 years' experience developing Alteryx or Dataiku workflows for data pipelines.
- Self-starter with the ability to proactively diagnose, coordinate, and resolve issues with minimal supervision.
- Outstanding attention to detail and strong due diligence are highly preferred.
Benefits
- Health insurance
- Professional development opportunities
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
SQL, Python, PySpark, ETL, data modeling, schema design, root-cause analysis, data governance, data transformation, data extraction
Soft Skills
analytical skills, problem-solving skills, attention to detail, self-starter, collaboration, communication, creativity, project management, organizational skills, innovation