Enterprise Data Architect

qode.world

full-time

Location Type: Hybrid

Location: Texas, Ohio, Pennsylvania, United States

About the role

  • Partner with business and technology stakeholders to analyze enterprise data requirements and translate them into scalable data engineering and analytics solutions.
  • Design, build, and support end-to-end data pipelines, including data ingestion, preprocessing, normalization, transformation, quality checks, and loading across complex data ecosystems.
  • Lead and contribute to ETL/ELT development using technologies such as Spark, Hadoop, Hive, Kafka, Python, and Scala, ensuring performance, reliability, and data accuracy.
  • Work with distributed data platforms including HDFS, HBase, Sqoop, Flume, and MapReduce, supporting both batch and real-time processing use cases.
  • Apply strong data modeling and data design principles to support analytics, reporting, regulatory, and operational needs.
  • Collaborate with enterprise architects on logical and physical data models aligned with PNC standards.
  • Support and implement data quality frameworks, including profiling, validation rules, reconciliation, and monitoring to ensure trusted and compliant data.
  • Collaborate with cross-functional teams to ensure solutions align with enterprise architecture, security, governance, and regulatory requirements.
  • Contribute to cloud-based data solutions, particularly on AWS, supporting data processing, analytics, and ML workloads.
  • Collaborate with data scientists and ML engineers to enable machine learning and AI use cases, including feature engineering, data preparation, and pipeline integration.
  • Support development and deployment of ML and AI systems, including exposure to LLM-based solutions, feature stores, and ML lifecycle management tools.
  • Participate in or support MLOps practices, including model deployment, monitoring, retraining pipelines, and integration with platforms such as SageMaker, MLflow, Kubeflow, or similar tools.
  • Work in Agile delivery environments, actively participating in sprint planning, stand-ups, reviews, and retrospectives using tools such as Jira.
  • Serve as a client-facing consultant, coordinating across the SDLC and communicating technical concepts clearly to both technical and non-technical stakeholders.
  • Contribute to solutioning, estimations, POCs, and client proposals, helping shape data, analytics, and AI modernization initiatives.
  • Mentor junior team members, support onboarding, and promote best practices in data engineering, analytics, and platform design.
  • Foster collaboration across teams to support continuous improvement and delivery excellence.

Requirements

  • 12+ years of experience in data engineering, data analytics, or enterprise data consulting.
  • Strong hands-on experience with big data and distributed data platforms.
  • Proficiency in Python, with experience in streaming and real-time data processing.
  • Solid understanding of data modeling, ETL/ELT design, and data quality practices.
  • Experience supporting cloud-based data platforms, preferably AWS.
  • Exposure to machine learning, AI, and MLOps concepts preferred.
  • Experience working in Agile/Scrum environments.
  • Strong communication and consulting skills with experience working in client-facing roles.
  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, data analytics, ETL, ELT, data modeling, data quality, machine learning, MLOps, Python, big data
Soft Skills
communication, consulting, collaboration, mentoring, problem-solving, client-facing, leadership, agile methodology, continuous improvement, teamwork
Certifications
Bachelor's degree, Master's degree