Mobiz Inc.

Data Engineer

Full-time

Location Type: Office

Location: Islamabad, Pakistan

About the role

  • Design, build, test, deploy, and maintain robust data pipelines using Databricks, Azure Data Factory, and Microsoft Fabric, supporting both batch and streaming workloads.
  • Ingest, integrate, and harmonize data from multiple cloud and on-premises sources to deliver a unified, reliable, and trusted data view for analytics and reporting.
  • Use Python and SQL to clean, transform, validate, and model data to meet business, reporting, and analytical requirements.
  • Contribute to data architecture and platform design decisions, ensuring solutions are scalable, secure, cost-effective, and aligned with long-term business and technical strategy.
  • Build and maintain metadata-driven pipelines that gracefully adapt to schema changes, new data sources, and evolving ingestion patterns.
  • Proactively identify data quality issues, pipeline failures, and performance bottlenecks; perform root-cause analysis and implement durable, long-term solutions.
  • Continuously optimize data pipeline performance, reliability, and scalability to ensure fast, consistent, and dependable access to data.
  • Implement automated data quality checks, validation rules, and monitoring dashboards to ensure accuracy, consistency, and trust in delivered datasets.
  • Monitor, analyze, and manage data pipeline and compute costs across Azure, Databricks, and Microsoft Fabric, recommending and implementing cost-optimization strategies without compromising performance or availability.
  • Automate routine and repetitive data engineering tasks, applying best practices for CI/CD, deployment, testing, and operational support.
  • Design, implement, and maintain secure data access controls using role-based access, audit trails, and governance best practices to ensure appropriate data security and compliance.
  • Maintain clear, accurate, and up-to-date technical documentation, and provide ongoing operational support as data platforms and business needs evolve.
  • Stay current with emerging data engineering technologies, cloud services, and analytics best practices, and evaluate new tools through proofs of concept when appropriate.
  • Collaborate closely with product managers, software engineers, analysts, and operations teams to embed high-quality data solutions into core business processes.
  • Promote and enforce data engineering best practices including version control, modular and reusable pipeline design, documentation standards, and scalable architecture patterns.
  • Enable and support data democratization by providing well-modeled, secure, and self-service-ready datasets through Power BI, Microsoft Fabric, or custom analytics solutions.
  • Deliver measurable outcomes such as highly reliable pipelines with minimal manual intervention, on-time pipeline execution within service windows, rapid resolution of critical incidents, and consistently high data quality.
  • Ensure reusable and standardized pipeline components are leveraged across teams to accelerate delivery, improve consistency, and reduce operational overhead.
  • Provide stakeholders with reliable, well-modeled datasets that can be confidently explored without requiring ongoing engineering support.
  • Ensure all data solutions comply with organizational security, access, and governance policies while maintaining positive feedback from internal and external users.

Requirements

  • 3–5 years of hands-on experience in data engineering or related roles, with proven experience building, supporting, and operating production-grade data pipelines.
  • Strong hands-on experience with Databricks, including development, optimization, and operational support of Databricks-based data solutions.
  • Proven experience with Azure Data Factory (ADF) and Microsoft Fabric for data ingestion, orchestration, and transformation.
  • Advanced proficiency in SQL for querying, data modeling, performance tuning, and troubleshooting.
  • Strong programming skills in Python for data transformation, automation, scripting, and pipeline development.
  • Hands-on experience with Microsoft Azure cloud services related to data storage, compute, networking, and security.
  • Solid understanding of ETL/ELT patterns, data integration strategies, and real-time or near-real-time data processing concepts.
  • Experience designing metadata-driven and reusable pipeline components that scale across multiple teams and projects.
  • Strong understanding of data governance, data quality frameworks, monitoring practices, and automated validation techniques.
  • Experience supporting high-availability data platforms with SLAs, monitoring, alerting, and incident response processes.
  • Demonstrated ability to optimize cloud compute usage and manage costs efficiently across Azure, Databricks, and Microsoft Fabric.
  • Experience using version control systems such as Git and following modern software engineering best practices.
  • Ability to design secure, scalable, and cost-effective data architectures aligned with business needs.
  • Strong analytical, problem-solving, and troubleshooting skills with a proactive and ownership-driven mindset.
  • Excellent communication skills, with the ability to explain complex data concepts to both technical and non-technical stakeholders.
  • Ability to work independently as well as collaboratively in cross-functional team environments.
  • High attention to detail with a strong focus on data accuracy, reliability, and documentation quality.
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Relevant certifications are strongly preferred, including Databricks Certified Data Engineer, Microsoft Certified: Azure Data Engineer Associate, and Microsoft Certified: Fabric Data Engineer Associate.

Benefits

  • Competitive salary and a comprehensive benefits plan
  • A dynamic and collaborative work environment with the opportunity to work with cutting-edge technologies and innovative solutions

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, data pipelines, Python, SQL, ETL, ELT, data modeling, data transformation, data integration, data governance
Soft Skills
analytical skills, problem-solving, communication skills, attention to detail, collaboration, ownership-driven mindset, proactive approach, troubleshooting, documentation quality, cross-functional teamwork
Certifications
Databricks Certified Data Engineer, Microsoft Certified: Azure Data Engineer Associate, Microsoft Certified: Fabric Data Engineer Associate