Gorilla Logic

Senior Application Data Architect

Full-time

Location Type: Remote

Location: Colombia

About the role

  • Design and evolve enterprise data architectures, including Lakehouse, Data Warehouse, pipelines, semantic models, and reporting layers.
  • Define and maintain architectural standards, patterns, and best practices across Microsoft Fabric and Azure services.
  • Translate enterprise data strategy into epics, features, and actionable technical stories.
  • Lead technical planning activities, identifying dependencies, risks, and trade-offs early in the lifecycle.
  • Establish and enforce standards for data quality, taxonomy, pipeline design, and semantic modeling.
  • Review solution designs and critical implementations to ensure scalability, performance, and maintainability.
  • Act as a technical leader and mentor, elevating engineering practices and system-level thinking.
  • Design, build, and enhance data pipelines, dataflows, notebooks, and semantic models.
  • Contribute hands-on to complex and high-impact initiatives where architecture and implementation intersect.
  • Support platform modernization, including migration from on-prem SQL Server to Microsoft Fabric.
  • Optimize data solutions for performance, reliability, and cost efficiency.
  • Collaborate with Data Scientists to productionize ML models and integrate them into enterprise pipelines.
  • Apply AI-assisted techniques to improve development workflows and solution quality.
  • Deliver scalable, maintainable, and high-quality data solutions aligned with best practices.
  • Participate in cross-project planning and release activities.
  • Collaborate with Product Owners and stakeholders to align solutions with business needs and priorities.
  • Monitor systems using logs and dashboards to ensure performance, reliability, and issue resolution.
  • Create and maintain clear, concise technical documentation (architecture, systems, processes).

Requirements

  • 4+ years of experience in software/data engineering (Python, PySpark, Spark or similar)
  • Strong experience designing and building enterprise data platforms (Lakehouse, Data Warehouse, Analytics)
  • Proficiency with SQL, relational databases, and large-scale data systems
  • Experience building data pipelines, ETL/ELT processes, and optimizing queries
  • Experience with semantic modeling and reporting tools (e.g., Power BI)
  • Experience with cloud data platforms (preferably Azure / Microsoft Fabric or similar)
  • Familiarity with distributed data technologies (e.g., Spark, Kafka, Hadoop or cloud-native equivalents)
  • Understanding of CI/CD, DataOps/MLOps, and modern deployment practices
  • Experience working with APIs and system integrations
  • Bachelor’s degree in Computer Science or related field (or equivalent experience)

Benefits

  • Health insurance
  • 401(k)
  • Flexible work arrangements
  • Professional development

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
Python, PySpark, Spark, SQL, ETL, ELT, semantic modeling, data pipelines, query optimization, data architecture
Soft Skills
technical leadership, mentoring, collaboration, communication, planning, problem-solving, documentation
Education
Bachelor’s degree in Computer Science