Luxury Presence

Staff Software Engineer, Data Platform

Full-time

Location Type: Remote

Location: Remote • 🇨🇦 Canada

Job Level

Lead

Tech Stack

Airflow, Apache, AWS, Cloud, Distributed Systems, ETL, GraphQL, Java, Kafka, Kubernetes, Microservices, Python, Spark, SQL

About the role

  • Build robust data pipelines and backend services that power high-quality MLS and property data across 400+ feeds, supporting:
      • Property discovery and search on agent websites
      • Personalized listing recommendations and other data-driven features
      • Conversational and operational AI agents that streamline internal workflows
      • The evaluation and monitoring infrastructure that keeps these systems improving over time
  • Own the end-to-end architecture for MLS and property data: streaming and batch pipelines, microservices, storage layers, and APIs
  • Design and evolve event-driven, Kafka-based data flows that power listing ingestion, enrichment, recommendations, and AI use cases
  • Drive technical design reviews, set engineering best practices, and make high-quality tradeoffs around reliability, performance, and cost
  • Design, build, and operate backend services (Python or Java) that expose listing, property, and recommendation data via robust APIs and microservices
  • Implement scalable data processing with Spark or Flink on EMR (or similar), orchestrated via Airflow and running on Kubernetes where applicable
  • Champion observability (metrics, tracing, logging) and operational excellence (alerting, runbooks, SLOs, on-call participation) for data and backend services
  • Build and maintain high-volume, schema-evolving streaming and batch pipelines that ingest and normalize MLS and third-party data (see the illustrative ingestion sketch after this list)
  • Ensure data quality, lineage, and governance are built into the platform from the start—supporting analytics, AI/ML, and customer-facing features
  • Partner with analytics engineering and data science to make data discoverable and usable (e.g., semantic layers, documentation, self-service tooling)
  • Collaborate with ML/AI engineers to design and scale AI agents that automate MLS feed onboarding, listing discrepancy triage, and other operational workflows
  • Work with frameworks such as PydanticAI, LangChain, or similar to integrate LLM-based agents into our data and service architecture
  • Help define and implement evaluation, logging, and feedback loops so these agents and data-driven products continuously improve
  • Collaborate closely with Product, Engineering, and Operations to shape the roadmap for our data platform, MLS capabilities, and AI-powered experiences
  • Translate ambiguous business and customer problems into clear technical strategies and phased delivery plans
  • Mentor and unblock other engineers; elevate the overall level of technical decision-making on the team via pairing, reviews, and design guidance
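
As a rough illustration of the ingestion and normalization work described in the list above, here is a minimal Python sketch of a Kafka consumer that validates raw MLS listing events before normalizing them. The topic name, consumer group, and RawListing fields are hypothetical, and the sketch assumes the kafka-python client and Pydantic v2; a production pipeline would add dead-letter handling, metrics, and schema-registry integration.

```python
"""Illustrative sketch only: validate and normalize raw MLS listing events."""
import json

from kafka import KafkaConsumer          # kafka-python client (assumed)
from pydantic import BaseModel, ValidationError


class RawListing(BaseModel):
    # Hypothetical subset of fields a normalized MLS listing might carry.
    mls_id: str
    feed_id: str
    price: float
    status: str


def normalize(listing: RawListing) -> dict:
    # Example normalization: canonicalize status values across feeds.
    return {**listing.model_dump(), "status": listing.status.strip().lower()}


consumer = KafkaConsumer(
    "mls.listings.raw",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    group_id="listing-normalizer",               # hypothetical consumer group
)

for message in consumer:
    try:
        listing = RawListing(**message.value)    # schema validation
    except ValidationError as err:
        # A real pipeline would route this to a dead-letter topic with metrics.
        print(f"dropping malformed event: {err}")
        continue
    print(normalize(listing))
```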

Requirements

  • 10+ years of professional software engineering experience, including owning production systems end-to-end
  • Significant experience working with data-intensive or distributed systems at scale (high volume, high availability)
  • Prior experience in a senior or staff/lead role where you influenced architecture, standards, and technical direction
  • Strong programming skills in Python or Java, with experience building microservices and APIs (REST/GraphQL)
  • Hands-on experience with Apache Kafka or similar event/messaging platforms (Kinesis, Pub/Sub, etc.)
  • Deep experience with Spark or Flink for large-scale data processing, across streaming and batch pipelines (on EMR or similar big-data compute)
  • Hands-on experience with Airflow or equivalent orchestration tools (an illustrative orchestration sketch follows the Requirements list)
  • Experience running data and compute workloads on Kubernetes
  • Strong SQL and data modeling skills; solid understanding of ETL/ELT patterns, data warehousing concepts, and performance tuning
  • Experience building on AWS (preferred) or another major cloud provider, with a good grasp of cost, reliability, and security tradeoffs
  • Experience building or integrating AI agents into production workflows (e.g., internal tools, support automation, operational triage, or data workflows)
  • Familiarity with frameworks such as PydanticAI, LangGraph, Claude Code or similar, and how they interact with backend services, vector stores, and LLM APIs
  • Comfort working with logs, telemetry, and evaluation metrics to monitor, debug, and iteratively improve AI-driven systems
  • Demonstrated ability to lead technical initiatives across teams, from idea to production (alignment, design, implementation, rollout)
  • Track record of mentoring other engineers and raising the bar on code quality, testing, and design
  • Strong communication skills; able to clearly explain complex technical decisions to both engineers and non-technical stakeholders
  • Customer and product mindset: you care about how the data and services you build improve the end-user and client experience, not just the internals
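
As a rough illustration of the orchestration experience called out above (Airflow coordinating batch data jobs), here is a minimal Airflow DAG sketch. The DAG id, task, and submit_batch_job callable are hypothetical, and the sketch assumes a recent Airflow 2.x install; in practice this step might submit a Spark job to EMR rather than run locally.

```python
"""Illustrative sketch only: a daily DAG that kicks off a batch normalization job."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def submit_batch_job(**context):
    # Placeholder: trigger the Spark/Flink batch job for this execution date.
    print(f"submitting batch normalization for {context['ds']}")


with DAG(
    dag_id="mls_listing_batch_normalize",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="submit_batch_job",
        python_callable=submit_batch_job,
    )
```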

Benefits

  • Health insurance
  • Professional development opportunities
  • Remote work options

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
Python, Java, microservices, APIs, Apache Kafka, Spark, Flink, Airflow, Kubernetes, SQL
Soft skills
leadership, communication, mentoring, collaboration, technical decision-making, problem-solving, customer mindset, design guidance, influencing architecture, operational excellence