Luma AI

Product Security Engineer, Multimodal & Generative AI

full-time

Origin: 🇺🇸 United States • California

Job Level

Mid-Level, Senior

Tech Stack

AWS, Cloud, Docker, Google Cloud Platform, Kubernetes, Python

About the role

  • About Luma Labs: We’re pioneering the next generation of multimodal generative AI, enabling models to create hyper-realistic videos and images from natural language and other inputs. Our products empower creators, developers, and companies to generate content that was previously impossible, instantly and intelligently.
  • You will be Luma Labs’ first dedicated security engineering hire. As the Product Security Engineer, you’ll own the security posture of our products, services, and generative systems. You’ll work directly with engineering, ML, infrastructure, and leadership to proactively design and implement secure systems with a strong focus on the unique risks and opportunities in multimodal video and image generation.
  • This is a leadership-track position with both strategic ownership and deep technical execution.
  • What You’ll Do: Own Product & Application Security; Secure GenAI Systems; Lead Threat Modeling & Reviews; Build Security Infrastructure; Define Misuse & Abuse Guardrails; Incident Response & Detection; Influence Org-wide Security Culture; Build the Function.
  • Build the Function: Help hire and grow a high-caliber security team as the company scales.
  • Bonus / Nice-to-Have: Hands-on experience with generative models (e.g., diffusion, transformers, vision-language) and related risks (prompt injection, data leakage); experience building or leading security teams in an early-stage startup; exposure to red teaming, adversarial ML, or AI safety frameworks; public speaking, open-source contributions, or research in security or AI.
  • Why This Role is Unique: Greenfield Security; Cross-Disciplinary Impact; Fast Path to Leadership.
  • Deep Tech with Real Users: Work on cutting-edge video and image generation tools already in production and scaling fast.

Requirements

  • 5+ years in security engineering, with deep experience in product/application security.
  • A proven track record of getting products through security certifications.
  • Proven ability to operate as a hands-on engineer and technical leader.
  • Strong understanding of generative AI systems or high-complexity ML applications.
  • Proficient in secure development with Python and experience securing cloud-native environments (AWS/GCP, Docker/K8s).
  • Deep experience with threat modeling, secure design, and modern application security tooling (SAST, DAST, IaC scanning, etc.).
  • Ability to balance pragmatism and rigor: you can make fast, thoughtful decisions and execute in a fast-moving startup environment.
  • Excellent written and verbal communication skills; comfortable collaborating across research, product, infra, and leadership.