Transform Visual Models into Real-World Applications
Fine-tune & Adapt – Train and customize diffusion models (SDXL, Flux, Stable Diffusion variants) using LoRA, DreamBooth, and other parameter-efficient methods; a minimal LoRA sketch follows this list.
Curate Datasets – Build, clean, and annotate large-scale image datasets with captioning, tagging, and NSFW filtering for safe and aligned generation; see the captioning sketch below.
Evaluate & Align – Develop pipelines to measure fidelity, diversity, style adherence, and safety across generated outputs; see the CLIP-similarity sketch below.
Optimize Performance – Apply GPU memory optimization, latent diffusion tricks, and distributed training for efficient scaling; see the memory-saving sketch below.
Deploy & Monitor – Ship diffusion-powered features into production with monitoring for drift, latency, and quality.
Collaborate & Deliver – Work with product and design to integrate generative vision capabilities into user experiences.
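To make the fine-tuning work concrete, here is a minimal sketch of wiring LoRA adapters into an SDXL UNet with Hugging Face diffusers and peft. The model id, rank, learning rate, and target modules are illustrative assumptions, and the actual denoising-loss training loop is omitted.

```python
# Minimal LoRA sketch: wrap the SDXL UNet's attention projections with
# low-rank adapters so only a small fraction of weights is trained.
# Model id, rank, and learning rate are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)
unet = pipe.unet

lora_config = LoraConfig(
    r=8,                         # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet = get_peft_model(unet, lora_config)  # base weights stay frozen
unet.print_trainable_parameters()         # only the LoRA weights require gradients

optimizer = torch.optim.AdamW(
    (p for p in unet.parameters() if p.requires_grad), lr=1e-4
)
# ...training loop: noise the latents, predict the noise with `unet`, backprop MSE.
```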
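Dataset curation leans on the same kind of tooling. Below is a rough sketch of auto-captioning a raw image with a BLIP model from Hugging Face transformers; the model id, file path, and caption length are assumptions, and tagging or NSFW filtering would hang off the same loop.

```python
# Sketch of auto-captioning one image during dataset curation.
# Runs single-image here; in practice this would be batched on GPU.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption_image(path: str) -> str:
    """Return a short machine-written caption for one image file."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

print(caption_image("dataset/raw/000001.jpg"))  # hypothetical path
```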
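For evaluation, one common building block is CLIP image-text similarity as a rough proxy for prompt and style adherence. The sketch below assumes the openai/clip-vit-base-patch32 checkpoint and an illustrative review threshold; a real pipeline layers fidelity, diversity, and safety metrics on top.

```python
# Sketch of a prompt-adherence check: cosine similarity between CLIP
# embeddings of a generated image and the prompt that produced it.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between the CLIP embeddings of an image and a prompt."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())

# Flag generations whose similarity falls below a chosen threshold (illustrative value).
score = clip_similarity(Image.open("sample.png").convert("RGB"), "a watercolor fox")
print("needs review" if score < 0.25 else "ok", round(score, 3))
```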
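On the performance side, much of the work is stacking standard memory levers before reaching for more hardware. The sketch below shows a few diffusers switches; which ones apply depends on whether you are training or serving, and the model id is again illustrative.

```python
# Sketch of common memory levers on an SDXL pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()   # serving: keep only the active submodule on the GPU
pipe.enable_vae_slicing()         # serving: decode latents in slices to cap peak memory
pipe.unet.enable_gradient_checkpointing()  # training: trade recompute for activation memory
```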
Take ownership and work independently.
Bias for speed – prototype, test, and iterate without waiting for perfect plans.
Stay calm and effective in startup chaos – shifting priorities and building from zero don't faze you.
Possess humility, hunger, and hustle, and lift others up as you go.
Requirements
Strong experience with diffusion models and generative vision (Stable Diffusion, SDXL, Flux, etc.).
Hands-on skills with DreamBooth, LoRA/QLoRA, and other fine-tuning methods.
Proficiency with PyTorch (preferred).
Experience in dataset preparation (captioning, tagging, filtering, augmentation).
Knowledge of GPU optimization, latent diffusion, and efficient training techniques.
Strong foundations in software engineering, algorithms, and clean code practices.