Salary
💰 $165,000 - $350,000 per year
Tech Stack
Linux · Python · TypeScript
About the role
- When people get a model running on their own machine, the next question is often, "Great, now what can I actually do with it?" This is where you come in.
- Local, on-device AI has clear advantages: privacy, offline availability, predictable cost, and data control for both home and business users. Your job will be to turn those advantages into concrete features: tool-using assistants, agentic RAG, document understanding, data analysis and manipulation, code assistants of various kinds, and anything else that makes local AI genuinely useful.
- The work spans applied research, rapid prototyping, systematic evaluation, and production engineering, shipped to a large and enthusiastic user base worldwide.
- Push the boundaries of what’s possible with local AI by designing and implementing new features
- Identify and scope high-impact local AI use cases (RAG, document understanding, data analysis, code intelligence) and design end-to-end solutions; a minimal RAG sketch follows this list
- Prototype algorithms quickly, run systematic evaluations, and promote successful approaches into LM Studio’s production codebase
- Survey recent papers and open-source work; adapt, implement, and benchmark promising ideas on consumer hardware
- Collaborate with app-, systems-, and API-focused team members to ship reliable features to a rapidly growing user base, and iterate based on real-world feedback
- Build automated test and benchmarking harnesses to track accuracy, latency, and memory across model versions and tasks (see the harness sketch after this list)
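To make the RAG bullet concrete, here is a minimal sketch, assuming LM Studio's OpenAI-compatible local server at its default `http://localhost:1234/v1` with a chat model and an embedding model loaded; the model IDs below are placeholders, not real names. It ranks documents by embedding similarity to the question and grounds the answer in the best match.

```typescript
// Minimal RAG-style sketch against an OpenAI-compatible local server
// (LM Studio's defaults to http://localhost:1234/v1). Model IDs are
// placeholders -- substitute whatever chat and embedding models you have loaded.

const BASE = "http://localhost:1234/v1";

async function embed(texts: string[]): Promise<number[][]> {
  const res = await fetch(`${BASE}/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "embedding-model-placeholder", input: texts }),
  });
  const json = await res.json();
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answer(question: string, docs: string[]): Promise<string> {
  // Retrieve: rank documents by embedding similarity to the question.
  const [qVec, ...docVecs] = await embed([question, ...docs]);
  const best = docs
    .map((doc, i) => ({ doc, score: cosine(qVec, docVecs[i]) }))
    .sort((x, y) => y.score - x.score)[0];

  // Generate: ask the chat model, grounded in the retrieved document.
  const res = await fetch(`${BASE}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "chat-model-placeholder",
      messages: [
        { role: "system", content: `Answer using only this context:\n${best.doc}` },
        { role: "user", content: question },
      ],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}
```

A production version would chunk documents, index embeddings ahead of time, and retrieve top-k rather than top-1, but the retrieve-then-generate shape is the same.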
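And for the harness bullet, a sketch under the same assumptions (local OpenAI-compatible endpoint; the model IDs are placeholders): it replays a fixed prompt set, scores a simple substring-match notion of accuracy, and reports latency percentiles. Memory tracking would live in the model runtime itself, so it is left out here.

```typescript
// Hypothetical micro-harness: send a fixed prompt set to the local server,
// record wall-clock latency and substring-match accuracy per model.
// Endpoint and model IDs are assumptions; adapt to your setup.

interface Case { prompt: string; expected: string }

async function benchmark(model: string, cases: Case[]) {
  let correct = 0;
  const latencies: number[] = [];
  for (const c of cases) {
    const start = performance.now();
    const res = await fetch("http://localhost:1234/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        temperature: 0, // reduce sampling noise for repeatable scoring
        messages: [{ role: "user", content: c.prompt }],
      }),
    });
    const json = await res.json();
    latencies.push(performance.now() - start);
    const answer: string = json.choices[0].message.content.trim();
    if (answer.includes(c.expected)) correct++;
  }
  latencies.sort((a, b) => a - b);
  return {
    model,
    accuracy: correct / cases.length,
    p50Ms: latencies[Math.floor(latencies.length / 2)],
    p95Ms: latencies[Math.floor(latencies.length * 0.95)],
  };
}

// Usage: run the same cases against two local models and compare.
// console.log(await benchmark("model-a-placeholder", cases));
// console.log(await benchmark("model-b-placeholder", cases));
```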
Requirements
- 3+ years writing software every day (TypeScript or Python preferred; C++ a plus); internships included
- Demonstrated experience shipping LLM-powered features
- Track record of running systematic experiments and using results to make algorithm choices
- Ability to profile and debug performance issues at the model, runtime, or systems level
- Strong written and verbal communication