Salary
💰 $120,000 - $200,000 per year
About the role
- Research and design technical aspects of AI regulations and policy to improve the chances of safe AI outcomes.
- Produce a wide range of research outputs for the technical governance space (reading, writing, meetings).
- Conduct threat modeling to understand how AI systems could cause large-scale harm and identify mitigation actions.
- Respond to government Requests for Comment and prepare inputs into regulations.
- Learn risk management practices from other industries and apply them to AI.
- Design and implement evaluations of AI models to demonstrate failure modes and inform policy.
- Prepare and present briefings to policymakers explaining AI evaluations and other relevant topics.
- Analyze government or AI developer policy documents and write reports on their limitations.
- Design new AI policies and standards that address limitations of current approaches, focusing on scalability to smarter-than-human intelligence.
- (Manager-specific) External and internal stakeholder management, project management, people management, and potential research contributions.
Requirements
- No formal degree requirements, but applicants with a strong background in AI safety are preferred.
- Experience or familiarity with one or more of the following: compute governance; policy (including AI policy); AI safety generalist work; research or engineering on frontier AI models or the AI tech stack (a bonus).
- Strong agency, conscientiousness, comfort learning on the job, and generative thinking.
- Strong communication skills, both internal and external, including concise writing and presenting to policymakers.
- Alignment with MIRI's values and passion for reducing existential risks from AI.
- Applicants must provide a resume and their earliest possible start date, and will be asked whether they are authorized to work in the US (visa sponsorship is possible, but candidates who can start soon may be prioritized).