Innodata Inc.

Applied Research Scientist, LLM Evaluation – Post-Training

Employment Type: Full-time

Location Type: Remote

Location: New Jersey, United States

About the role

  • Define and execute a research agenda focused on LLM evaluation and post-training, especially evaluation-driven model improvement
  • Design rigorous experiments to study how evaluation methodologies impact fine-tuning and post-training outcomes
  • Develop and validate evaluation frameworks for LLM and multimodal systems, including benchmark/task design, scoring methods, judge/model-assisted evaluation, human evaluation protocols, and robustness/stress testing
  • Lead research on advanced evaluation domains, including long-context, cross-modal, and dynamic multi-turn evaluations
  • Study the effectiveness and limitations of existing evaluation techniques, and propose improved methodologies with clear validity and scalability tradeoffs
  • Analyze model behavior and failure patterns; generate actionable recommendations for model improvement and evaluation redesign
  • Collaborate with AI/ML Research Engineers to translate research methods into scalable evaluation and post-training pipelines
  • Collaborate with Language Data Scientists to integrate human-in-the-loop and synthetic data/evaluation strategies into research programs
  • Engage with customer technical stakeholders to understand evaluation goals, review methodologies, and provide expert recommendations
  • Contribute to internal benchmark datasets, evaluation frameworks, and reusable research assets
  • Produce high-quality technical documentation, internal research reports, and client-facing materials explaining methods, results, assumptions, and limitations
  • Contribute to thought leadership and best practices in LLM evaluation, post-training, and GenAI quality measurement

Requirements

  • MS/PhD in Computer Science, Machine Learning, Statistics, Applied Mathematics, AI, or a related quantitative scientific field (PhD strongly preferred)
  • 5+ years of relevant experience in applied research / research science in ML/AI, with substantial work in LLMs or foundation models
  • Demonstrated experience with LLM evaluation, benchmarking, alignment, post-training, or model quality research
  • Strong foundation in experimental design, statistical analysis, and scientific reasoning for ML systems
  • Strong coding skills in Python for research experimentation and analysis (e.g., data processing, evaluation pipelines, statistical analysis, visualization)
  • Experience working with modern ML tooling/frameworks (e.g., PyTorch, Hugging Face, JAX/TensorFlow as applicable)
  • Ability to evaluate and compare human and automated evaluation methods, including tradeoffs in cost, reliability, validity, and scalability
  • Experience designing evaluation studies and protocols that are reproducible across datasets, model versions, and evaluation runs
  • Ability to collaborate directly with technical stakeholders including research scientists, ML engineers, data scientists, and customer technical counterparts
  • Strong communication skills and the ability to present nuanced technical conclusions, assumptions, and limitations clearly

Benefits

  • Health insurance
  • Retirement plans
  • Paid time off
  • Flexible work arrangements
  • Professional development opportunities

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
LLM evaluation, benchmarking, post-training, experimental design, statistical analysis, Python, data processing, evaluation pipelines, visualization, ML tooling
Soft Skills
collaboration, communication, scientific reasoning, technical stakeholder engagement, thought leadership
Degrees
MS in Computer Science, PhD in Computer Science, PhD in Machine Learning, PhD in Statistics, PhD in Applied Mathematics, PhD in AI