
Research Scientist / Engineer, Foundation Model Evaluation

Apple
Posted a month ago, valid for 21 days
Location

Cupertino, CA 95015, US

Salary

Competitive

Contract type

Full Time

By applying, a SonicJobs account will be created for you. SonicJobs' Privacy Policy and Terms & Conditions will apply.

Sonic Summary

  • We are seeking a Research Scientist or Engineer for our Foundation Model Evaluation team at Apple.
  • The role requires a minimum of 3 years of experience in AI model evaluation, NLP, or a related field, with a strong background in machine learning and statistical analysis.
  • The successful candidate will design evaluation methodologies that translate insights into actionable signals for model improvement.
  • Proficiency in Python and experience with ML frameworks like PyTorch or JAX are essential for this hands-on position.
  • The salary for this position is competitive, commensurate with experience, and will be discussed during the interview process.
We are looking for a Research Scientist or Engineer to join our Foundation Model Evaluation team. In this role, you will design and build evaluation methodology that measures what matters: how well our models perform at the frontier of key capabilities, and how well they serve real users across Apple products on billions of active devices. You will turn evaluation insights into signals that make models better.

Description


This is a hands-on role focused on the models that power Apple products used daily by over a billion people. You will design evaluation systems where the outcome is not just a score but an actionable signal: one that drives model improvement and predicts real user experience. Working alongside model training and product teams, you will close the loop between evaluation and improvement.

Our work spans three areas:
  • Frontier capability assessment: benchmarking against the state of the art in reasoning, code, knowledge, and agentic workflows
  • Product-aligned evaluation: measuring model quality in ways that reflect real user experience
  • Evaluation-to-training integration: feeding actionable insights back into the model development cycle

You may focus on one area or work across multiple, depending on your background and interests.
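To make the idea of "an actionable signal rather than a single score" concrete, here is a minimal, hypothetical sketch of a benchmark evaluation loop that reports accuracy broken down by capability area. All names and data below are illustrative assumptions, not Apple's actual tooling or benchmarks.

```python
# Minimal sketch: evaluate a model per capability area, not as one number.
# Everything here (benchmark format, model stub) is hypothetical.
from collections import defaultdict

def evaluate(model_fn, benchmark):
    """Score model_fn on a benchmark, returning accuracy per capability
    area so the result is an actionable per-area signal."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for example in benchmark:
        area = example["capability"]  # e.g. "reasoning", "knowledge"
        prediction = model_fn(example["prompt"])
        total[area] += 1
        if prediction.strip() == example["answer"].strip():
            correct[area] += 1
    return {area: correct[area] / total[area] for area in total}

# Usage with a trivial stand-in "model":
benchmark = [
    {"capability": "reasoning", "prompt": "2+2?", "answer": "4"},
    {"capability": "reasoning", "prompt": "3+3?", "answer": "7"},
    {"capability": "knowledge", "prompt": "Capital of France?", "answer": "Paris"},
]
model = lambda p: {"2+2?": "4", "3+3?": "6", "Capital of France?": "Paris"}.get(p, "")
print(evaluate(model, benchmark))  # {'reasoning': 0.5, 'knowledge': 1.0}
```

A per-area breakdown like this is what lets evaluation findings feed back into training: a low score in one capability points at where data or training effort should go.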

Minimum Qualifications


  • 3+ years of experience in AI model evaluation, NLP, or a related area (e.g., natural language generation, information retrieval, or conversational AI)
  • Strong fundamentals in machine learning, natural language processing, and statistical analysis
  • Proficiency in Python and experience with ML frameworks (PyTorch, JAX, or equivalent)
  • Demonstrated ability to translate research insights into practical implementations
  • Strong experimental design skills: ability to design rigorous comparisons and draw valid conclusions from results
  • Clear technical communication: ability to distill evaluation results into actionable recommendations for cross-functional partners
  • MS or PhD in Computer Science, Machine Learning, Natural Language Processing, or a related technical field; equivalent practical experience will be considered

Preferred Qualifications


  • PhD in Computer Science, Machine Learning, NLP, or a related field
  • Direct experience evaluating large language models, e.g., benchmark design or model-based judging
  • Track record of collaborating with model training and data teams to turn evaluation findings into training improvements
  • Experience building reusable evaluation tooling or analysis frameworks adopted across teams
  • Familiarity with human evaluation methodology and experience partnering with annotation teams or vendors to assess model quality


