Description
This is a hands-on role focused on the models that power Apple products used daily by over a billion people. You will design evaluation systems whose outcome is not just a score but an actionable signal, one that drives model improvement and predicts real user experience. Working alongside model training and product teams, you will close the loop between evaluation and improvement.

Our work spans three areas:
• Frontier capability assessment: benchmarking against the state of the art in reasoning, code, knowledge, and agentic workflows
• Product-aligned evaluation: measuring model quality in ways that reflect real user experience
• Evaluation-to-training integration: feeding actionable insights back into the model development cycle

You may focus on one area or work across multiple, depending on your background and interests.
Minimum Qualifications
• 3+ years of experience in AI model evaluation, NLP, or a related area (e.g., natural language generation, information retrieval, or conversational AI)
• Strong fundamentals in machine learning, natural language processing, and statistical analysis
• Proficiency in Python and experience with ML frameworks (PyTorch, JAX, or equivalent)
• Demonstrated ability to translate research insights into practical implementations
• Strong experimental design skills: ability to design rigorous comparisons and draw valid conclusions from results
• Clear technical communication: ability to distill evaluation results into actionable recommendations for cross-functional partners
• MS or PhD in Computer Science, Machine Learning, Natural Language Processing, or a related technical field; equivalent practical experience will be considered
Preferred Qualifications
• PhD in Computer Science, Machine Learning, NLP, or a related field
• Direct experience evaluating large language models (e.g., benchmark design, model-based judging)
• Track record of collaborating with model training and data teams to turn evaluation findings into training improvements
• Experience building reusable evaluation tooling or analysis frameworks adopted across teams
• Familiarity with human evaluation methodology and experience partnering with annotation teams or vendors to assess model quality
