Description
As the Eval Systems Architect, you will own the technical architecture of Siri's evaluation infrastructure: a system spanning real-device automation, simulated product evaluation, AI-powered auto-evaluators, developer workflows, and observability tooling. You will work across the Agentic Eval Engineering and Siri organizations to ensure architectural coherence, define interfaces and contracts between systems, and drive the technical roadmap for the evaluation platform as a whole.

This is not a role where you design in isolation. You will embed with teams, understand their systems deeply, and make architectural decisions that balance local team autonomy with system-wide consistency. You will lead a first-principles review of existing evaluation tooling and infrastructure, identifying gaps, redundancies, and opportunities to simplify or unify. You will represent the technical perspective in leadership discussions, influence build-vs-integrate decisions, and set the standards that enable teams to move fast without creating fragmentation.

Your work will directly shape how Apple evaluates its most important AI products, and your architectural decisions will affect the speed, confidence, and quality with which Siri ships to billions of users.
Minimum Qualifications
- BS/MS/PhD in Computer Science, Software Engineering, or a related field.
- 10+ years of software engineering experience, with at least 5 years in a systems architecture, staff/principal engineer, or technical leadership role.
- Proven track record of designing and shipping large-scale distributed systems serving multiple teams or organizations.
- Deep expertise in system design: API design, service architecture, data flow modeling, interface contracts, and schema evolution.
- Solid software engineering fundamentals with production experience, including CI/CD, testing strategies, system monitoring, debugging complex multi-service systems, and code maintainability.
- Demonstrated expertise in using AI-assisted software development workflows to accelerate engineering while maintaining code quality.
Preferred Qualifications
- Experience architecting evaluation, testing, or quality infrastructure at scale, particularly for AI/ML products where quality is non-binary and continuous.
- Experience building LLM applications, LLM-as-judge evaluation frameworks, and offline evaluation pipelines.
- Familiarity with MLOps principles for model lifecycle management and training data pipelines.
- Experience with VM orchestration, fleet management, or large-scale job scheduling systems.
- Knowledge of simulation and service virtualization techniques for complex software stacks.
- Experience with observability platforms (metrics, logging, tracing, dashboarding) and defining SLOs for platform reliability.
- Experience with agentic AI systems, including tool use, multi-step reasoning, and human-in-the-loop workflows.
- Track record of leading cross-team architectural initiatives (e.g., platform migrations, API unification, system consolidation) in organizations with 50+ engineers.
