
Siri, Eval Architect Engineer

Apple
Posted a day ago, valid for 8 days
Location

Cupertino, CA 95015, US

Salary

Competitive

Contract type

Full Time

By applying, a SonicJobs account will be created for you. SonicJobs' Privacy Policy and Terms & Conditions will apply.

Sonic Summary

  • Apple is seeking a Senior Eval Systems Architect to define the architecture of systems that measure Siri's quality across various platforms and updates.
  • The role requires a minimum of 10 years of software engineering experience, with at least 5 years in a systems architecture or technical leadership position.
  • Candidates should have a strong background in designing large-scale distributed systems and expertise in system design, including API design and service architecture.
  • The position offers a competitive salary of $180,000 per year, reflecting the seniority and expertise required for the role.
  • The Architect will influence how Apple evaluates its AI products, ensuring a coherent and scalable evaluation infrastructure.

Do you want to define the architecture of the systems that measure Siri's quality across every platform, every locale, and every model update? Apple's Agentic Eval Engineering organization is building the evaluation infrastructure that determines how Siri's quality is measured, trusted, and improved, spanning large-scale automation on real devices, model-in-the-loop simulation, AI-powered auto-evaluators, and closed-loop agentic fix pipelines. We are seeking a senior Eval Systems Architect to own the end-to-end technical vision and system architecture across our entire evaluation stack, ensuring that we build toward a coherent, scalable, and trustworthy system.

Description


As the Eval Systems Architect, you will own the technical architecture of Siri's evaluation infrastructure: a system spanning real-device automation, simulated product evaluation, AI-powered auto-evaluators, developer workflows, and observability tooling. You will work across the Agentic Eval Engineering organization and Siri to ensure architectural coherence, define interfaces and contracts between systems, and drive the technical roadmap for the evaluation platform as a whole.

This is not a role where you design in isolation. You will embed with teams, understand their systems deeply, and make architectural decisions that balance local team autonomy with system-wide consistency. You will lead a first-principles review of existing evaluation tooling and infrastructure, identifying gaps, redundancies, and opportunities to simplify or unify. You will represent the technical perspective in leadership discussions, influence build-vs-integrate decisions, and set the standards that enable teams to move fast without creating fragmentation.

Your work will directly influence how Apple evaluates its most important AI products. Your architectural decisions will impact the speed, confidence, and quality with which Siri ships to billions of users.

Minimum Qualifications


  • BS/MS/PhD in Computer Science, Software Engineering, or a related field.
  • 10+ years of software engineering experience, with at least 5 years in a systems architecture, staff/principal engineer, or technical leadership role.
  • Proven track record of designing and shipping large-scale distributed systems serving multiple teams or organizations.
  • Deep expertise in system design: API design, service architecture, data flow modeling, interface contracts, and schema evolution.
  • Solid software engineering fundamentals with production experience, including CI/CD, testing strategies, system monitoring, debugging complex multi-service systems, and code maintainability.
  • Demonstrated expertise in using AI-assisted software development workflows to accelerate engineering while maintaining code quality.

Preferred Qualifications


  • Experience architecting evaluation, testing, or quality infrastructure at scale, particularly for AI/ML products where quality is non-binary and continuous.
  • Experience building LLM applications, LLM-as-judge evaluation frameworks, and offline evaluation pipelines.
  • Familiarity with MLOps principles for model lifecycle management and training data pipelines.
  • Experience with VM orchestration, fleet management, or large-scale job scheduling systems.
  • Knowledge of simulation and service virtualization techniques for complex software stacks.
  • Experience with observability platforms (metrics, logging, tracing, dashboarding) and defining SLOs for platform reliability.
  • Experience with agentic AI systems, including tool use, multi-step reasoning, and human-in-the-loop workflows.
  • Track record of leading cross-team architectural initiatives (e.g., platform migrations, API unification, system consolidation) in organizations with 50+ engineers.


