
Machine Learning Compute Efficiency Lead, Infrastructure & Planning

Apple
Posted 8 days ago, valid for 5 hours
Location

Cupertino, CA 95015, US

Salary

Competitive

Contract type

Full Time

By applying, a SonicJobs account will be created for you. SonicJobs' Privacy Policy and Terms & Conditions will apply.

Sonic Summary

  • Apple's Platform Acceleration & Compute Efficiency (PACE) team is seeking a Senior Architect to enhance ML compute efficiency for large-scale model serving.
  • Candidates should have a minimum of 5 years of experience in ML infrastructure, with expertise in foundation model serving, inference, and training at scale.
  • The role involves optimizing Apple’s ML workloads, advocating for ML engineers, and collaborating with internal and external teams to improve resource strategies.
  • A relevant MS or PhD degree is required, along with familiarity with PyTorch, JAX, and cluster management tools like Slurm and Kubernetes.
  • The salary for this position is competitive and commensurate with experience, reflecting the critical nature of this role in Apple's ML ecosystem.
Apple’s Platform Acceleration & Compute Efficiency (PACE) is a high-leverage team operating at the critical intersection of our ML organizations, underlying compute infrastructure, and core platform tooling. Our mission is to empower Apple’s software engineering teams with efficient, scalable compute. By driving out operational friction and optimizing the broader machine learning ecosystem, we directly accelerate the pace of development across the company.

As foundation models become increasingly central to Apple’s user experiences, maximizing the efficiency of our ML compute is paramount. In this role, you will focus relentlessly on compute efficiency, ensuring that Apple’s models run as fast, reliably, and cost-effectively as possible. You will tackle massive optimization challenges, from maximizing hardware utilization across GPUs, TPUs, and custom Apple Silicon, to shaping workload scheduling and capacity allocation for large model serving.

We are seeking a Senior Architect with deep expertise in ML infrastructure to act as a linchpin for Apple’s foundational inference strategy. You will be instrumental in defining, establishing, and monitoring compute efficiency metrics across the software engineering organization. By partnering closely with model developers and infrastructure providers, your work will directly reduce serving costs, shape core engineering decisions, and enable the highly efficient, scalable inference required to power Apple Intelligence for hundreds of millions of users.

Description


- Own and support ML compute management for Apple’s inference workloads (GPU, TPU, and custom silicon) to enable large-scale model serving.
- Collaborate closely with Apple Intelligence and ML engineering teams to understand roadmaps and resource pain points, and to develop and implement resource strategies.
- Optimize Apple’s ML workloads by driving performance improvements, maximizing resource utilization, and reducing serving costs through deep root cause analysis that shapes both engineering decisions and the end customer experience.
- Architect solutions for large-scale optimization problems, including capacity allocation, workload scheduling, and cost reduction, enabling Apple’s AI-driven experiences.
- Advocate on behalf of Apple’s ML engineers, bringing a consolidated view of ML platform and model inference requirements to Apple’s internal infrastructure platform providers and third-party public cloud providers.

Minimum Qualifications


- MS or PhD in a relevant field
- Direct experience with foundation model serving, inference, and training at scale
- Familiarity with PyTorch, JAX, cluster management (Slurm, Kubernetes), or GPU/TPU hardware
- Prior experience in efficiency, FinOps, or capacity planning
- Experience negotiating technical roadmaps with platform or infrastructure teams
- Background in technical and financial decision-making (TCO modeling, cost optimization)



