Description
We are seeking an ML Integration Engineer. In this role, you will ensure Apple’s inference stack supports integrating ML workflows end-to-end with excellent user experience, flawless functionality, and maximum performance. The role is far-reaching: you will partner with teams across our ML deployment stack, from ML model developers to runtime engineers. The scope of work is wide, spanning model-side updates, ML framework export, custom kernels, compiler optimization, and the development of analysis and debugging tools. As a power user of Apple’s ML infrastructure, you will also help spearhead the integration of the latest and most capable models with strong, competitive performance across hardware targets, showcasing the practical power of Apple’s authoring and runtime APIs. This role offers a unique opportunity to shape how ML developers experience Apple’s end-to-end inference stack, from model creation to deployment.
Minimum Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related discipline. Proficiency in Python programming. Some familiarity with C++ is required. Proficiency in at least one ML authoring framework, such as PyTorch, MLX, or JAX. Understanding of ML fundamentals, including common architectures such as Transformers. Understanding of GPU programming paradigms. Strong communication skills, including the ability to communicate with cross-functional audiences.
Preferred Qualifications
Experience with C++ and Swift. Experience with GPU kernel optimization. Experience with MLIR/LLVM or similar compiler toolchains. Familiarity with Hugging Face or other model repositories.
