
Machine Learning Engineer, Sensing & Connectivity

Apple
Posted 8 days ago, valid for 7 days
Location

Cupertino, CA 95015, US

Salary

Competitive

Contract type

Full Time

By applying, a SonicJobs account will be created for you. SonicJobs' Terms & Conditions and Privacy Policy will apply.

Sonic Summary

  • The Motion & Interaction team at Apple is looking for a talented machine learning engineer to develop next-generation features using multi-modal sensing.
  • Candidates should have an MS, PhD, or at least 5 years of experience in machine learning, computer science, or related fields.
  • The role involves designing and implementing models and algorithms while optimizing for power, memory, and performance.
  • Strong proficiency in Python and machine learning frameworks like PyTorch or TensorFlow is required, along with experience in creating models, preferably with time series data.
  • The position offers a unique opportunity to work cross-functionally and impact millions of users worldwide.
The Motion & Interaction team has created intuitive experiences for our customers through motion sensing. When you simply raise your wrist, shake your head, or move your device to interact, it’s the work of engineers and scientists on this team. Our fingerprints can be found across core capabilities and experiences on iPhone, Watch, AirPods, Vision Pro, and other Apple products. We are a multidisciplinary team that operates at the intersection of algorithms, software, hardware, and design. We come from diverse backgrounds in signal processing, machine learning, software engineering, statistics, controls, firmware development, and more. As a member of our dynamic group, you will have a unique opportunity to work cross-functionally to develop products and features that impact the lives of millions of users worldwide on a daily basis.

Description


We are seeking a talented, self-motivated machine learning engineer to build Apple’s next-generation features and experiences using multi-modal sensing. In this role, you will ideate, design, and implement models & algorithms, while optimizing for power, memory, and performance. You will be working on motion sensing-related features, including sensor fusion and interactive technologies.

Minimum Qualifications


  • MS, PhD, or 5+ years of experience in machine learning, computer science, or related fields
  • Professional experience creating and experimenting with machine learning models, preferably with time series data
  • Strong proficiency in Python and machine learning tools and frameworks, e.g. PyTorch, TensorFlow

Preferred Qualifications


  • Results-oriented, with a proven ability to effectively prioritize and deliver tasks on schedule
  • Excellent communication and collaboration skills
  • Strong product sense, including the ability to balance technical feasibility with user experience
  • Experience developing for embedded or real-time systems
  • Experience leveraging distributed compute/storage models when the scale of data calls for it
  • Experience designing and implementing interfaces between algorithms, software, and firmware
  • Experience with multi-modal inputs and models, including IMU, images, video, and/or audio


