Description
You have a strong research background in machine learning or a related field, regularly publish in the main relevant conference and journal venues, and ensure that your research results are of high quality and reproducible. You will advance the frontier of machine learning through a combination of self-directed research, in which you propose your own research ideas and demonstrate their feasibility, and collaborative research, working with colleagues on larger problems and sharing implementation and experimentation. You will provide technical mentorship and guidance, and prepare technical reports for publication and conference talks. You will also have the opportunity to collaborate with broader teams across Apple.
Minimum Qualifications
Demonstrated expertise in machine learning research on at least some of the following topics: Reinforcement Learning, LLM training, LLM test-time adaptation/scaling, Reasoning/Planning, Diffusion Language Models, Audio Generative/Recognition Models, and Multimodal Generative Models.
Publication record in relevant conferences (e.g., NeurIPS, ICML, ICLR, AAAI, CVPR, ICCV, ECCV, ACL, EMNLP).
Hands-on experience with deep learning toolkits such as TensorFlow or PyTorch.
Strong mathematical skills in linear algebra and statistics.
Preferred Qualifications
Ability to formulate a research problem; design, implement, and run experiments; and communicate solutions.
Ability to work in a diverse, collaborative environment.
PhD, or equivalent practical experience, in Computer Science or a related technical field.
Hands-on experience with at least some of the following: TorchTitan or Lingua, sharded models, FSDP, DDP, and fine-tuning pipelines for common models such as Llama or Mixtral.
