Description
As a Machine Learning Research Engineer, you will help design and develop models and algorithms for multimodal perception and reasoning, leveraging Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs). You will collaborate with experienced researchers and engineers to explore new techniques, evaluate performance, and translate product needs into impactful ML solutions. Your work will contribute directly to user-facing features across billions of devices.

Your primary responsibilities include:
- Contribute to the development and adaptation of AI/ML models for multimodal perception and reasoning
- Develop robust algorithms that integrate visual and language data for comprehensive understanding
- Collaborate closely with cross-functional teams to translate product requirements into effective ML solutions
- Conduct hands-on experimentation, model training, and performance analysis
- Communicate research outcomes effectively to technical and non-technical stakeholders, providing actionable insights
- Stay current with emerging methods in VLMs, MLLMs, and related areas
Minimum Qualifications
- Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field, or relevant industry experience
- Proficiency in Python and deep learning frameworks such as PyTorch or equivalent
- Practical experience with training and evaluating neural networks
- Familiarity with multimodal learning, vision-language models, or large language models
- Strong problem-solving skills and the ability to work in a collaborative, product-focused environment
- Ability to communicate technical results clearly and concisely
Preferred Qualifications
- Proven track record of research contributions demonstrated through publications in top-tier conferences and journals
- Background in multimodal reasoning, VLM, and MLLM research with impactful software projects
- Solid understanding of natural language processing (NLP) and computer vision fundamentals
