Description
You’ll work on groundbreaking research projects to advance our AI and computer vision capabilities, contribute to both foundational research and practical applications of multimodal large language models, and design, implement, and evaluate algorithms and models for human understanding. You have a strong background in developing and exploring multimodal large language models that integrate diverse data modalities such as text, image, video, and audio. You’ll collaborate with cross-functional teams, including researchers, data scientists, software engineers, human interface designers, and application domain experts. You’ll stay up to date on the latest advancements in AI, machine learning, and computer vision and apply this knowledge to drive innovation within the company.
Minimum Qualifications
Experience in developing and training/tuning multimodal LLMs. Programming skills in Python. Master’s degree with a minimum of 3 years of relevant industry experience.
Preferred Qualifications
Expertise in one or more of: computer vision, NLP, multimodal fusion, generative AI. Experience with at least one deep learning framework, such as JAX or PyTorch. Publication record in relevant venues. PhD in Computer Science, Electrical Engineering, or a related field with a focus on AI, machine learning, or computer vision.
