Responsibilities
- Algorithm & System Implementation: Translate cutting-edge research in Vision-Language Models (VLMs), Reinforcement Learning, and Multimodal LLMs into performant, real-world applications on smart glasses and robotic platforms
- End-to-End Ownership: Own the full lifecycle of feature development, from initial prototyping and data collection to deployment and system integration
- Experimental Rigor: Design and lead large-scale ablation studies, and develop robust benchmarking suites to evaluate and iterate on next-gen contextual AI
- Cross-Functional Influence: Partner with Hardware Engineers to influence sensor/silicon design and collaborate with Researchers and Product Managers to define the future of human-AI interaction
- Architecture & Roadmap: When hired at a staff level, lead the design and execution of engineering roadmaps, making critical architectural decisions to ensure low-latency, high-accuracy inference on power-constrained "always-on" edge devices
- Technical Leadership & Mentorship: When hired at a staff level, provide technical guidance and mentorship to peers and engineers, setting the bar for engineering and software maintainability
Minimum Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, Robotics, or a related technical field (or equivalent practical experience)
- Experience: 3-8+ years of professional experience in AI research engineering, software development, or a related field. (Candidate leveling will be determined based on technical depth, scope of previous impact, and leadership experience)
- Core Technical Skills: In-depth experience programming in C++ and Python, with a focus on developing high-performance, maintainable codebases
- Framework Mastery: Extensive experience with PyTorch or TensorFlow, including model optimization (e.g., quantization, distillation, or custom kernel development)
- Deployment Experience: Proven history of deploying machine learning models into production environments or integrated hardware-software systems
Preferred Qualifications
- Advanced Degree: Ph.D. or M.S. in Computer Science, Software Engineering, or Robotics
- Domain Expertise: Specialized experience in one or more: Egocentric Perception, Vision-Language-Action (VLA) models, SLAM, or Sim-to-Real transfer
- Large-Scale Data: Experience architecting data pipelines for high-dimensionality, multi-modal datasets
- Publication Record: Contributions to the research community through publications at leading venues (CVPR, ICCV, NeurIPS, ICRA, RSS) or significant patent filings
- Communication: Ability to communicate complex technical trade-offs to both technical peers and non-technical stakeholders
$88.46/hour to $257,000/year + bonus + equity + benefits
