SonicJobs Logo
Left arrow iconBack to search

Deep Learning Intern — LLM Research & Model Safety

A10 Networks, Inc
Posted 4 months ago, valid for 17 days
Location

San Jose, CA 95103, US

Salary

$50 per hour

Contract type

Full Time

By applying, a Sonicjobs account will be created for you. Sonicjobs's Privacy Policy and Terms & Conditions will apply.

SonicJobs' Terms & Conditions and Privacy Policy also apply.

Sonic Summary

info
  • We are looking for a Deep Learning Intern focused on LLM research and model safety for a 12-week, full-time, on-site internship in San Jose, California.
  • The role involves researching methods to enhance the safety, interpretability, and reliability of Large Language Models, as well as fine-tuning and evaluating these models.
  • Candidates must be currently enrolled in a Bachelor’s, Master’s, or PhD program in Computer Engineering or a related field, with graduation expected between December 2026 and June 2027.
  • Preferred qualifications include strong programming skills in Python, experience with deep learning frameworks, and an interest in AI safety and interpretability.
  • Compensation for the internship is $50/hour for BS candidates, $58/hour for MS candidates, and $65/hour for PhD candidates.
Deep Learning Intern — LLM Research & Model Safety

We’re seeking a Deep Learning Intern passionate about advancing Large Language Model (LLM) research, with a focus on safety, interpretability, and alignment. In this role, you’ll investigate model behavior, identify vulnerabilities, and design fine-tuning and evaluation strategies to make AI systems more robust and trustworthy

You’ll collaborate with researchers and engineers to experiment with LLMs, Vision-Language Models (VLMs), and multimodal architectures, contributing to next-generation AI systems that are both powerful and safe

This is a 12-week, full-time, on-site internship at our San Jose, California office, where you’ll work on high-impact projects that directly support our mission. We’re looking for motivated students eager to apply their technical and research skills to shape the future of responsible AI

Your Responsibilities

  • Research and prototype methods to improve safety, interpretability, and reliability of LLMs
  • Fine-tune pre-trained LLMs on curated datasets for task adaptation and behavioral control
  • Design evaluation frameworks to measure robustness, alignment, and harmful output rates
  • Conduct adversarial and red-teaming experiments to uncover weaknesses in model responses
  • Collaborate with engineering teams to integrate findings into production inference systems
  • Explore and experiment with multimodal model extensions, including VLMs and audio-based models
  • Stay up-to-date with the latest research on model alignment, parameter-efficient tuning, and safety benchmarks

Qualifications – You Must

  • Currently enrolled in a Bachelor’s, Master’s, or PhD program in Computer Engineering or a related field in the U.S. for the full duration of the internship
  • Graduation expected between December 2026 – June 2027
  • Available for 12 weeks between May–August 2026 or June–September 2026

Preferred Qualifications

  • Strong programming skills in Python and experience with deep learning frameworks (PyTorch or TensorFlow)
  • Understanding of transformer architectures, attention mechanisms, and scaling laws
  • Experience or coursework in LLM fine-tuning, LoRA/QLoRA, or instruction-tuning methods
  • Familiarity with evaluation datasets and safety benchmarks (e.g., HELM, TruthfulQA, JailbreakBench)
  • Interest in AI safety, interpretability, or bias detection
  • Exposure to Vision-Language Models (VLMs), speech/audio models, or multimodal architectures is a plus
  • Ability to implement research ideas into working prototypes efficiently

What You’ll Gain

  • Hands-on experience in LLM and multimodal model research, focusing on safety and performance
  • Exposure to fine-tuning, red-teaming, and evaluation of frontier AI models
  • Mentorship from experts working at the intersection of deep learning research and AI safety engineering
  • Opportunities to publish internal studies or papers and contribute to real-world model safety initiatives

Compensation: 

BS: $50/hour

MS: $58/hour

PhD: $65/hour




Learn more about this Employer on their Career Site

Apply now in a few quick clicks

By applying, a Sonicjobs account will be created for you. Sonicjobs's Privacy Policy and Terms & Conditions will apply.

SonicJobs' Terms & Conditions and Privacy Policy also apply.