
Senior Software Engineer - NIM Factory Container and Cloud Infrastructure

NVIDIA
Posted a month ago, valid for 16 days
Location

Santa Clara, CA 95052, US

Salary

$184,000 - $356,500 per year

Contract type

Full Time

By applying, a Sonicjobs account will be created for you. Sonicjobs's Privacy Policy and Terms & Conditions will apply.


Sonic Summary

  • NVIDIA is looking for a Senior Software Engineer specializing in container and cloud infrastructure with a focus on NVIDIA Inference Microservices (NIMs).
  • Candidates should have 10+ years of experience in building production software, particularly with containers and Kubernetes, along with strong Python skills.
  • The base salary for this position ranges from 184,000 USD to 287,500 USD for Level 4 and 224,000 USD to 356,500 USD for Level 5, with eligibility for equity and benefits.
  • The role involves designing and implementing container strategies, optimizing performance, and collaborating across teams to ensure the availability of new models.
  • NVIDIA values diversity and is committed to being an equal opportunity employer, welcoming applications until at least February 9, 2026.

NVIDIA is the platform upon which every new AI-powered application is built. We are seeking a Senior Software Engineer focused on container and cloud infrastructure. You will help design and implement our core container strategy for NVIDIA Inference Microservices (NIMs) and our hosted services. You will build enterprise-grade software and tooling for container build, packaging, and deployment. You will help improve reliability, performance, and scale across thousands of GPUs. There is much more to build ahead, including support for disaggregated LLM inference and other emerging deployment patterns.

What you'll be doing:

  • Design, build, and harden containers for NIM runtimes and inference backends; enable reproducible, multi-arch, CUDA-optimized builds.

  • Develop Python tooling and services for build orchestration, CI/CD integrations, Helm/Operator automation, and test harnesses; enforce quality with typing, linting, and unit/integration tests.

  • Help design and evolve Kubernetes deployment patterns for NIMs, including GPU scheduling, autoscaling, and multi-cluster rollouts.

  • Optimize container performance: layer layout, startup time, build caching, runtime memory/IO, network, and GPU utilization; instrument with metrics and tracing.

  • Evolve the base image strategy, dependency management, and artifact/registry topology.

  • Collaborate across research, backend, SRE, and product teams to ensure day-0 availability of new models.

  • Mentor teammates; set high engineering standards for container quality, security, and operability.

What we need to see:

  • 10+ years building production software with a strong focus on containers and Kubernetes.

  • Strong Python skills building production-grade tooling/services.

  • Experience with Python SDKs and clients for Kubernetes and cloud services.

  • Expert knowledge of Docker/BuildKit, containerd/OCI, image layering, multi-stage builds, and registry workflows.

  • Deep experience operating workloads on Kubernetes.

  • Strong understanding of LLM inference features, including structured output, KV caching, and LoRA adapters.

  • Hands-on experience building and running GPU workloads in k8s, including NVIDIA device plugin, MIG, CUDA drivers/runtime, and resource isolation.

  • Excellent collaboration and communication skills; ability to influence cross-functional design.

  • A degree in Computer Science, Computer Engineering, or a related field (BS or MS) or equivalent experience.

Ways to stand out from the crowd:

  • Expertise with Helm chart design systems, Operators, and platform APIs serving many teams.

  • Experience with the OpenAI API and Hugging Face APIs, as well as an understanding of different inference backends (vLLM, SGLang, TRT-LLM).

  • Background in benchmarking and optimizing inference container performance and startup latency at scale.

  • Prior experience designing multi-tenant, multi-cluster, or edge/air-gapped container delivery.

  • Contributions to open-source container, k8s, or GPU ecosystems.

With competitive salaries and a generous benefits package, NVIDIA is widely considered to be one of the technology industry's most desirable employers. We have some of the most forward-thinking and versatile people in the world working with us, and our engineering teams are growing fast in some of the most impactful fields of our generation: Deep Learning, Artificial Intelligence, and Autonomous Vehicles. If you're a creative engineer who enjoys autonomy and shares our passion for technology, we want to hear from you.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until February 9, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

#deeplearning


