NVIDIA
Senior Software Engineer - Inference as a Service
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. We are seeking a Senior Software Engineer to join our Software Infrastructure Team, where you will be a key contributor to our Inference as a Service platform, developing systems that manage GPU resources and ensure service stability.
AI Infrastructure · Artificial Intelligence (AI) · Consumer Electronics · Foundational AI · GPU · Hardware · Software · Virtual Reality
Responsibilities
Contribute to the design and development of a scalable, robust, and reliable platform for serving AI models for inference as a service
Develop and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads
Build and maintain the core infrastructure, including load balancing and rate limiting, to ensure the stability and high availability of inference services
Implement APIs for model deployment, monitoring, and management that provide a seamless user experience
Collaborate with engineering teams to integrate deployment, monitoring, and performance telemetry into our CI/CD pipelines
Build tools and frameworks for real-time observability, performance profiling, and debugging of inference services
Work with architects to define and implement best practices for long-term platform evolution
Contribute to NVIDIA's AI Factory initiative by building a foundational platform that supports model serving needs
Qualifications
Required
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, other Engineering, or related fields (or equivalent experience)
12+ years of software engineering experience with expertise in distributed systems or large-scale backend infrastructure
Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems
Proven experience with container orchestration technologies like Kubernetes
Strong understanding of system architecture for high-performance, low-latency API services
Experience in designing, implementing, and optimizing systems for GPU resource management
Familiarity with modern observability tools (e.g., Datadog, Prometheus, Grafana, OpenTelemetry)
Demonstrated experience with deployment strategies and CI/CD pipelines
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment
Preferred
Experience with specialized inference serving frameworks
Open-source contributions to projects in the AI/ML, distributed systems, or infrastructure space
Hands-on experience with performance optimization techniques for AI models, such as quantization or model compression
Expertise in building platforms that support a wide variety of AI model architectures
Strong understanding of the full lifecycle of an AI model, from training to deployment and serving
Benefits
Equity
Company
NVIDIA
NVIDIA is a computing platform company operating at the intersection of graphics, HPC, and AI.
H1B Sponsorship
NVIDIA has a track record of offering H1B sponsorships. Please note that this does not guarantee sponsorship for this specific role; the figures below are provided for reference. (Data powered by the US Department of Labor)
Trends of Total Sponsorships
2025 (1877)
2024 (1355)
2023 (976)
2022 (835)
2021 (601)
2020 (529)
Funding
Current Stage: Public Company
Total Funding: $4.09B
Key Investors: ARPA-E, ARK Investment Management, SoftBank Vision Fund
2023-05-09: Grant · $5M
2022-08-09: Post-IPO Equity · $65M
2021-02-18: Post-IPO Equity
Company data provided by Crunchbase