Genesis AI
Staff Software Engineer, Inference (Bay Area / Paris / Remote)
Genesis AI builds advanced AI solutions and is seeking a Staff Software Engineer specializing in inference. The role involves developing low-latency inference pipelines and optimizing distributed inference systems, with a strong emphasis on performance and reliability.
Artificial Intelligence (AI) · Robotics
Responsibilities
Build low-latency inference pipelines for on-device deployment, enabling real-time next-token and diffusion-based control loops in robotics
Design and optimize distributed inference systems on GPU clusters, pushing throughput with large-batch serving and efficient resource utilization
Implement efficient low-level code (CUDA, Triton, custom kernels) and integrate it seamlessly into high-level frameworks
Optimize workloads for both throughput (batching, scheduling, quantization) and latency (caching, memory management, graph compilation)
Develop monitoring and debugging tools to guarantee reliability, determinism, and rapid diagnosis of regressions across both stacks
Qualifications
Required
Deep experience in distributed systems, ML infrastructure, or high-performance serving (8+ years)
Production-grade expertise in Python and a strong background in systems languages (C++/Rust/Go)
Low-level performance mastery: CUDA, Triton, kernel optimization, quantization, memory and compute scheduling
Proven track record scaling inference workloads in both throughput-oriented cluster environments and latency-critical on-device deployments
System-level mindset with a history of tuning hardware–software interactions for maximum efficiency, throughput, and responsiveness