High-Performance AI Compute Engineer

Cisco Systems, Inc.
United States, California, San Jose
170 W Tasman Dr
Jul 20, 2025

Meet the Team

We are an innovation team on a mission to transform how enterprises harness AI. Operating with the agility of a startup and the focus of an incubator, we're building a tight-knit group of AI and infrastructure experts driven by bold ideas and a shared goal: to rethink systems from the ground up and deliver breakthrough solutions that redefine what's possible - faster, leaner, and smarter.
We thrive in a fast-paced, experimentation-rich environment where new technologies aren't just welcome - they're expected. Here, you'll work side-by-side with seasoned engineers, architects, and thinkers to craft the kind of iconic products that can reshape industries and unlock entirely new models of operation for the enterprise.
If you're energized by the challenge of solving hard problems, love working at the edge of what's possible, and want to help shape the future of AI infrastructure - we'd love to meet you.

Impact

As a High-Performance AI Compute Engineer, you will be instrumental in defining and delivering the next generation of enterprise-grade AI infrastructure. As a principal engineer within our GPU and CUDA Runtime team, you will play a critical role in shaping the future of high-performance compute infrastructure. Your contributions will directly influence the performance, reliability, and scalability of large-scale GPU-accelerated workloads, powering mission-critical applications across AI/ML, scientific computing, and real-time simulation.

You will be responsible for developing low-level components that bridge user space and kernel space, optimizing memory and data transfer paths, and enabling cutting-edge interconnect technologies like NVLink and RDMA. Your work will ensure that systems efficiently utilize GPU hardware to its full potential, minimizing latency, maximizing throughput, and improving developer experience at scale.

This role offers the opportunity to impact both open and proprietary systems, working at the intersection of device driver innovation, runtime system design, and platform integration.

Key Responsibilities

  • Design, develop, and maintain device drivers and runtime components for the system's GPU and network subsystems.
  • Work with kernel and platform components to build efficient memory management paths using pinned memory, peer-to-peer transfers, and unified memory.
  • Optimize data movement using high-speed interconnects such as RDMA, InfiniBand, NVLink, and PCIe, with a focus on reducing latency and increasing bandwidth.
  • Implement and fine-tune GPU memory copy paths with awareness of NUMA topologies and hardware coherency.
  • Develop instrumentation and telemetry collection mechanisms to monitor GPU and memory performance without impacting runtime workloads.
  • Contribute to internal tools and libraries for GPU system introspection, profiling, and debugging.
  • Provide technical mentorship and peer reviews, and guide junior engineers on best practices for low-level GPU development.
  • Stay current with evolving GPU architectures, memory technologies, and industry standards.

Minimum Qualifications

  • 18+ years of experience in systems programming, ideally with 5+ years focused on CUDA/GPU driver and runtime internals.
  • 5+ years of experience with kernel-space development, ideally in Linux kernel modules, device drivers, or GPU runtime libraries (e.g., CUDA, ROCm, or OpenCL runtimes).
  • Direct experience working with NVIDIA GPU architecture, CUDA toolchains, and performance tools (Nsight, CUPTI, etc.).
  • Experience optimizing for NVLink, PCIe, Unified Memory (UM), and NUMA architectures.
  • Strong grasp of RDMA, InfiniBand, and GPUDirect technologies and their use in frameworks like UCX.
  • 8+ years of experience programming in C/C++ with low-level systems proficiency (memory management, synchronization, cache coherence).
  • Strong understanding of multi-threaded and asynchronous programming models.
  • Deep understanding of HPC workloads, performance bottlenecks, and compute/memory tradeoffs.
  • Expertise in zero-copy memory access, pinned memory, peer-to-peer memory copy, and device memory lifetimes.

Preferred Qualifications

  • Familiarity with Python and AI frameworks such as PyTorch.
  • Familiarity with assembly or PTX/SASS for debugging or optimizing CUDA kernels.
  • Familiarity with NVMe storage offloads, IOAT/DPDK, or other DMA-based acceleration methods.
  • Familiarity with Valgrind, cuda-memcheck, gdb, and profiling with Nsight Compute/Systems.
  • Proficiency with perf, ftrace, eBPF, and other Linux tracing tools.

#WeAreCisco

#WeAreCisco where every individual brings their unique skills and perspectives together to pursue our purpose of powering an inclusive future for all.

Our passion is connection - we celebrate our employees' diverse set of backgrounds and focus on unlocking potential. Cisconians often experience one company, many careers where learning and development are encouraged and supported at every stage. Our technology, tools, and culture pioneered hybrid work trends, allowing all to not only give their best, but be their best.

We understand our outstanding opportunity to bring communities together and at the heart of that is our people. One-third of Cisconians collaborate in our 30 employee resource organizations, called Inclusive Communities, to connect, foster belonging, learn to be informed allies, and make a difference. Dedicated paid time off to volunteer - 80 hours each year - allows us to give back to causes we are passionate about, and nearly 86% do!

Our purpose, driven by our people, is what makes us the worldwide leader in technology that powers the internet. Helping our customers reimagine their applications, secure their enterprise, transform their infrastructure, and meet their sustainability goals is what we do best. We ensure that every step we take is a step towards a more inclusive future for all. Take your next step and be you, with us!

