Host IO Engineer
Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully-integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.
Cerebras is building a team of exceptional people to work together on big problems. Join us!
You will contribute to the high-performance software used for communicating with the CS-1 at 1 Tb/s and beyond. Feeding configuration and training data from client systems to the CS-1 is a huge challenge that requires optimized data structures and algorithms that take full advantage of the available hardware resources, including CPU, memory, storage, and network bandwidth.
The software must be built with a high degree of concurrency across threads, processes, cores, and systems. The domain of this engineer is all the software between ML frameworks’ interfaces for hardware accelerators and the Linux IO system calls.
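As an illustration only (not Cerebras code), the producer-consumer concurrency described above can be sketched with Python's standard library; the names `feed_pipeline` and the doubling "preprocessing" step are hypothetical stand-ins for the real framing, compression, and send stages:

```python
import queue
import threading

def feed_pipeline(chunks, num_workers=4):
    """Toy producer-consumer pipeline: worker threads 'preprocess'
    data chunks in parallel while the main thread collects results,
    mimicking overlapped read/transform/send stages."""
    work_q = queue.Queue()
    done_q = queue.Queue()

    def worker():
        while True:
            item = work_q.get()
            if item is None:        # poison pill: shut this worker down
                break
            # Stand-in for real work (e.g. reshaping, compressing a chunk).
            done_q.put(item * 2)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()

    for c in chunks:                # producer side: enqueue all chunks
        work_q.put(c)
    for _ in threads:               # one poison pill per worker
        work_q.put(None)
    for t in threads:
        t.join()

    results = []
    while not done_q.empty():
        results.append(done_q.get())
    return sorted(results)          # workers finish in arbitrary order
```

A real host-IO path would replace the queues with lock-free or zero-copy structures and pin threads to cores, but the shutdown-by-sentinel pattern shown here is a common starting point.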
Skills & Qualifications
- Bachelor's / Master's degree or foreign equivalent in Computer Science, Engineering, or related field
- Strong programming skills in C++ and Python, including multi-threaded and multi-process software
- Prior projects that used software to unlock the full potential of hardware
Positions
- Summer Internship
- New Grad / Full Time

Locations
- Headquarters / Los Altos Office
- Remote Office
- San Diego Office
- Toronto Office