Cerebras is developing a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.
Cerebras is building a team of exceptional people to work together on big problems. Join us!
As a member of our Compiler team, you will work with leaders from industry and academia to develop entirely new solutions for the toughest problems in AI compute.
As deep neural network architectures evolve, they are becoming enormously parallel and distributed. Compilers are needed to optimize the mapping of computation graphs to compute nodes. In this position, you will build the tools that generate distributed memory code from evolving intermediate representations.
- Design graph semantics, intermediate representations, and abstraction layers between high-level definitions (such as TensorFlow’s XLA) and low-level distributed code.
- Apply state-of-the-art parallelization and partitioning techniques to automate code generation, exploiting hand-written distributed kernels.
- Identify and implement novel program analysis and optimization techniques.
- Employ and extend state-of-the-art program analysis tools such as the Integer Set Library (isl).
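To make the partitioning problem above concrete, here is a minimal illustrative sketch (not Cerebras code, and far simpler than a production partitioner): a toy greedy scheme that maps a dataflow graph's operations onto compute nodes by balancing estimated cost, then counts the cross-node edges that would become communication. The op names, cost model, and function names are all hypothetical.

```python
# Toy example: greedily map a dataflow graph's ops onto compute nodes,
# balancing estimated per-node cost, then count cross-node edges
# (each such edge implies communication between nodes).

def partition(ops, num_nodes):
    """ops: {op_name: estimated_cost}; returns {op_name: node_index}."""
    load = [0.0] * num_nodes
    placement = {}
    # Place each op, heaviest first, on the currently least-loaded node.
    for op in sorted(ops, key=ops.get, reverse=True):
        node = min(range(num_nodes), key=lambda n: load[n])
        placement[op] = node
        load[node] += ops[op]
    return placement

def comm_edges(edges, placement):
    """Edges whose endpoints land on different nodes require communication."""
    return [(src, dst) for src, dst in edges
            if placement[src] != placement[dst]]

if __name__ == "__main__":
    ops = {"conv1": 4.0, "conv2": 4.0, "relu": 1.0, "fc": 2.0}
    edges = [("conv1", "relu"), ("relu", "conv2"), ("conv2", "fc")]
    placement = partition(ops, num_nodes=2)
    print(placement)
    print("cross-node edges:", comm_edges(edges, placement))
```

A real compiler would instead search over placements that jointly minimize load imbalance and communication volume, which is where the parallelization and partitioning techniques mentioned above come in.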
Cerebras is hiring full-time team members as well as interns.
Skills & Qualifications
- Master’s, PhD, or foreign equivalents in computer science, engineering, or related field.
- Two or more years of related work experience on compilers and distributed systems.
- Experience generating and optimizing code.
- Familiarity with high-level parallel program analysis and optimization.
- Experience with LLVM compiler internals.
- Familiarity with polyhedral models.
- Familiarity with HPC kernels and their optimization.
Locations
- Headquarters/Los Altos Office
- Remote Office
- San Diego Office
- Toronto Office