Compiler Engineering Intern
Cerebras is developing a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully-integrated system delivers unprecedented performance because it is built from the ground up for the deep learning workload.
Cerebras is building a team of exceptional people to work together on big problems. Join us!
As an intern on our Compiler team, you will work with leaders from industry and academia to develop entirely new solutions for the toughest problems in AI compute.
As deep neural network architectures evolve, they are becoming enormously parallel and distributed, and compilers are needed to optimize the mapping of computation graphs onto compute nodes. In this position, you will build the tools that generate distributed-memory code from evolving intermediate representations.
Responsibilities
- Design graph semantics, intermediate representations, and abstraction layers between high-level definitions (such as TensorFlow’s XLA) and low-level distributed code.
- Apply state-of-the-art parallelization and partitioning techniques to automate code generation that exploits hand-written distributed kernels.
- Identify and implement novel program analysis and optimization techniques.
- Employ and extend state-of-the-art program analysis methods and tools, such as the Integer Set Library (isl).
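To make the mapping problem above concrete, here is a toy sketch of partitioning a computation graph across compute nodes. All names and the greedy heuristic are hypothetical illustrations, not Cerebras internals; real compilers use far more sophisticated cost models and partitioners.

```python
# Toy illustration (hypothetical): greedily partition a computation graph's
# operators across compute nodes, balancing total cost per node, then count
# the cut edges that would become inter-node communication.

def partition(op_costs, num_nodes):
    """Assign each op to the currently least-loaded node (greedy balance)."""
    loads = [0.0] * num_nodes
    assignment = {}
    # Place the heaviest ops first so large ops spread across nodes.
    for op, cost in sorted(op_costs.items(), key=lambda kv: -kv[1]):
        node = loads.index(min(loads))
        assignment[op] = node
        loads[node] += cost
    return assignment, loads

def cut_edges(edges, assignment):
    """Count edges whose endpoints land on different nodes -- each one
    implies communication in the generated distributed-memory code."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Example: a tiny four-op graph mapped onto two nodes.
costs = {"matmul1": 8.0, "matmul2": 8.0, "relu": 1.0, "softmax": 2.0}
edges = [("matmul1", "relu"), ("relu", "matmul2"), ("matmul2", "softmax")]
assignment, loads = partition(costs, num_nodes=2)
print(assignment, loads, cut_edges(edges, assignment))
```

A production partitioner would trade off load balance against communication volume (the cut edges), which this greedy sketch ignores entirely.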
Skills & Qualifications
- Graduate or undergraduate student in Computer Science with a background in compilers and parallel programming.
- Two or more years of related experience with compilers and distributed systems.
- Experience generating and optimizing code.
- Familiarity with high-level parallel program analysis and optimization.
- Familiarity with LLVM compiler internals.
- Familiarity with polyhedral models.
- Familiarity with HPC kernels and their optimization.
Our cozy and well-appointed headquarters are in the heart of Silicon Valley near downtown Los Altos, California.
Our beautiful San Diego offices overlook the Sorrento Valley canyon.