Machine Learning Algorithms Researcher
Cerebras is developing a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully integrated system delivers unprecedented performance because it is built from the ground up for the deep learning workload.
Cerebras is building a team of exceptional people to work together on big problems. Join us!
Cerebras is developing both novel algorithms to accelerate training of existing neural network architectures and new, custom network architectures for the next generation of deep learning accelerators. For this internship position, we are looking for hands-on researchers who can:
- Take an algorithm from inception, to TensorFlow or PyTorch implementation, to results competitive with state-of-the-art on benchmarks such as ImageNet classification.
- Develop algorithms for training and inference with sparse weights and sparse activations.
- Develop algorithms for training at unprecedented levels of scale and parallelism.
- Publish results at machine learning conferences and in company communications such as blog posts and white papers.
Skills & Qualifications
- Publications in machine learning areas such as supervised, unsupervised, or reinforcement learning, or in statistical modeling such as generative and probabilistic modeling.
- Graduate and undergraduate students with a background in Deep Learning and Neural Networks
- Experience with deep learning models such as Transformers, RNNs, and CNNs for language modeling, speech recognition, and computer vision
- Experience with high-performance machine learning methods such as distributed training, parameter servers, and synchronous or asynchronous model parallelism.
- Experience with 16-bit / low-precision training and inference using half-precision floating point or fixed point.
- Experience with model compression and model quantization.
Position Type: Summer Internship & New Grad / Full Time
Locations:
- Headquarters/Los Altos Office
- Remote Office
- San Diego Office
- Toronto Office