Machine Learning Research Engineering Intern
Cerebras is developing a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully integrated system delivers unprecedented performance because it is built from the ground up for the deep learning workload.
Cerebras is building a team of exceptional people to work together on big problems. Join us!
Cerebras is developing both novel algorithms to accelerate training of existing neural network architectures and new, custom network architectures for the next generation of deep learning accelerators. For this internship position, we are looking for hands-on researchers who can:
- Take an algorithm from inception, to TensorFlow implementation, to results competitive with state-of-the-art on benchmarks such as ImageNet classification.
- Develop algorithms for training and inference with sparse weights and sparse activations.
- Develop algorithms for training at unprecedented levels of scale and parallelism.
- Publish results in blog posts and white papers, and at Machine Learning conferences.
Skills & Qualifications
- Graduate and undergraduate students with a background in Deep Learning and Neural Networks.
- Experience with supervised deep learning models such as RNNs and CNNs for computer vision, language modeling and speech recognition.
- Publications in Machine Learning areas such as supervised, unsupervised, and reinforcement learning, or in statistical modeling such as generative and probabilistic modeling.
- Experience with high-performance machine learning methods such as distributed training, parameter servers, and synchronous and asynchronous model parallelism.
- Experience with 16-bit / low-precision training and inference using half-precision floating point or fixed point.
- Experience with model compression and model quantization.
Our cozy and well-appointed headquarters are in the heart of Silicon Valley near downtown Los Altos, California.
Our beautiful San Diego offices overlook views of the Sorrento Valley canyon.