The Computational Limits of Deep Learning

Deep learning's recent history has been one of achievement: from triumphing over humans in the game of Go to world-leading performance in image classification, voice recognition, translation, and other tasks. But this progress has come with a voracious appetite for computing power. This project catalogs the extent of this dependency, showing that progress across a wide variety of applications is strongly reliant on increases in computing power. Extrapolating this reliance forward reveals that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable. Continued progress in these applications will therefore require dramatically more computationally efficient methods, which will have to come either from changes to deep learning or from moving to other machine learning methods.
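
As a rough illustration of the kind of extrapolation described above, the sketch below fits a power law relating benchmark error to training compute and projects the compute needed to reach a target error. This is not the project's actual code, and all numbers are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data, not the project's own analysis):
# fit log(error) = a * log(compute) + b, then extrapolate.
import numpy as np

# Hypothetical benchmark history: (training compute in FLOPs, error rate).
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
error = np.array([0.30, 0.24, 0.19, 0.15, 0.12])

# Least-squares fit of a power law in log-log space.
a, b = np.polyfit(np.log10(compute), np.log10(error), deg=1)

# Compute required to reach a target error rate under this fit.
target_error = 0.05
required_compute = 10 ** ((np.log10(target_error) - b) / a)
print(f"slope a = {a:.3f}; reaching {target_error:.0%} error "
      f"would take ~{required_compute:.2e} FLOPs")
```

Because the fitted slope is shallow, each further reduction in error demands a multiplicative increase in compute, which is what drives the sustainability concern.
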

Benchmarks

Question Answering on SQuAD 1.1

ANNA large (single model)
LUKE
XLNet (single model)
XLNET-123 (single model)
SpanBERT (single model)
BERT (single model)
MAMCN+ (single model)
Reinforced Mnemonic Reader (single model)
BiDAF + Self Attention + ELMo (single model)

Want to contribute?

You can access our database to point out errors or suggest changes.

Go to database