Myrtle.AI sets new deep learning standard

15-11-2018: Trains a ResNet network 2.5x faster, using 8x fewer GPUs, at a quarter of the cost of the previous benchmark winner

Read this blog series to find out how

Now that the dust has settled after the DawnBench competition (held earlier this year) and the results have been made publicly available, David Page, Chief Scientist here at Myrtle.AI, decided to revisit the object recognition training times for CIFAR-10 data and review the training methods and network designs used to achieve the winning result. David took his investigation a stage further by applying Myrtle.AI's expertise to the training process and the model design to see if he could improve on the results.

The aim of DawnBench was to allow fair speed comparisons to be made between different training and inference methods through a series of benchmarking competitions. Processing time and cost are critical resources when training models for real-world applications, yet many existing benchmarks focus solely on model accuracy.

David was particularly interested in the CIFAR-10 object recognition benchmark. The only requirement was to deliver the fastest and cheapest image classifier to achieve 94% accuracy on the CIFAR-10 data.

By the time the CIFAR-10 competition closed in April, the fastest single-GPU entry was achieved by Ben Johnson, a Fast.AI student, who reached 94% accuracy in under 6 minutes (341 seconds). Johnson was able to improve on previous results by applying mixed-precision training, selecting a smaller network with sufficient capacity for the task, and employing higher learning rates to speed up stochastic gradient descent (SGD).
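To illustrate the higher-learning-rate idea, here is a minimal sketch of SGD driven by a piecewise-linear learning-rate schedule: ramp up quickly to a high peak rate, then anneal back down, which is what lets training tolerate learning rates far above a safe constant value. The schedule shape, peak rate, and the toy quadratic loss are illustrative assumptions, not the actual competition configuration.

```python
def lr_schedule(step, total_steps, peak_lr=0.4, warmup_frac=0.2):
    """Ramp linearly up to peak_lr, then linearly back down to zero."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

def sgd_step(w, grad, lr):
    """Plain SGD update: w <- w - lr * grad."""
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Toy quadratic loss L(w) = sum(w_i^2), so the gradient is 2*w
# and the minimum is at the origin.
w = [1.0, -2.0]
total_steps = 100
for step in range(total_steps):
    grad = [2 * wi for wi in w]
    w = sgd_step(w, grad, lr_schedule(step, total_steps))

print(w)  # weights driven toward the minimum at the origin
```

The low rates at the start and end keep the updates stable, while the high peak rate in the middle covers most of the distance to the minimum quickly.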

Object recognition is a hot topic in deep learning research because of the huge volume of photos and video streams in social and commercial use. However, the final competition entries didn't reflect cutting-edge training practices and model design approaches.

David decided to tackle this time of 341 seconds to see if it could be reduced. In a series of blogs he explains how he did this and the challenges he encountered along the way, such as forgetfulness, speed and efficiency, and the measures he took to overcome them. Most exciting of all, he was able to demonstrate that by applying Myrtle.AI's training techniques to the CIFAR-10 data, the end result was 4.5 times faster than the previous benchmark winner's.

To read more, click here. Myrtle.AI does the engineering to make deep learning a low-power, high-performance reality today.