Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

[Figure: An image with multiple objects: a person, a dog, and cars.]
[Figure: Two images for digit classification. A network should classify both as ‘4’ but with higher uncertainty for the left image.]
[Figure: Neural network with dropout. Neurons are randomly dropped during training.]
[Figure: Neural network with dropout enabled during testing. Multiple feed-forward passes for the same input generate multiple outputs.]
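The test-time procedure in the last figure is the Monte Carlo (MC) dropout estimate: the same input is passed through the network several times with dropout kept active, and the spread of the outputs reflects the model's uncertainty. Below is a minimal sketch of that idea; the PyTorch model, layer sizes, and the `num_samples` value are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

# Illustrative model: dropout placed before each fully-connected layer.
model = nn.Sequential(
    nn.Flatten(),
    nn.Dropout(p=0.5),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

def mc_dropout_predict(model, x, num_samples=50):
    """Monte Carlo dropout: multiple stochastic forward passes for one input."""
    model.train()  # keep dropout active at test time (in a real model, enable only the dropout modules)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(num_samples)])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and spread

x = torch.randn(1, 1, 28, 28)           # dummy MNIST-like digit
mean, std = mc_dropout_predict(model, x)
print(mean.argmax(dim=-1), std.max())   # predicted class and a crude uncertainty score
```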
  1. I find the paper's contribution significant.
  2. I humbly express my admiration for the theoretical foundation.
  3. A round of applause is due for the released GitHub source code.
  4. From my practical experience, I have one minor piece of negative feedback. Phrasing such as “with dropout applied before every weight layer” gives the impression that dropout layers can simply be placed before every trainable layer. This confused me, probably because I belong to the “deep learning generation”. I would prefer “before every fully-connected layer”, since a convolution layer is also a weight layer yet requires different handling (see the sketch after this list).
  5. Building on the previous comment, I must mention follow-up papers that extend the current theoretical foundation to CNNs [3] and RNNs [4].
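As a concrete illustration of comment 4, here is a hedged sketch of the placement I have in mind: dropout before the fully-connected layers only, while the convolution layers, which are also weight layers, are left untouched. The architecture and layer sizes are made up for illustration; CNN-specific treatments of dropout are the subject of [3].

```python
import torch.nn as nn

# Dropout only before the fully-connected (Linear) layers; the convolution
# layers are weight layers too, but they are not wrapped in dropout here.
cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),                      # before the first fully-connected layer
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
    nn.Dropout(p=0.5),                      # before the second fully-connected layer
    nn.Linear(128, 10),
)
```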

Ahmed Taha

I write reviews on computer vision papers. Writing tips are welcome.