wasshuber

Programming Machine Learning: Help: weird results I don't understand

I encountered something that I can’t explain. Any help, tips, or explanations would be great.

I followed the one-hidden-layer example with 100 nodes and the sigmoid activation function. It works great: I can get to 98.6% accuracy with a learning rate of 1.0, a batch size of 1000, and 100 epochs.
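For context, here is roughly the shape of the forward pass I am working with. This is my own paraphrase, not the book's exact code, and it leaves out the bias terms and the softmax on the output:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, w1, w2):
    a = X @ w1           # weighted sums (pre-activation) of the hidden layer
    h = sigmoid(a)       # output of the hidden layer
    return a, h, h @ w2  # the last term feeds the output layer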

I then decided to swap the sigmoid activation function for ReLU. This is not done in the book at this point, but it is easy enough to program ReLU and its derivative. Here is the Python code I used:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_gradient(z):
    return (z > 0) * 1  # 1 where z is positive, 0 elsewhere

This works fine as long as one reduces the learning rate, which I did, to 0.1. It reaches about the same accuracy as the sigmoid. I then made one seemingly insignificant change to the gradient of the ReLU: instead of z > 0 I wrote z >= 0, so the gradient code became:

def relu_gradient(z):
    return (z >= 0) * 1  # 1 where z is non-negative

This, I thought, should not make any difference: how often would z be exactly zero? How often would the weighted sum of all inputs, in floating point, be exactly zero? Perhaps never. Even if it were zero occasionally, it should hardly make a big difference. But to my surprise it makes a profound difference: I can only get to about 95%. Why is there an almost 4% difference in accuracy from such an insignificant change? Something weird must be happening.
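To back up that intuition, here is a quick check, not from the book, of how often a random weighted sum lands exactly on zero in floating point:

import numpy as np

# With dense random data, an exactly-zero weighted sum essentially
# never happens.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 784))  # 1000 fake inputs, 784 features each
w = rng.standard_normal((784, 100))   # weights for 100 hidden nodes
z = X @ w
print(np.mean(z == 0.0))              # prints 0.0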

I tried this several times to rule out an unlucky random initialization. I also tried different learning rates and different batch sizes; none of it changed the result. I checked for dead neurons and found none. If somebody can tell me what is going on here, I would really appreciate it.

Most Liked

wasshuber

Turns out it was a bug. Using the nomenclature of the book, I was feeding h into the gradient function when I should have fed a into it. Since h is the output of the ReLU, it is never negative, so with the >= comparison every gradient came out as 1 and the network behaved as if it had a linear activation function. (The linear activation function does indeed produce about 94% accuracy.) The strict > comparison had masked the bug, because h > 0 exactly when a > 0, so the wrong input still produced the right gradient. With the gradient function fed properly, I get the expected results, and it no longer matters whether one uses > or >=.
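Here is a minimal reconstruction of the bug for anyone curious; the array values are made up for illustration:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_gradient(z):
    return (z >= 0) * 1

# Made-up values standing in for the book's variables:
# a = weighted sums (pre-activation), h = relu(a) (post-activation).
a = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
h = relu(a)

print(relu_gradient(a))  # correct input: [0 0 1 1 1]
print(relu_gradient(h))  # buggy input:   [1 1 1 1 1] -- h is never
                         # negative, so '>=' turns every gradient into 1

With the strict > comparison, the buggy call returns the same values as the correct one, which is why the mistake stayed hidden until I tried >=.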

I am happy I found this bug. But this is also part of why your book is so great: programming everything yourself forces one to understand the little details and allows one to change and modify the algorithms at their very core, which leads to a much deeper understanding of how this all works.

Here is an insight my experimentation produced. I tested a bunch of different activation functions, including weird piecewise linear ones, periodic ones built from sin and cos, combinations thereof, and so on. It surprised me that, with a single hidden layer, many work just as well as ReLU or sigmoid. (I intend to extend this experimentation to multiple hidden layers.) For example, it is kind of shocking at first that the absolute value function works just as well as ReLU. This makes some sense in the biological case: a neuron, being a cell, would not be completely identical to its neighbor, so neurons in nature would certainly have differing activation functions. Perhaps not as different as the ones I experimented with, but noisy, distorted versions of a sigmoid or ReLU. It doesn't matter; it still works fine.
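To make this concrete, here are two of the alternatives I tried, sketched in the same style as relu above (my own code, not from the book):

import numpy as np

# Absolute value as an activation, with its gradient for backprop.
def absolute(z):
    return np.abs(z)

def absolute_gradient(z):
    return np.sign(z)  # -1 for negative z, +1 for positive, 0 at zero

# A periodic activation, with its gradient.
def sine(z):
    return np.sin(z)

def sine_gradient(z):
    return np.cos(z)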

Further, this makes me wonder whether that variation in activation functions in nature is itself a benefit. Have folks tried to build nets where every neuron has a different activation function? Perhaps that confers a training advantage, because not everything behaves in exactly the same way. I will try to explore this question, but first I need to extend the code to allow for multiple hidden layers.
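As a first stab at the idea, something like this might work. It is a hypothetical sketch, untested at scale: even-numbered hidden nodes get ReLU, odd-numbered ones get tanh, and the gradient mirrors the same per-column split (otherwise I would reintroduce the bug from above):

import numpy as np

# z has shape (batch size, hidden nodes). Even columns use ReLU,
# odd columns use tanh.
def mixed(z):
    out = np.empty_like(z)
    out[:, 0::2] = np.maximum(0.0, z[:, 0::2])  # ReLU columns
    out[:, 1::2] = np.tanh(z[:, 1::2])          # tanh columns
    return out

def mixed_gradient(z):
    grad = np.empty_like(z)
    grad[:, 0::2] = (z[:, 0::2] > 0) * 1.0        # ReLU derivative
    grad[:, 1::2] = 1.0 - np.tanh(z[:, 1::2])**2  # tanh derivative
    return grad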

One critique I do have: in my opinion, it would have been better to push the code further and extend it to multiple hidden layers rather than switch to libraries. The point of the book is programming it yourself, which allows full, unmitigated experimentation. I would have added one or two chapters extending the code, even if that had meant leaving out libraries altogether. NumPy should be fast enough to explore multilayer networks on an average single computer.
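For what it is worth, the extension I have in mind is not much code. A rough, hypothetical sketch of the forward pass, not the book's API:

import numpy as np

# Loop over a list of weight matrices, saving each layer's weighted
# sums (pre-activations) for the backward pass.
def forward(X, weights, activation):
    h = X
    pre_activations = []
    for w in weights[:-1]:
        a = h @ w                  # pre-activation of this hidden layer
        pre_activations.append(a)  # the gradient function needs these
        h = activation(a)
    return pre_activations, h @ weights[-1]  # linear output layer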
