
CoderDennis
Machine Learning in Elixir: Chapter 7 CNN model accuracy no better than MLP (page 160)
When training the cnn_model, I get the following output:
Epoch: 0, Batch: 150, accuracy: 0.4985513 loss: 7.6424022
Epoch: 1, Batch: 163, accuracy: 0.4992854 loss: 7.6783161
Epoch: 2, Batch: 176, accuracy: 0.5000441 loss: 7.6865749
Epoch: 3, Batch: 139, accuracy: 0.4983259 loss: 7.6991839
Epoch: 4, Batch: 152, accuracy: 0.4988766 loss: 7.6995916
%{
"conv_0" => %{
"bias" => #Nx.Tensor<
f32[32]
EXLA.Backend<host:0, 0.1357844422.1979580433.82179>
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN]
>,
"kernel" => #Nx.Tensor<
f32[3][3][3][32]
EXLA.Backend<host:0, 0.1357844422.1979580433.82180>
[
[
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN],
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
],
...
],
...
]
>
},
"conv_1" => %{
"bias" => #Nx.Tensor<
f32[64]
EXLA.Backend<host:0, 0.1357844422.1979580433.82181>
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -0.0071477023884654045, NaN, NaN, NaN, NaN, 0.0, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...]
>,
"kernel" => #Nx.Tensor<
f32[3][3][32][64]
EXLA.Backend<host:0, 0.1357844422.1979580433.82182>
[
[
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
],
...
],
...
]
>
},
"conv_2" => %{
"bias" => #Nx.Tensor<
f32[128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82183>
[0.0, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 0.005036031361669302, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...]
>,
"kernel" => #Nx.Tensor<
f32[3][3][64][128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82184>
[
[
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
],
...
],
...
]
>
},
"dense_0" => %{
"bias" => #Nx.Tensor<
f32[128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82185>
[NaN, -0.005992305930703878, -0.006005365401506424, -0.004664595704525709, NaN, NaN, NaN, -5.619042203761637e-4, 0.0, NaN, -0.005999671295285225, -6.131592726887902e-6, NaN, 0.0, NaN, 0.0, 0.0, NaN, NaN, -0.006002828478813171, -0.00600335793569684, 0.0, NaN, NaN, NaN, -0.006002923008054495, -0.006005282513797283, -0.00600528996437788, -0.0060048955492675304, -0.006004981696605682, NaN, -0.006004655733704567, -0.006005233619362116, NaN, -0.006004724185913801, -0.006005335133522749, -0.006005051080137491, -0.006004408933222294, NaN, -0.006005355156958103, 0.0, -0.006005344912409782, 0.0, NaN, -0.005991040728986263, ...]
>,
"kernel" => #Nx.Tensor<
f32[18432][128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82186>
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
]
>
},
"dense_1" => %{
"bias" => #Nx.Tensor<
f32[1]
EXLA.Backend<host:0, 0.1357844422.1979580433.82187>
[NaN]
>,
"kernel" => #Nx.Tensor<
f32[128][1]
EXLA.Backend<host:0, 0.1357844422.1979580433.82188>
[
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
...
]
>
}
}
The mlp_model evaluated at Batch: 6, accuracy: 0.5078125, while this cnn_model evaluated at Batch: 6, accuracy: 0.4944196, so it came out slightly worse instead of the expected "significantly better."
I reviewed all the code to make sure I hadn't missed anything, but I couldn't find anything that didn't match the book.
I'm guessing the NaNs in the trained model state are the problem, but I'm not sure how to fix them.
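A quick way to quantify the blow-up is to count the NaNs per layer. This is a rough sketch, assuming trained_model_state is the plain %{"layer" => %{"param" => tensor}} map printed above rather than an %Axon.ModelState{} struct:

trained_model_state
|> Enum.map(fn {layer_name, params} ->
  nan_count =
    params
    |> Enum.map(fn {_param_name, tensor} ->
      # Nx.is_nan/1 returns 1 where an element is NaN, 0 elsewhere.
      tensor |> Nx.is_nan() |> Nx.sum() |> Nx.to_number()
    end)
    |> Enum.sum()

  {layer_name, nan_count}
end)
|> IO.inspect(label: "NaN parameters per layer")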
Marked As Solved

CoderDennis
Switching to Axon 0.7 resolved the issue.
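In case it helps anyone else, that only meant bumping the Axon dependency in the Livebook setup cell and re-running the notebook. A minimal sketch of the setup cell; the nx and exla versions here are assumptions, not the book's pinned versions, so use whatever releases match your Axon 0.7 install:

Mix.install([
  {:axon, "~> 0.7"},
  {:nx, "~> 0.9"},
  {:exla, "~> 0.9"}
])

Changing the dependency list means reconnecting the Livebook runtime, since Mix.install/2 can't be re-run with different deps in the same session.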