CoderDennis
Machine Learning in Elixir: chapter 7 CNN model accuracy no better than MLP (page 160)
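For context, my cnn_model and the training call look roughly like this (paraphrased rather than copied from the book; the filter counts and dense sizes match the parameter shapes printed below, but the activations, pooling, loss, and optimizer are from memory):

# Not the book's exact code, just the shape of what I'm running.
# train_pipeline is the batched {image, label} stream built earlier in the chapter.
cnn_model =
  Axon.input("images")
  |> Axon.conv(32, kernel_size: {3, 3}, activation: :relu)
  |> Axon.max_pool(kernel_size: {2, 2}, strides: [2, 2])
  |> Axon.conv(64, kernel_size: {3, 3}, activation: :relu)
  |> Axon.max_pool(kernel_size: {2, 2}, strides: [2, 2])
  |> Axon.conv(128, kernel_size: {3, 3}, activation: :relu)
  |> Axon.max_pool(kernel_size: {2, 2}, strides: [2, 2])
  |> Axon.flatten()
  |> Axon.dense(128, activation: :relu)
  |> Axon.dense(1, activation: :sigmoid)

cnn_trained_model_state =
  cnn_model
  |> Axon.Loop.trainer(:binary_cross_entropy, :adam)
  |> Axon.Loop.metric(:accuracy)
  |> Axon.Loop.run(train_pipeline, %{}, epochs: 5, compiler: EXLA)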
When training the cnn_model, I get the following output:
Epoch: 0, Batch: 150, accuracy: 0.4985513 loss: 7.6424022
Epoch: 1, Batch: 163, accuracy: 0.4992854 loss: 7.6783161
Epoch: 2, Batch: 176, accuracy: 0.5000441 loss: 7.6865749
Epoch: 3, Batch: 139, accuracy: 0.4983259 loss: 7.6991839
Epoch: 4, Batch: 152, accuracy: 0.4988766 loss: 7.6995916
%{
"conv_0" => %{
"bias" => #Nx.Tensor<
f32[32]
EXLA.Backend<host:0, 0.1357844422.1979580433.82179>
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN]
>,
"kernel" => #Nx.Tensor<
f32[3][3][3][32]
EXLA.Backend<host:0, 0.1357844422.1979580433.82180>
[
[
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN],
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
],
...
],
...
]
>
},
"conv_1" => %{
"bias" => #Nx.Tensor<
f32[64]
EXLA.Backend<host:0, 0.1357844422.1979580433.82181>
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -0.0071477023884654045, NaN, NaN, NaN, NaN, 0.0, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...]
>,
"kernel" => #Nx.Tensor<
f32[3][3][32][64]
EXLA.Backend<host:0, 0.1357844422.1979580433.82182>
[
[
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
],
...
],
...
]
>
},
"conv_2" => %{
"bias" => #Nx.Tensor<
f32[128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82183>
[0.0, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 0.005036031361669302, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...]
>,
"kernel" => #Nx.Tensor<
f32[3][3][64][128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82184>
[
[
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
],
...
],
...
]
>
},
"dense_0" => %{
"bias" => #Nx.Tensor<
f32[128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82185>
[NaN, -0.005992305930703878, -0.006005365401506424, -0.004664595704525709, NaN, NaN, NaN, -5.619042203761637e-4, 0.0, NaN, -0.005999671295285225, -6.131592726887902e-6, NaN, 0.0, NaN, 0.0, 0.0, NaN, NaN, -0.006002828478813171, -0.00600335793569684, 0.0, NaN, NaN, NaN, -0.006002923008054495, -0.006005282513797283, -0.00600528996437788, -0.0060048955492675304, -0.006004981696605682, NaN, -0.006004655733704567, -0.006005233619362116, NaN, -0.006004724185913801, -0.006005335133522749, -0.006005051080137491, -0.006004408933222294, NaN, -0.006005355156958103, 0.0, -0.006005344912409782, 0.0, NaN, -0.005991040728986263, ...]
>,
"kernel" => #Nx.Tensor<
f32[18432][128]
EXLA.Backend<host:0, 0.1357844422.1979580433.82186>
[
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, ...],
...
]
>
},
"dense_1" => %{
"bias" => #Nx.Tensor<
f32[1]
EXLA.Backend<host:0, 0.1357844422.1979580433.82187>
[NaN]
>,
"kernel" => #Nx.Tensor<
f32[128][1]
EXLA.Backend<host:0, 0.1357844422.1979580433.82188>
[
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
[NaN],
...
]
>
}
}
When I evaluate the two models, the mlp_model reports Batch: 6, accuracy: 0.5078125 and this cnn_model reports Batch: 6, accuracy: 0.4944196, so the CNN is actually slightly worse instead of the expected “significantly better.”
I reviewed all of my code against the book and couldn’t find anything that doesn’t match.
I’m guessing the NaNs in the trained model state are the problem, but I’m not sure how to fix them.
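Something like this confirms where the NaNs are (a quick sketch that assumes the trained state is the plain nested map of tensors printed above):

# Count NaN entries per parameter tensor, per layer.
cnn_trained_model_state
|> Enum.map(fn {layer_name, params} ->
  nan_counts =
    Map.new(params, fn {param_name, tensor} ->
      {param_name, tensor |> Nx.is_nan() |> Nx.sum() |> Nx.to_number()}
    end)

  {layer_name, nan_counts}
end)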
Marked As Solved
CoderDennis
Switching to Axon 0.7 resolved the issue.
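For anyone else hitting this, the change is just a version bump in the setup cell, e.g. in Livebook (the Nx/EXLA versions here are approximate; use whatever the Axon 0.7 release expects):

# Livebook setup cell with Axon pinned to 0.7.
Mix.install([
  {:axon, "~> 0.7"},
  {:nx, "~> 0.9"},
  {:exla, "~> 0.9"}
])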