Side-by-side comparison: 10 vs 100 vs 500 hidden nodes

One drawing → three models respond simultaneously. Makes it painfully obvious what capacity buys you (and when it’s just buying you overconfidence).
Shortcuts: C clear the evidence, P predict your destiny
TF.js: loading… Dataset: pending… Models: untrained
Draw here (mouse or touch)
32×32 preview (what the network sees)
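The preview is just the drawing shrunk onto a 32×32 grid. A minimal sketch of that shrink step, assuming a square grayscale pixel array and a hypothetical 256-pixel source canvas (the demo's real canvas size and pipeline may differ):

```javascript
// Shrink a square grayscale canvas to the 32x32 grid the network sees,
// by averaging each block of source pixels (simple average pooling).
// src = 256 is an assumption; any multiple of 32 works the same way.
function downsampleTo32(pixels, src = 256) {
  const out = new Float32Array(32 * 32);
  const block = src / 32; // source pixels per output cell (8 when src = 256)
  for (let oy = 0; oy < 32; oy++) {
    for (let ox = 0; ox < 32; ox++) {
      let sum = 0;
      for (let dy = 0; dy < block; dy++) {
        for (let dx = 0; dx < block; dx++) {
          sum += pixels[(oy * block + dy) * src + (ox * block + dx)];
        }
      }
      out[oy * 32 + ox] = sum / (block * block); // mean intensity of the block
    }
  }
  return out;
}
```

This is also why thick lines help: a thin stroke can average away to almost nothing in a 32×32 cell.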
Training mode
epochs: —
Train all three models
Run log
How to use (aka: teach the machine your handwriting isn’t a cryptid)
1) Draw a digit. Thick lines help. If you draw with the elegance of a spider on espresso… results may vary.

2) Click “Train”. Your browser will do push-ups, training all three models (10, 100, and 500 hidden neurons) at once. Watch the run log for progress.

3) View results. Each panel shows: the predicted digit, confidence bars for 0–9, and the single hidden neuron with the highest activation (the one screaming the loudest).

4) Want to train again? Refresh the page and repeat; a reload discards the old models and frees their memory. (Browsers are amazing… until they remember everything and then forget how to breathe.)
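Under the hood, each panel's readout comes from the same computation: one hidden layer (ReLU) feeding a 10-way softmax, where the hidden size (10 / 100 / 500) is the only knob that differs between the three models. A plain-JS sketch of that forward pass, with hypothetical weight arrays standing in for whatever training produced (the demo itself runs on TF.js):

```javascript
// One forward pass of a single-hidden-layer classifier.
// input: flattened pixels; W1[i][j], b1[j]: input -> hidden; W2[j][k], b2[k]: hidden -> logits.
// Returns everything a panel shows: confidence bars, predicted digit, loudest neuron.
function forward(input, W1, b1, W2, b2) {
  const hidden = b1.map((b, j) =>
    Math.max(0, input.reduce((s, x, i) => s + x * W1[i][j], b)) // ReLU activation
  );
  const logits = b2.map((b, k) =>
    hidden.reduce((s, h, j) => s + h * W2[j][k], b)
  );
  const max = Math.max(...logits);
  const exps = logits.map(z => Math.exp(z - max)); // numerically stable softmax
  const Z = exps.reduce((a, e) => a + e, 0);
  const probs = exps.map(e => e / Z);
  return {
    probs,                                          // the confidence bars
    digit: probs.indexOf(Math.max(...probs)),       // the predicted digit
    topNeuron: hidden.indexOf(Math.max(...hidden)), // the one screaming the loudest
  };
}
```

Nothing about the readout changes with capacity; only the length of `hidden` does.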

What to look for
10 hidden nodes: often “hedges” (flatter, more uniform confidence bars) → underfitting; too little capacity to cleanly separate ten digit classes.
100 hidden nodes: sharper, more decisive confidence → enough representation capacity for the task.
500 hidden nodes: sometimes better, sometimes diminishing returns; with limited training data, extra capacity can memorize rather than generalize → capacity vs. generalization.
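If you want "hedging" as a number rather than a vibe, the entropy of the confidence bars works: a perfectly flat distribution over 10 digits maxes out at log2(10) ≈ 3.32 bits, while a confident spike sits near 0. A small sketch (plain JS; the function name is mine, not the demo's):

```javascript
// Shannon entropy of a probability distribution, in bits.
// Flat bars -> high entropy (hedging); one tall bar -> low entropy (confident).
function entropyBits(probs) {
  return probs.reduce((h, p) => (p > 0 ? h - p * Math.log2(p) : h), 0);
}

const hedging = entropyBits(Array(10).fill(0.1));            // flat: ~3.32 bits
const confident = entropyBits([0.91, ...Array(9).fill(0.01)]); // spiky: well under 1 bit... ish
```

Comparing this value across the three panels turns the underfitting pattern into something you can track run to run.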

Model A: 10 hidden

acc: —
Top neuron:

Model B: 100 hidden

acc: —
Top neuron:

Model C: 500 hidden

acc: —
Top neuron: