
Neural Network Digit Recognition
Hypothesis
Can a program learn to read handwriting from scratch, and then generalize that skill to handwriting it has never seen? In this experiment, a three-layer neural network is trained on twenty examples of each digit from 0 to 9, using backpropagation to correct its mistakes layer by layer over 30,000 training rounds. By the end, the network identifies the practice digits with 90% accuracy and a mean squared error of just 0.01. The real test comes next: fresh handwritten digits from writers who had no part in the training set. The network still recognizes 83% of those new digits correctly.

Science Concepts Learned

The network's success on digits it never saw during training is evidence that adjusting connection weights through repeated error correction builds genuine learning, not just memorization of the training examples.
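To make the setup concrete, here is a minimal sketch of a three-layer network trained with backpropagation and mean squared error, in the spirit of the experiment above. Only the overall structure comes from the writeup (three layers, 20 examples per digit, 30,000 rounds, MSE); the 28x28 input size, hidden-layer width, learning rate, and random placeholder data are illustrative assumptions, not the original project's parameters.

```python
# Sketch of a three-layer digit classifier trained by backpropagation.
# Layer sizes, learning rate, and the random placeholder data are
# assumptions for illustration; real pixel data would replace X.
import numpy as np

rng = np.random.default_rng(0)

# Network dimensions: 784 input pixels -> 64 hidden units -> 10 digit outputs.
n_in, n_hidden, n_out = 784, 64, 10
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder training set: 20 examples per digit, as in the experiment.
X = rng.random((200, n_in))
labels = np.repeat(np.arange(10), 20)
Y = np.eye(10)[labels]              # one-hot targets

lr = 0.5
for step in range(30_000):          # 30,000 training rounds
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    y = sigmoid(h @ W2 + b2)        # output activations

    # Mean squared error over all outputs.
    err = y - Y
    mse = np.mean(err ** 2)

    # Backward pass: propagate the error layer by layer.
    dy = err * y * (1 - y)          # output delta (sigmoid derivative)
    dh = (dy @ W2.T) * h * (1 - h)  # hidden delta

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ dy / len(X)
    b2 -= lr * dy.mean(axis=0)
    W1 -= lr * X.T @ dh / len(X)
    b1 -= lr * dh.mean(axis=0)

    if step % 5_000 == 0:
        acc = np.mean(y.argmax(axis=1) == labels)
        print(f"round {step}: mse={mse:.4f} accuracy={acc:.0%}")
```

Running the same forward pass on digits held out of training, and comparing y.argmax(axis=1) against their true labels, is how a generalization figure like the experiment's 83% on unseen handwriting would be measured.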
Method & Materials