Inverted Neural Network test
Lately, I became very interested in Machine Learning, so I finally got around to writing a simple neural network from scratch that recognises handwritten digits as an exercise (~95% accuracy with 10k training samples). For the fun of it, I then inverted the trained network: instead of classifying an image, it produces an image from the label I give it.
It's very curious to see how the network "sees" numbers. For example, the area it doesn't consider (outside of the white circle) becomes just noise, as it is irrelevant to the network. Some numbers are easier to recognise than others from a human perspective. It is fun to see what features the network "distilled" from the given data.
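The inversion trick described above can be sketched as gradient ascent on the input pixels while the trained weights stay frozen. The tiny two-layer network below (784 → 30 → 10, random weights as a stand-in for the actual trained classifier), the layer sizes, and the learning rate are all my hypothetical assumptions, not the author's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in weights for the trained digit classifier:
# 784 input pixels -> 30 hidden units (sigmoid) -> 10 classes (softmax).
W1 = rng.normal(0, 0.1, (30, 784))
b1 = np.zeros(30)
W2 = rng.normal(0, 0.1, (10, 30))
b2 = np.zeros(10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return hidden activations and softmax class probabilities."""
    h = sigmoid(W1 @ x + b1)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())
    return h, e / e.sum()

def dream(label, steps=200, lr=0.5):
    """Gradient ascent on the *input* pixels; the weights are frozen."""
    x = rng.uniform(0, 1, 784)          # start from pure noise
    for _ in range(steps):
        h, p = forward(x)
        # Gradient of log p[label] w.r.t. the logits: one-hot minus softmax.
        dlogits = -p
        dlogits[label] += 1.0
        # Backpropagate through the frozen layers down to the pixels.
        dh = W2.T @ dlogits
        dx = W1.T @ (dh * h * (1 - h))
        x = np.clip(x + lr * dx, 0, 1)  # keep pixels in a valid range
    return x

# A noise image gives roughly uniform class probabilities...
x0 = rng.uniform(0, 1, 784)
_, p0 = forward(x0)
# ...while the "dreamed" image pushes probability mass onto the target label.
img = dream(3)
_, p = forward(img)
print(f"p(3) before: {p0[3]:.3f}, after: {p[3]:.3f}")
```

Pixels that barely influence any class score get near-zero gradients, so they never move away from their random initial values, which matches the observation that the irrelevant border region stays as noise.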