In class last week, we learned about neural networks. Neural networks are made up of some number of input nodes connected to hidden layers, which in turn feed the output nodes. The input nodes are where the information enters, such as text, numbers, or pixels. The information then flows to the hidden layers, which is where the thinking occurs: the hidden layers apply mathematical transformations and detect patterns. From there, the output nodes produce an outcome based on the information provided.
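To make that flow concrete, here is a minimal sketch of one forward pass in Python. The layer sizes (four inputs, three hidden nodes, two outputs), the random weights, and the use of numpy are my own illustrative assumptions, not the exact example from class.

import numpy as np

# Minimal sketch of one forward pass: inputs -> hidden layer -> outputs.
# Layer sizes and random starting weights here are illustrative assumptions.
rng = np.random.default_rng(0)

inputs = np.array([1.0, 0.0, 1.0, 1.0])   # e.g. four pixel values entering the network
w_hidden = rng.normal(size=(4, 3))        # weights connecting inputs to 3 hidden nodes
w_output = rng.normal(size=(3, 2))        # weights connecting hidden nodes to 2 outputs

def sigmoid(x):
    # squashes any number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

hidden = sigmoid(inputs @ w_hidden)       # "thinking" step: transform and detect patterns
outputs = sigmoid(hidden @ w_output)      # outcome based on the information provided
print(outputs)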
In the digit example, a 1 in a pixel means there is color in it, and a 0 means the pixel is blank. From there, you put weights on the connections between nodes, adjusting the weights to make the prediction better. The goal is to have the prediction as close to zero or one as possible. There are ten possible outputs, the digits 0 through 9. The output is calculated by multiplying each input, which is a zero or a one, by its weight and adding the results. From there, you use the activation function to evaluate the answer from the neural network. This process can be tedious because you have to keep changing the weights to get a result that is as close to zero or one as you can possibly get. This is called training: we want the predictions as close to 0 or 1 as possible to minimize the number of mistakes in a real situation. When the equation gives us the wrong answer, we use backpropagation to adjust the weights in order to get the desired outcome.
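Here is a small sketch of that training loop, simplified to a single output neuron deciding "is this digit a 1 or not?" instead of all ten digits. The pixel pattern, learning rate, number of steps, and the simple error-times-input weight update are assumptions I chose for illustration; they stand in for the full backpropagation we discussed.

import numpy as np

# Simplified training loop: multiply 0/1 inputs by weights, run the activation
# function, then nudge the weights whenever the answer is wrong.
pixels = np.array([0, 1, 0, 1, 1, 0])         # 1 = pixel has color, 0 = pixel is blank
target = 1.0                                   # the answer we want the network to give
weights = np.zeros(6)                          # one weight per pixel connection
learning_rate = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(20):
    weighted_sum = pixels @ weights            # multiply each input (0 or 1) by its weight and add
    prediction = sigmoid(weighted_sum)         # activation function pushes the result toward 0 or 1
    error = target - prediction                # how wrong was the answer?
    weights += learning_rate * error * pixels  # adjust the weights to reduce the error

print(prediction)                              # ends up close to 1 after training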
Learning how neural networks work was an interesting topic to cover, and the example we used helped visualize everything we had talked about.