Neural networks are a biologically inspired method of building computer programs that can learn and independently find connections in data. They do this using a few steps
Learning rates and loss function:
Mathematical notation used to compare the output to the true value. The goal is to minimize the loss function so that the output matches the true value (a difference of 0), and the deviation from the true value is used to work backwards and change the weights of the nodes in the network.
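A minimal sketch of one common loss function, mean squared error (MSE), chosen here for illustration; the notes don't name a specific loss:

```python
import numpy as np

def mse_loss(outputs, targets):
    # Average squared deviation between outputs and true values;
    # 0 means the outputs match the targets exactly.
    return np.mean((np.asarray(outputs) - np.asarray(targets)) ** 2)

print(mse_loss([1.0, 0.0], [1.0, 0.0]))  # exact match → loss of 0
print(mse_loss([0.1, 0.9], [1.0, 0.0]))  # far from targets → larger loss
```

The larger the deviation from the true values, the larger the loss, which is what gradient descent later works to shrink.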
Backpropagation:
The process by which the weights in a neural network are updated using the results of gradient descent → done after forward propagation, i.e. working backwards to update the weights based on the output rather than updating the output based on the weights
Once the output layer is reached after forward propagation, calculate the loss on all outputs against their true values (remember, the highest value in the output is the network's most probable answer)
loss function is minimized by following d(loss)/d(weights), the gradient of the loss with respect to the weights
gradient descent starts updating weights based on how far off certain outputs are from the true value
updates happen per batch or per input, hence the result of the loss function is the average loss over the outputs
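The steps above can be sketched for a hypothetical single-layer model: compute the average loss over a batch, take d(loss)/d(weights), and step the weights opposite the gradient (the learning rate `lr` scales the step size):

```python
import numpy as np

def gradient_descent_step(w, x_batch, y_batch, lr=0.1):
    # forward propagation: predictions for the whole batch
    preds = x_batch @ w
    # average (mean squared error) loss over the batch
    loss = np.mean((preds - y_batch) ** 2)
    # gradient of the average loss with respect to each weight
    grad = 2 * x_batch.T @ (preds - y_batch) / len(x_batch)
    # move weights against the gradient to reduce the loss
    return w - lr * grad, loss

# toy batch: repeated steps should drive the loss toward 0
x = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 3.0])
w = np.zeros(2)
for _ in range(200):
    w, loss = gradient_descent_step(w, x, y)
```

Each step uses the batch's average loss, matching the note that updates are per batch rather than per single output.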
input for node j in layer l is the weighted sum of the activation outputs from the previous layer l-1 → multiply the weights by the activation outputs of the previous layer to get the inputs to the next layer
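A sketch of that weighted sum for one layer, assuming a sigmoid activation (the notes don't specify which activation function is used):

```python
import numpy as np

def sigmoid(z):
    # squashes each weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(a_prev, W, b):
    # z[j] = sum over k of W[j, k] * a_prev[k] + b[j]:
    # node j's input is the weighted sum of layer l-1's activations
    z = W @ a_prev + b
    # activation outputs of layer l, fed forward to layer l+1
    return sigmoid(z)

a_prev = np.array([1.0, 2.0])   # activations from layer l-1
W = np.array([[0.5, -0.5],      # one row of weights per node in layer l
              [1.0,  0.0],
              [0.0,  1.0]])
b = np.zeros(3)
print(layer_forward(a_prev, W, b))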