Neural networks are a biologically inspired method of building computer programs that can learn and independently find connections in data. They do this using a few key pieces:

  1. Calculating the gradient and applying it to all nodes → backpropagation
  2. Dense layer weights and functions
  3. Learning rates and loss functions (MSE, for example)
  4. Batch normalization
  5. Regularization and learnable parameters
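To ground the pieces above, here is a minimal sketch of a forward pass through one dense layer in NumPy. The sizes, the ReLU activation, and the random initialization are all illustrative assumptions, not part of the notes:

```python
import numpy as np

# One dense (fully connected) layer: output = activation(W @ x + b).
# All sizes here are made up for illustration.
rng = np.random.default_rng(0)

x = rng.normal(size=3)        # 3 input features
W = rng.normal(size=(4, 3))   # 4 output nodes, each with 3 weights
b = np.zeros(4)               # one learnable bias per output node

def relu(z):
    # Common activation function: zero out negative values
    return np.maximum(z, 0.0)

out = relu(W @ x + b)
print(out.shape)  # (4,)
```

Each output node computes a weighted sum of all inputs plus a bias, then passes it through a nonlinear activation; stacking such layers gives the network its depth.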

Learning rates and loss function:

A loss function is a mathematical measure of how far the network's output is from the true value. The goal of training is to minimize the loss so that the output matches the true value (a difference of 0); the deviation from the true value is then used to work backwards and adjust the weights and functions of the nodes in the network.
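Mean squared error (MSE), mentioned above, can be written in a few lines. The sample values below are made up for illustration:

```python
import numpy as np

# Mean squared error: the average squared difference between
# predictions and true values. A loss of 0 means a perfect match.
def mse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

predictions = np.array([2.5, 0.0, 2.0])
targets     = np.array([3.0, -0.5, 2.0])
loss = mse(predictions, targets)
print(loss)  # ≈ 0.1667
```

The learning rate then controls how large a step the weights take in the direction that reduces this number.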

Backpropagation:

The process by which the weights of a neural network are updated using the gradients found by gradient descent → performed after forward propagation, i.e. working backwards to update the weights based on the output, rather than updating the output based on the weights.
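The forward pass, loss, gradient, and update steps can be sketched end to end on a single linear "node". The toy data (fitting y = 2x + 1), the learning rate, and the iteration count are all illustrative assumptions:

```python
import numpy as np

# Toy data: learn y = 2x + 1 with one linear node y = w*x + b
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0   # learnable parameters, starting from zero
lr = 0.1          # learning rate

for _ in range(1000):
    y_pred = w * x + b              # forward propagation
    error = y_pred - y              # deviation from the true values
    # Gradients of the MSE loss with respect to w and b (chain rule)
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Gradient descent update: step against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

In a real network the same chain-rule computation is applied layer by layer from the output back to the input, which is where the name backpropagation comes from.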