For those unfamiliar with machine learning, it is most often implemented by creating and training a neural network. The term neural network is generic and covers a significant number of distinct subcategories, whose names are normally used to identify the exact type of network being implemented. These networks are loosely modelled on the cerebral cortex in that each neuron receives inputs, processes them and passes the result on to other neurons. A neural network therefore typically consists of an input layer, several hidden internal layers and an output layer.
At the simplest level, each neuron applies a weight to every input and then performs a transfer function on the sum of the weighted inputs. The result is passed on either to another hidden layer or to the output layer. Neural networks which pass the output of one stage to the next without forming a cycle are called Feed-forward Neural Networks (FNN), while those which contain directed cycles providing feedback, such as the Elman network, are called Recurrent Neural Networks (RNN).
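The neuron and feed-forward behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the sigmoid transfer function and the layer representation (a list of weight/bias pairs) are assumptions chosen for clarity.

```python
import math

def neuron(inputs, weights, bias):
    """Sum the weighted inputs plus a bias, then apply a transfer function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid transfer function

def feed_forward(inputs, layers):
    """Pass each layer's outputs on as the next layer's inputs (no cycles)."""
    activations = inputs
    for layer in layers:  # each layer is a list of (weights, bias) pairs
        activations = [neuron(activations, w, b) for w, b in layer]
    return activations
```

For example, `feed_forward([1.0, 0.0], [[([0.5, 0.5], 0.0), ([1.0, -1.0], 0.0)], [([1.0, 1.0], -1.0)]])` evaluates a tiny network with two inputs, one hidden layer of two neurons and a single output neuron. An RNN would differ only in feeding some outputs back as inputs on the next time step.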
One very commonly used term in machine learning is the Deep Neural Network (DNN). A DNN is a neural network with several hidden layers, enabling more complex machine learning tasks to be implemented. A neural network must be trained to determine the values of the weights and biases used within each layer. During training, the network is presented with a number of example inputs, both correct and incorrect, and an error function measuring the difference between the actual and desired outputs is used to steer the network towards the desired performance. Training a DNN may require a very large data set to achieve the required performance.
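The training process outlined above can be sketched for a single neuron. This is a simplified illustration using gradient descent on a squared-error function with a sigmoid transfer function; the learning rate, epoch count and single-neuron scope are assumptions made for brevity, whereas a real DNN would backpropagate the error through every hidden layer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    """Fit one neuron's weights and bias by descending the gradient of
    a squared-error function over labelled training samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)
            # derivative of the squared error through the sigmoid
            delta = (y - target) * y * (1.0 - y)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b
```

Trained on the four labelled input patterns of a logical OR, for instance, the neuron learns weights that separate the correct and incorrect cases; the same principle, applied layer by layer, underlies DNN training.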