RADIAL BASIS FUNCTION NETWORKS (RBFNs)

Radial Basis Function Networks (RBFNs) are feedforward neural networks that use radial basis functions as activation functions. They have an input layer, a hidden layer, and an output layer, and are commonly used for classification, regression, and time-series prediction.

RBFNs work in the following steps: First, RBFNs classify input by measuring its similarity to examples from the training set. Second, an input vector feeds into the input layer, which is followed by a layer of RBF neurons. Third, the neurons in the hidden layer have Gaussian transfer functions, whose outputs are inversely proportional to the distance from the neuron's center. Fourth, the output layer computes a weighted sum of the hidden-layer activations and contains one node for each category or class of data. Fifth, the network's output is therefore a linear combination of the input's radial basis functions and the neurons' parameters.

See this example of an RBFN:
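Complementing the diagram referenced above, the following NumPy sketch shows the basic RBFN computation on toy data; the data, the number of hidden centers, and the gamma value are illustrative assumptions, not values from the original.

```python
import numpy as np

def rbf_design_matrix(X, centers, gamma):
    """Gaussian RBF activations: exp(-gamma * ||x - c||^2)."""
    # Pairwise squared distances between inputs and hidden-neuron centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))              # toy 1-D regression data
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Hidden-neuron centers: here simply random training points (k-means is common)
centers = X[rng.choice(len(X), size=10, replace=False)]
Phi = rbf_design_matrix(X, centers, gamma=1.0)      # hidden-layer outputs

# Output layer: linear combination of RBF activations, fit by least squares
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_pred = Phi @ w
print("training MSE:", np.mean((y - y_pred) ** 2))
```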

DEEP BELIEF NETWORKS (DBNs)

Deep Belief Networks (DBNs) are generative models made up of multiple layers of stochastic, latent variables. The latent variables take binary values and are frequently referred to as hidden units. DBNs are a stack of Restricted Boltzmann Machines (RBMs) with connections between the layers, and each RBM layer communicates with both the preceding and following layers. DBNs are used for image recognition, video recognition, and motion-capture data.

DBNs work in the following four steps: First, DBNs are trained with greedy learning algorithms; the greedy learning algorithm uses a layer-by-layer approach to learn the top-down, generative weights. Second, DBNs run Gibbs sampling on the top two hidden layers; this stage effectively draws a sample from the RBM defined by those two layers. Third, DBNs draw a sample from the visible units using a single pass of ancestral sampling through the rest of the model. Fourth, the values of the latent variables in every layer can then be inferred by a single bottom-up pass.

An example of DBN architecture is shown below:
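As a rough sketch of the greedy layer-wise idea, the pipeline below stacks scikit-learn's BernoulliRBM layers and adds a logistic-regression head; the toy data, layer sizes, and learning rates are assumptions for illustration, and the sketch omits the Gibbs-sampling and generative stages described above.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy binary data standing in for, e.g., binarized image pixels
rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)
y = (X[:, :32].sum(axis=1) > X[:, 32:].sum(axis=1)).astype(int)

# Greedy layer-wise stack: each RBM trains on the hidden
# representation produced by the layer below it.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # supervised head on top
])
dbn.fit(X, y)
print("training accuracy:", dbn.score(X, y))
```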

RESTRICTED BOLTZMANN MACHINES (RBMs)

Restricted Boltzmann Machines (RBMs), invented by Geoffrey Hinton, are stochastic neural networks that can learn from a probability distribution over a set of inputs. This deep learning algorithm is used for dimensionality reduction, classification, regression, collaborative filtering, feature learning, and topic modeling. RBMs are the fundamental building blocks of DBNs. RBMs consist of two layers: visible units and hidden units. Each visible unit is connected to every hidden unit. RBMs have a bias unit that is connected to all visible and hidden units, but they have no output nodes.

RBMs have two phases: the forward pass and the backward pass. First, in the forward pass, RBMs accept the inputs and translate them into a set of numbers that encodes the inputs. Second, RBMs combine each input with an individual weight and one overall bias, and the algorithm passes the result to the hidden layer. Third, in the backward pass, RBMs take that set of numbers and translate it back to form the reconstructed inputs. Fourth, RBMs combine each activation with an individual weight and the overall bias and pass the result to the visible layer for reconstruction. Fifth, at the visible layer, the RBM compares the reconstruction with the original input to assess the quality of the result.

The following is a diagram of how RBMs work:
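The NumPy sketch below walks through a single forward and backward pass of an RBM and compares the reconstruction with the original input; the layer sizes, random weights, and random input are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3

# Model parameters: one weight per visible-hidden pair, plus biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

v = rng.integers(0, 2, size=n_visible).astype(float)   # an input vector

# Forward pass: encode the input as hidden-unit activations
h_prob = sigmoid(v @ W + b_hidden)
h = (rng.random(n_hidden) < h_prob).astype(float)       # stochastic binary units

# Backward pass: reconstruct the visible layer from the hidden code
v_recon = sigmoid(h @ W.T + b_visible)

# Compare the reconstruction with the original input
print("input:         ", v)
print("reconstruction:", np.round(v_recon, 2))
print("reconstruction error:", np.mean((v - v_recon) ** 2))
```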

SELF-ORGANIZING MAPS (SOMs)

Self-Organizing Maps (SOMs), created by Professor Teuvo Kohonen, enable data visualization by reducing the dimensions of data through self-organizing artificial neural networks. Data visualization attempts to address the problem that humans cannot easily visualize high-dimensional data. SOMs are designed to help users make sense of this multidimensional data.

SOMs work in the following five steps: First, SOMs initialize weights for each node and select a vector at random from the training data. Second, SOMs examine every node to determine which one's weights are most similar to the input vector; the winning node is referred to as the Best Matching Unit (BMU). Third, SOMs determine the BMU's neighborhood, whose size gradually shrinks over time. Fourth, SOMs move the winning node's weights toward the sample vector; the closer a node is to the BMU, the more its weights change. Fifth, the greater the distance between a neighbor and the BMU, the less it learns. SOMs repeat step two for N iterations.

A diagram of color-coded input vectors is shown below. This data is fed into a SOM, which converts it to a 2D map of RGB values and then separates and categorizes the different colors.
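As a rough illustration of that color example, the NumPy sketch below trains a small SOM on random RGB vectors; the grid size, learning-rate schedule, and neighborhood radius are illustrative choices, not values from the original.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w = 20, 20
weights = rng.random((grid_h, grid_w, 3))        # one RGB weight vector per node
colors = rng.random((500, 3))                    # training data: random RGB colors

n_iter, lr0, radius0 = 2000, 0.5, max(grid_h, grid_w) / 2
for t in range(n_iter):
    x = colors[rng.integers(len(colors))]        # pick a random sample
    lr = lr0 * np.exp(-t / n_iter)               # decaying learning rate
    radius = radius0 * np.exp(-t / n_iter)       # shrinking neighborhood

    # Best Matching Unit: node whose weights are closest to the sample
    d = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)

    # Update the BMU and its neighborhood; farther nodes learn less
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    influence = np.exp(-grid_dist2 / (2 * radius ** 2))
    weights += lr * influence[..., None] * (x - weights)

# After training, nearby nodes hold similar colors; plotting `weights`
# as an image shows the colors grouped into smooth regions.
```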