Deep Learning Neural Networks

Of the major Machine Learning techniques, Neural Networks stand out as a platform in their own right. They can be seen as a broad generalization of many earlier techniques, including linear regression, stochastic gradient descent, digital filtering, and negative feedback.

A Neural Network can be characterized simply as a construct that implements regression or classification by applying nonlinear transformations to linear combinations of raw input features.
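This definition can be sketched in a few lines of NumPy. The weights, biases, and input below are hypothetical toy values chosen only to illustrate the shape of the computation:

```python
import numpy as np

def layer(x, W, b):
    """One Neural Network layer: a nonlinear transformation (here the
    logistic sigmoid) of linear combinations of the raw input features."""
    z = W @ x + b                       # linear combinations of the inputs
    return 1.0 / (1.0 + np.exp(-z))     # nonlinear "squashing" transfer function

# Hypothetical toy values, for illustration only.
x = np.array([0.5, -1.2, 3.0])          # raw input features
W = np.array([[0.1, 0.4, -0.2],
              [0.7, -0.3, 0.5]])        # weighting coefficients (2 neurons, 3 inputs)
b = np.array([0.05, -0.1])              # biases
print(layer(x, W, b))                   # two activations, each in (0, 1)
```

Stacking several such layers, each feeding its activations to the next, yields the "deep" in Deep Learning.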

Something Magical

What makes Neural Networks seem magical to many is that they can learn through a mechanism called backpropagation: the desired outcome (the label) is compared with the current output, and the error (the difference) is fed back through the neuron layers, adjusting the weighting coefficients and biases to drive down the error. In short, the Neural Network learns from its mistakes.
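The simplest instance of this error-feedback loop is a single sigmoid neuron trained by gradient descent. The sketch below is a minimal illustration, not a production training loop: it teaches one neuron the logical AND function, using the cross-entropy loss so the gradient with respect to the pre-activation is just the error itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: teach a single sigmoid neuron the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])   # desired outcomes (labels)

rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=2)    # weighting coefficients
b = 0.0                              # bias
lr = 1.0                             # learning rate

for _ in range(2000):
    out = sigmoid(X @ w + b)         # forward pass: current output
    err = out - y                    # error vs. the desired outcome
    # Feed the error back: with cross-entropy loss, the gradient with
    # respect to the pre-activation equals the error, so the weights
    # and bias self-adjust in proportion to their contribution.
    w -= lr * (X.T @ err) / len(X)
    b -= lr * err.mean()

print(np.round(sigmoid(X @ w + b)))  # → [0. 0. 0. 1.]
```

In a multi-layer network the same error signal is propagated further back through each layer via the chain rule, which is what the name backpropagation refers to.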

Neural Networks have come a long way since their early form, the single-layer perceptron, which could not implement even a simple exclusive-OR (XOR) function. Today's Deep Learning Neural Networks consist of multiple layers and are extraordinarily expressive, with a large selection of transfer functions.
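The XOR limitation disappears as soon as one hidden layer is added. The sketch below uses hand-chosen weights (no training) purely to show that two layers of threshold units can represent XOR, which no single-layer perceptron can:

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)     # hard-threshold transfer function

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hidden layer: the first unit computes OR, the second computes AND.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output unit: OR AND NOT(AND), which is exactly XOR.
w2 = np.array([1.0, -1.0])
b2 = -0.5

h = step(X @ W1.T + b1)              # hidden-layer activations
out = step(h @ w2 + b2)
print(out)                           # → [0. 1. 1. 0.]
```

The hidden layer remaps the inputs into a space where the classes become linearly separable, which is the essential trick behind multi-layer networks.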

The wealth of Neural Network designs includes:

  • Dynamic
  • Associative
  • Competitive
  • Convolutional
  • Recurrent

They are applied to a wide array of challenging real-world problems that span just about every human endeavor:

  • Pattern Recognition (including vision, speech, and handwriting)
  • Machine Translation
  • Object Classification
  • Gaming

Automatic Feature Extraction

Why choose the Neural Network approach when a whole slew of well-known Machine Learning techniques can be computed far more efficiently? Its self-learning capability is easily the most convincing argument. Take common linear regression, for example: the burden is on the practitioner to correctly identify the predictor features before feeding them into the model. More often than not, identifying the relevant features is more than half the battle, even with the help of domain experts. This becomes especially challenging in IoT applications, where a great many sensor signals are collected at high rates. On top of identifying the relevant signals, extracting meaningful sequences and patterns from those signals can be just as difficult. Neural Networks, with their self-learning mechanisms, can automatically fish out the right combinations of input signals and identify the composite event patterns of interest. See the Complex Event Processing section.

Challenges in Computation

Training Deep Learning Neural Networks is computationally expensive, and that load has implications for infrastructure investments. Practitioners have leveraged a combination of GPU hardware and parallel programming techniques to cut training times from days to hours. Multiple iterations and long training cycles are the rule rather than the exception. Experienced practitioners often rely on intuition when deciding whether to continue an existing run or to fine-tune and pivot the Neural Network design midway through. For these reasons, Deep Learning Neural Networks remain the high-end option, reserved for the most challenging Machine Learning problems.
