At the highest conceptual level, deep learning is no different from supervised machine learning. Data scientists start with a labeled data set to train a model using an algorithm and, hopefully, end up with a model that is accurate enough at predicting the labels of new data run through it. For example, developers can use Caffe, a popular deep-learning library, to train a model using thousands or millions of labeled images. Once the model is trained, developers can use it within applications to probabilistically identify objects in a new image (the first sketch after the list below shows what this inference step can look like). Conceptually like machine learning, yes, but deep learning is different because:

  • Gnarly data is welcome. Deep learning is unique in that it can work directly on digital representations of data such as images, video, and audio. Traditional machine learning must preprocess this data in some way, and the data scientist has to tell the algorithm what relevant characteristics to look for when making a decision. Deep-learning algorithms figure this out themselves, without having to be programmed for it. That opens up a class of complex problems that previously relied on preprocessed abstractions of images, voice, video, and non-uniform data. For example, deep-learning algorithms can work directly on pixel data, with no preprocessing required (the convolutional sketch after this list does exactly that).

  • Feature engineering is built-in. One of the biggest challenges in creating traditional machine learning models is feature engineering: the process in which the data scientist hypothesizes what data the algorithm will find useful. This places an iterative burden on data scientists, because they often need to introduce new data, new formats of data, or derived data to get the algorithm to work. Deep learning circumvents much of this with automatic feature extraction. Deep-learning models are capable of learning to focus on the right features by themselves, requiring little guidance from the programmer (who does not need to be an expert data scientist). This makes deep learning an extremely powerful tool for modern machine learning (the convolutional sketch after this list trains on raw pixels with no hand-crafted features).

  • Topology design is a new prerequisite. It's true that deep learning gives data scientists a break when it comes to feature engineering. However, it adds a new task to the process: choosing from among many permutations of configuration parameters, such as the number of layers in the network. Like feature engineering, this can be a highly iterative process, with data scientists trying many different combinations until they get it right. Rather than running one combination, testing it, changing the parameters, and trying again, many data scientists take a brute-force approach to designing the network topology, using very large computing clusters to run many combinations simultaneously (a minimal version of this search loop is sketched after the list).

  • Supercomputers are required. A unique characteristic of deep learning is that the training process involves mathematical vector operations that often add up to billions of computations. To meet the need for affordable supercomputing power, deep-learning researchers adopted graphics processing units (GPUs), which have thousands of cores and can perform the operations necessary to train deep-learning networks. NVIDIA is the best-known GPU maker; it designs, develops, and markets deep-learning GPU systems that support popular open source deep-learning libraries. Cray Computer offers supercomputers engineered for deep-learning applications that embed NVIDIA chips; these are supercomputers, yet they are priced within reach of enterprises. Startups like Graphcore and Wave Computing are working on new architectures to speed up deep learning as well. Public cloud players such as Amazon Web Services (AWS) and Google also offer GPU instances that support deep learning (a quick GPU check appears among the sketches after this list).

  • Purpose-built open source libraries are available. Deep learning has its own set of libraries, and they are evolving quickly. Many types of deep-learning algorithms have been developed. The most popular are multilayered convolutional networks with backpropagation, ideal for image and voice recognition, and recurrent neural networks, ideal for natural language processing (NLP). Popular open source deep-learning libraries include Caffe, Deeplearning4j, MXNet, TensorFlow, and Theano. The sketches below illustrate several of these ideas using TensorFlow's Keras API.
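
To make the image-classification example from the opening paragraph concrete, here is a minimal inference sketch. It uses TensorFlow's Keras API and its bundled ImageNet-pretrained ResNet50 model as a stand-in for a Caffe-trained model, and the image path is a placeholder:

    # Hypothetical sketch: probabilistic object identification with a
    # pretrained network (a Keras stand-in for the Caffe example above).
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, decode_predictions, preprocess_input)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")  # downloads pretrained weights on first use

    img = image.load_img("new_image.jpg", target_size=(224, 224))  # placeholder path
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    preds = model.predict(x)
    for _, label, prob in decode_predictions(preds, top=3)[0]:
        print(f"{label}: {prob:.2%}")  # top guesses with their probabilities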
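
The "no feature engineering" point is easy to see in code. The sketch below, with illustrative layer sizes and the small MNIST digit data set as a convenient example, trains a convolutional network directly on raw pixel values; nothing is hand-crafted:

    # Minimal sketch: convolutional layers learn their own features from raw pixels.
    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0  # raw pixels, scaled to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),  # higher-level features
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),   # digit class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1)  # backpropagation adjusts the filters

Note that the convolutional filters start out random and are tuned entirely by training; the data scientist never specifies edges, corners, or any other feature.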
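
The brute-force topology search can be sketched as a simple loop. The candidate depths, the layer width, and the single training epoch below are illustrative choices, not recommendations; in practice such trials would run in parallel on a cluster:

    # Illustrative topology search: train one model per candidate depth,
    # keep the configuration that validates best.
    import tensorflow as tf

    (x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
    x_train, x_val = x_train / 255.0, x_val / 255.0

    def build(num_hidden_layers):
        layers = [tf.keras.layers.Flatten(input_shape=(28, 28))]
        layers += [tf.keras.layers.Dense(128, activation="relu")
                   for _ in range(num_hidden_layers)]
        layers.append(tf.keras.layers.Dense(10, activation="softmax"))
        model = tf.keras.Sequential(layers)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    scores = {}
    for depth in (1, 2, 4):  # candidate layer counts
        model = build(depth)
        model.fit(x_train, y_train, epochs=1, verbose=0)
        _, scores[depth] = model.evaluate(x_val, y_val, verbose=0)

    best = max(scores, key=scores.get)
    print(f"best hidden-layer count: {best} (validation accuracy {scores[best]:.3f})")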
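
On the hardware point, frameworks make GPU use nearly transparent. This quick check (TensorFlow; the matrix size is arbitrary) reports whether a GPU is visible and places one large matrix multiply, the kind of dense vector math that dominates training, on it:

    # Sketch: detect a GPU and run a representative vector operation on it.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs visible:", [g.name for g in gpus] or "none; falling back to CPU")

    device = "/GPU:0" if gpus else "/CPU:0"
    with tf.device(device):
        a = tf.random.normal((4096, 4096))
        b = tf.random.normal((4096, 4096))
        c = tf.matmul(a, b)  # billions of multiply-adds in one call
    print(c.shape, "computed on", device)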
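
Finally, a recurrent network for NLP-style input is just as brief to sketch. The vocabulary size, layer widths, and sentiment-style output below are all illustrative:

    # Sketch: an LSTM classifier over sequences of token ids.
    import tensorflow as tf

    VOCAB_SIZE = 10_000  # illustrative vocabulary size

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 64),       # token ids -> dense vectors
        tf.keras.layers.LSTM(64),                        # recurrence carries context
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., a sentiment score
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # A dummy batch of 2 sequences of 20 token ids, just to show the shapes.
    dummy = tf.random.uniform((2, 20), maxval=VOCAB_SIZE, dtype=tf.int32)
    print(model(dummy).shape)  # (2, 1): one probability per sequence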

Forrester clients can read the full report, "Deep Learning: A Revolution Started For Courageous Enterprises."

Mike Gualtieri
Diego LoGuidice and Brandon Purcell are co-authors of this research.