Introduction to Deep Learning: What Is a Convolutional Neural Network?
Source series: Introduction to Deep Learning
This MATLAB® Tech Talk explains the fundamentals of convolutional neural networks (CNNs). A CNN is, broadly speaking, a common deep learning architecture; here we take a closer look at what a CNN actually is. The video breaks this complex topic into easy-to-follow parts, covering three concepts: local receptive fields, shared weights and biases, and activation and pooling.
It then brings these three concepts together to show how the layers of a convolutional neural network are configured.
The video also covers three ways to train a convolutional neural network for image analysis: 1) training a model from scratch; 2) using transfer learning (applying knowledge of one type of problem to solve a similar problem); and 3) using a pretrained CNN to extract features for training a machine learning model.
Learn more about using MATLAB for deep learning here.
A convolutional neural network, or CNN, is a network architecture for deep learning. It learns directly from images. A CNN is made up of several layers that process and transform an input to produce an output.
You can train a CNN to do image analysis tasks, including scene classification, object detection and segmentation, and image processing. In order to understand how CNNs work, we'll cover three key concepts: local receptive fields, shared weights and biases, and activation and pooling.
Finally, we'll briefly discuss the three ways to train CNNs for image analysis.
So let's start with the concept of local receptive fields. In a typical neural network, each neuron in the input layer is connected to every neuron in the hidden layer. However, in a CNN, only a small region of input layer neurons connects to each neuron in the hidden layer. These regions are referred to as local receptive fields.
The local receptive field is translated across an image to create a feature map from the input layer to the hidden layer neurons. You can use convolution to implement this process efficiently. That's why it is called a convolutional neural network. The second concept we'll discuss is about shared weights and biases.
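The sliding receptive field and convolution described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the MATLAB implementation: a small kernel of shared weights is applied to each image patch in turn, producing a feature map.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a small kernel (the local receptive field) over the image.

    Each output value is computed from only a small region of the input,
    and the same kernel weights are reused at every position.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # local receptive field
            out[i, j] = np.sum(patch * kernel)  # weighted sum -> feature map
    return out

# A vertical-edge detector applied to a tiny 5x5 image whose right
# side is bright and left side is dark.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

feature_map = conv2d_valid(image, kernel)
print(feature_map)  # strong responses where the vertical edge is
```

Each hidden neuron sees only a 3x3 patch, yet together they cover the whole image; the large responses line up with the edge in the input.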
Like a typical neural network, a CNN has neurons with weights and biases. The model learns these values during the training process, and it continuously updates them with each new training example. However, in the case of CNNs, the weights and bias values are the same for all hidden neurons in a given layer.
This means that all hidden neurons are detecting the same feature, such as an edge or a blob, in different regions of the image. This makes the network tolerant to translation of objects in an image. For example, a network trained to recognize cats will be able to do so wherever the cat appears in the image.
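A toy NumPy sketch of why shared weights give this translation tolerance: the same kernel produces the same peak response no matter where the feature sits in the image; only the peak's location moves.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid 2-D cross-correlation: the same weights at every position."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((2, 2))  # a "blob" detector: responds to bright 2x2 regions

# The same bright blob placed at two different positions.
a = np.zeros((6, 6)); a[1:3, 1:3] = 1.0
b = np.zeros((6, 6)); b[3:5, 3:5] = 1.0

resp_a = conv2d_valid(a, kernel)
resp_b = conv2d_valid(b, kernel)

# Because the weights are shared, the peak response is identical;
# only its location moves with the blob.
print(resp_a.max(), resp_b.max())  # 4.0 4.0
```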
Our third and final concept is activation and pooling. The activation step applies a transformation to the output of each neuron by using activation functions. Rectified linear unit, or ReLU, is an example of a commonly used activation function. It takes the output of a neuron and, if it is positive, passes it through unchanged.
If the output is negative, the function maps it to zero. You can further transform the output of the activation step by applying a pooling step. Pooling reduces the dimensionality of the feature map by condensing the output of small regions of neurons into a single output. This helps simplify the following layers and reduces the number of parameters that the model needs to learn.
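The activation and pooling steps can be sketched as follows (an illustrative NumPy toy, assuming a non-overlapping 2x2 max-pooling window, which is a common choice):

```python
import numpy as np

def relu(x):
    """Pass positive values through unchanged; map negatives to zero."""
    return np.maximum(x, 0)

def max_pool_2x2(x):
    """Condense each non-overlapping 2x2 region into its maximum value."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([
    [ 1.0, -2.0,  3.0,  0.5],
    [-1.0,  4.0, -0.5,  2.0],
    [ 0.0, -3.0,  1.5, -1.0],
    [ 2.5,  1.0, -2.0,  0.0],
])

activated = relu(feature_map)     # negatives become 0
pooled = max_pool_2x2(activated)  # 4x4 -> 2x2, keeping the strongest responses
print(pooled)
```

The pooled output is a quarter of the size, so the layers that follow have far fewer values to process.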
Now let's pull it all together. Using these three concepts, we can configure the layers in a CNN. A CNN can have tens or hundreds of hidden layers that each learn to detect different features in an image. In this feature map, we can see that every hidden layer increases the complexity of the learned image features.
For example, the first hidden layer learns how to detect edges, and the last learns how to detect more complex shapes. Just like in a typical neural network, the final layer connects every neuron from the last hidden layer to the output neurons. This produces the final output. There are three ways to use CNNs for image analysis.
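Putting the three concepts together, one hidden "layer" can be sketched as convolution, then activation, then pooling; stacking two such layers shows how the spatial size shrinks from layer to layer (a toy NumPy sketch with random, unlearned kernels):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid 2-D cross-correlation with a shared kernel."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool_2x2(x):
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((12, 12))

# One "layer" = convolution (local receptive fields + shared weights),
# then activation, then pooling. In a real CNN the kernels are learned
# during training; random kernels here just show the data flow.
x = image
for layer in range(2):
    kernel = rng.standard_normal((3, 3))
    x = max_pool_2x2(relu(conv2d_valid(x, kernel)))
    print(f"after layer {layer + 1}: {x.shape}")
```

Each pass shrinks the spatial dimensions (12x12 to 5x5 to 1x1 here), which is why deep stacks of such layers can summarize an entire image into a compact representation for the final fully connected layer.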
The first method is to train the CNN from scratch. This method is highly accurate, although it is also the most challenging, as you might need hundreds of thousands of labeled images and significant computational resources.
The second method relies on transfer learning, which is based on the idea that you can use knowledge of one type of problem to solve a similar problem. For example, you could use a CNN model that has been trained to recognize animals to initialize and train a new model that differentiates between cars and trucks.
This method requires less data and fewer computational resources than the first. With the third method, you can use a pre-trained CNN to extract features for training a machine learning model. For example, a hidden layer that has learned how to detect edges in an image is broadly relevant to images from many different domains. This method requires the least amount of data and computational resources.
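The feature-extraction approach can be sketched with a toy example. Here, fixed edge-detector kernels stand in for a pretrained layer, their averaged responses form a small feature vector, and a minimal nearest-centroid classifier stands in for the downstream machine learning model; all names and data are illustrative, not a real pretrained network.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid 2-D cross-correlation with a shared kernel."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Stand-ins for a pretrained layer: fixed edge detectors instead of
# kernels learned from a large labeled dataset.
vertical_edge = np.array([[-1, 0, 1]] * 3, dtype=float)
horizontal_edge = vertical_edge.T

def extract_features(image):
    """Summarize each filter's response into a small feature vector."""
    return np.array([
        np.abs(conv2d_valid(image, vertical_edge)).mean(),
        np.abs(conv2d_valid(image, horizontal_edge)).mean(),
    ])

def make_split(vertical, n=8):
    """Toy images: a vertical or horizontal light/dark boundary."""
    img = np.zeros((n, n))
    if vertical:
        img[:, :n // 2] = 1.0  # left half bright -> vertical boundary
    else:
        img[:n // 2, :] = 1.0  # top half bright -> horizontal boundary
    return img

# "Train" a minimal classifier (nearest class centroid) on the features.
train = [(make_split(True), 0), (make_split(False), 1)]
centroids = {label: extract_features(img) for img, label in train}

def classify(image):
    f = extract_features(image)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

print(classify(make_split(True)), classify(make_split(False)))
```

The point of the sketch: the convolutional filters are never retrained; only the small classifier on top sees the task-specific data, which is why this route needs the least data and compute.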
I hope you found this video useful. For more information, visit MathWorks.com/deep-learning.