
Pooling in Convolutional Neural Networks

Author: Mansoor Ahmed
Posted: Jan 07, 2022
Introduction

Pooling forms the third stage of a typical convolutional layer. We use a pooling function to modify the output of the layer further. Pooling is a key stage in convolution-based systems: it decreases the dimensionality of the feature maps by summarizing a set of values as a smaller number of values.

The pooling stage transforms the joint feature representation by keeping the valuable information and discarding the irrelevant detail. Pooling operators provide a form of invariance to spatial transformations, and they also decrease the computational complexity of upper layers by reducing the number of connections between convolutional layers.

This layer down-samples the feature maps that come from the preceding layer and produces new feature maps with a reduced resolution. It serves two key purposes:

  • Decrease the number of parameters or weights, and with them the computational cost.
  • Control overfitting.

An ideal pooling method keeps only the valuable information and discards the irrelevant parts. In this article, we will learn about pooling in convolutional neural networks in depth.
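As a concrete starting point, here is a minimal sketch of 2×2 max pooling in NumPy (the feature-map values are made up for illustration, not taken from the article): a 4×4 feature map is reduced to a 2×2 map by keeping only the maximum of each non-overlapping 2×2 region.

```python
# Minimal 2x2 max pooling sketch (hypothetical values, NumPy only).
import numpy as np

def max_pool_2d(feature_map, size=2):
    """Non-overlapping max pooling over a 2-D feature map."""
    h, w = feature_map.shape
    cropped = feature_map[:h - h % size, :w - w % size]           # crop to a multiple of `size`
    ch, cw = cropped.shape
    windows = cropped.reshape(ch // size, size, cw // size, size) # split into pooling windows
    return windows.max(axis=(1, 3))                               # keep the max of each window

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 8, 1],
                        [3, 4, 0, 9]], dtype=float)

print(max_pool_2d(feature_map))
# [[6. 4.]
#  [7. 9.]]
```

Note how sixteen values are pooled into four: each output value summarizes a whole neighborhood of the input.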

Description

A typical layer of a convolutional network consists of three stages:

  • In the first stage, the layer performs several convolutions in parallel to produce a set of linear activations.
  • In the second stage, each linear activation is run through a nonlinear activation function, such as the rectified linear activation. This stage is sometimes called the detector stage.
  • In the third stage, we use a pooling function to modify the output of the layer further (a small code sketch of all three stages follows this list).
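The sketch below walks a 1-D signal through all three stages: convolution, nonlinearity, and max pooling. The input and kernel values are invented for illustration.

```python
# The three stages of a convolutional layer on a 1-D signal
# (hypothetical input and kernel values).
import numpy as np

x = np.array([0., 1., 2., 3., 2., 1., 0., -1.])  # input signal
kernel = np.array([1., 0., -1.])                 # assumed edge-like kernel

# Stage 1 (convolution): a linear activation; here a single valid convolution.
linear = np.convolve(x, kernel, mode="valid")

# Stage 2 (detector): an elementwise nonlinearity, e.g. the rectified linear unit.
detector = np.maximum(linear, 0.0)

# Stage 3 (pooling): max pooling with width 2 and stride 2.
trimmed = detector[:len(detector) - len(detector) % 2]
pooled = trimmed.reshape(-1, 2).max(axis=1)

print(linear)    # [ 2.  2.  0. -2. -2. -2.]
print(detector)  # [2. 2. 0. 0. 0. 0.]
print(pooled)    # [2. 0. 0.]
```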

As the figure above shows, there are two commonly used sets of terminology for labeling these layers.

Left side labeled terminology

  • The convolutional net is viewed as a small number of relatively complex layers, each of which has several stages.
  • There is a one-to-one mapping between kernel tensors and network layers.

Right side labeled terminology

  • The convolutional net is viewed as a larger number of simple layers.
  • Every step of processing is regarded as a layer in its own right.

Pooling helps to make the representation approximately invariant to small translations of the input. This means that if we translate the input by a small amount, the values of most of the pooled outputs do not change.

The diagram above shows how max pooling introduces invariance.

Top View

  • A view of the middle of the output of a convolutional layer.
  • The bottom row shows the outputs of the nonlinearity.
  • The top row shows the outputs of max pooling.
  • There is a stride of one pixel between pooling regions.
  • Each pooling region is three pixels wide.

Bottom View

  • A view of the same network, after the input has been shifted to the right by one pixel.
  • Every value in the bottom row has changed.
  • However, only half of the values in the top row have changed.
  • This is because the max-pooling units are sensitive only to the maximum value in the neighborhood, not to its exact location (the sketch after this list reproduces the effect numerically).
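Here is a small numerical sketch of this effect, with made-up detector values: max pooling with a width of three and a stride of one, applied before and after shifting the input one pixel to the right.

```python
# Translation invariance of max pooling (hypothetical detector values).
import numpy as np

def max_pool_1d(x, width=3, stride=1):
    """Max pooling over a 1-D row of detector outputs."""
    return np.array([x[i:i + width].max()
                     for i in range(0, len(x) - width + 1, stride)])

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0])  # outputs of the nonlinearity
shifted = np.roll(detector, 1)                  # input shifted right by one pixel

print(max_pool_1d(detector))  # [1.  1.  0.2]
print(max_pool_1d(shifted))   # [1. 1. 1.]  -- every detector value moved,
                              # but only one of three pooled values changed
```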

Pooling over spatial regions produces invariance to translation. However, if we instead pool over the outputs of separately parameterized convolutions, the features can learn which transformations to become invariant to.

See the example of learned invariances in the figure above.

  • A pooling unit that pools over several features learned with separate parameters can learn to be invariant to transformations of the input.
  • Here we show how a set of three learned filters and a max-pooling unit can learn to become invariant to rotation.
  • All three filters are intended to detect a hand-written 5.
  • Each filter attempts to match a slightly different orientation of the 5.
  • When a 5 appears in the input, the corresponding filter matches it and causes a large activation in its detector unit.
  • The max-pooling unit then has a large activation regardless of which detector unit was activated.
  • Here we show how the network processes two different inputs.
  • They result in two different detector units being activated.
  • The effect on the pooling unit is roughly the same either way.
  • This principle is leveraged by maxout networks.
  • Max pooling over spatial positions is naturally invariant to translation.
  • This multi-channel approach is only necessary for learning other transformations (a small sketch of cross-filter pooling follows this list).
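A minimal sketch of this cross-filter pooling, with invented activation numbers: each entry holds one filter's detector response to an input, and the pooling unit takes the maximum across filters rather than across spatial positions.

```python
# Pooling across separately parameterized filters (maxout-style),
# with hypothetical detector activations.
import numpy as np

# Detector responses of three filters, each tuned to a different
# orientation of a hand-written 5.
upright_5 = np.array([9.0, 0.2, 0.1])  # upright input: filter 0 fires
rotated_5 = np.array([0.1, 0.3, 9.0])  # rotated input: filter 2 fires

# Max pooling across the filter (channel) dimension.
print(upright_5.max())  # 9.0
print(rotated_5.max())  # 9.0 -- the pooled output is the same either way
```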

Because pooling summarizes the responses over a whole neighborhood, it is possible to use fewer pooling units than detector units. We do this by reporting summary statistics for pooling regions spaced k pixels apart rather than one pixel apart. This improves the computational efficiency of the network, because the next layer has roughly k times fewer inputs to process. The reduced input size can also lead to better statistical efficiency and lower memory requirements for storing the parameters.

The diagram above shows pooling with downsampling.

  • We use max-pooling with a pool width of three and a stride between pools of two.
  • This reduces the representation size by a factor of two.
  • That reduces the computational and statistical burden on the next layer.
  • The rightmost pooling region has a smaller size, but it must be included if we do not want to ignore some of the detector units (the sketch below includes this case).
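The sketch below reproduces this setup on made-up detector values: max pooling with a width of three and a stride of two, keeping the narrower rightmost region so that no detector unit is ignored.

```python
# Pooling with downsampling: width 3, stride 2 (hypothetical values).
import numpy as np

def max_pool_downsample(x, width=3, stride=2):
    """Strided max pooling that keeps the final (possibly narrower) region."""
    return np.array([x[i:i + width].max() for i in range(0, len(x), stride)])

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0, 0.5])
print(max_pool_downsample(detector))  # [1.  0.2 0.5] -- six inputs, three outputs
```

The last window covers only two values instead of three, yet keeping it ensures the final detector units still influence the output.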

For more details, visit: https://www.technologiesinindustry4.com/2021/12/pooling-in-convolutional-neural-network.html

About the Author

Mansoor Ahmed, Chemical Engineer and Web Developer
