Padding
In deep learning, padding refers to the process of adding extra pixels, usually set to zero, around the borders of an image before applying convolution operations in a Convolutional Neural Network (CNN). The purpose is to control the spatial dimensions of the output and prevent the shrinking effect that repeated convolutions cause.

Context and role
Without padding, each convolution reduces the spatial size of the feature map. For example, convolving a 5×5 filter over a 28×28 image yields a 24×24 feature map. After several layers, the feature maps become very small, limiting the network's ability to learn rich features and discarding information near the borders. Padding solves this problem by preserving spatial dimensions.
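The shrinking effect follows directly from the output-size formula floor((n + 2p − k) / s) + 1, for input size n, kernel size k, padding p, and stride s. A minimal sketch (the helper function name is illustrative, not from any library):

```python
def conv_output_size(n, k, p=0, s=1):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# 5x5 filter over a 28x28 image, no padding: the map shrinks.
print(conv_output_size(28, 5))        # 24

# Padding of 2 on each side restores the original size.
print(conv_output_size(28, 5, p=2))   # 28
```

In general, a kernel of odd size k preserves the input size when p = (k − 1) / 2 and s = 1, which is exactly what "same" padding computes.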

Types of padding

  • Zero padding: most common, filling the border with zeros.
  • Same padding: chooses the padding so that, with stride 1, the output has the same spatial dimensions as the input.
  • Valid padding: no padding at all; the feature map shrinks.
  • Reflect/replicate padding: uses mirrored pixels instead of zeros, useful in image processing tasks to reduce artifacts.
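The border-filling modes above can be compared directly with NumPy's `np.pad`, which supports all of them (`constant` for zero padding, `reflect` for mirroring, `edge` for replication):

```python
import numpy as np

img = np.arange(9).reshape(3, 3)  # a toy 3x3 "image"

zero = np.pad(img, 1, mode="constant")   # zero padding: border filled with 0
reflect = np.pad(img, 1, mode="reflect")  # mirrored pixels, edge not repeated
replicate = np.pad(img, 1, mode="edge")   # nearest edge pixel repeated

print(zero.shape)  # (5, 5) -- one extra pixel on every side
```

"Valid" padding is simply the absence of any of these calls: the convolution is applied only where the kernel fits entirely inside the image.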

Applications
Same padding is the default in most modern CNN architectures (e.g., the 3×3 convolutions in VGG and ResNet), where stacking many layers without padding would shrink feature maps too quickly. Reflect and replicate padding are common in image generation and restoration tasks, where zero borders would produce visible artifacts.
Challenges
While padding helps preserve information, it can introduce artificial borders that may bias the network. Too much padding may also add unnecessary computation without significant gain.

📚 Further Reading

  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
  • Dumoulin, V. & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv:1603.07285.