
Overflow and Underflow in Deep Learning

by Mansoor Ahmed
Posted: Oct 18, 2021
Introduction

Deep learning algorithms generally require a large amount of numerical computation. This typically refers to algorithms that solve mathematical problems by updating estimates of the solution through an iterative process, rather than analytically deriving a formula that gives a symbolic expression for the correct solution.

Common operations include optimization and solving systems of linear equations. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented exactly using a finite amount of memory. In this post, we discuss overflow and underflow in detail.

Description
  • Overflow and underflow are both errors resulting from a shortage of space.
  • At the most basic level, they differ in the data types they affect, such as integers and floating-point numbers.
  • Unlike quantities in the physical world, a number stored in a computer occupies a discrete number of digits.
  • When a calculation produces a result that needs an extra digit, we cannot simply append that digit to the result.
  • Therefore, we get an overflow or underflow error.
  • Overflow errors happen when working with both integers and floating-point numbers.
  • Underflow errors are normally associated only with floating-point numbers.
  • The fundamental difficulty in performing continuous math on a digital computer is that we need to represent infinitely many real numbers with a finite number of bit patterns.
  • This means that for almost all real numbers, we incur some approximation error when we represent the number in the computer.
  • In many cases this is just rounding error.
  • Rounding error is problematic, especially when it compounds across many operations.
  • It can cause algorithms that work in theory to fail in practice if they are not designed to minimize the accumulation of rounding error; the sketch after this list shows how such error builds up.
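
A minimal sketch of how rounding error accumulates over many operations, using plain Python floats (the repeated addition of 0.1 and the loop count are illustrative assumptions, not taken from the article):

    # 0.1 has no exact binary floating-point representation, so adding it
    # repeatedly drifts away from the mathematically exact answer of 100.0.
    total = 0.0
    for _ in range(1000):
        total += 0.1

    print(total)           # close to, but not exactly, 100.0
    print(total == 100.0)  # False: the accumulated rounding error is visible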
Overflow
  • Overflow occurs when a calculation produces a number larger than the largest number we can represent.

Example

  • Consider the following illustration using unsigned integers.
  • Assume we have an integer stored in one byte (8 bits).
  • The largest number we can store in one byte is 255.
  • So let's take that value.
  • In binary, this is 11111111.
  • Now, suppose we add 2 (00000010) to it.
  • The result is 257, which is 100000001 in binary.
  • The result has 9 bits, while the integers we are working with have only 8.
  • In this scenario, a computer will drop the most significant bit (MSB) and store the rest, as the sketch after this list shows.
  • This is essentially equivalent to r % 2^n,
  • where r is the result, n is the number of bits available, and % is the modulo operator.
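
A short sketch of the 8-bit wrap-around described above, using NumPy's fixed-width unsigned integers (the choice of numpy.uint8 is an assumption for illustration; any 8-bit unsigned type behaves the same way):

    import numpy as np

    # An 8-bit unsigned integer can hold values 0..255.
    x = np.array([255], dtype=np.uint8)   # 11111111 in binary

    # Adding 2 would give 257 (100000001), which needs 9 bits, so the
    # most significant bit is dropped and the stored value wraps around.
    result = x + np.uint8(2)

    print(result[0])            # 1
    print((255 + 2) % 2 ** 8)   # 1, matching r % 2^n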
Underflow
  • One form of rounding error that is particularly troublesome is underflow.
  • Underflow occurs when numbers near zero are rounded to zero.
  • Many functions behave qualitatively differently when their argument is zero rather than a small positive number.

Example

  • For instance, we usually want to avoid division by zero.
  • Some software environments will raise an exception when this occurs.
  • Others will return a result with a placeholder not-a-number (NaN) value.
  • The same issue arises when taking the logarithm of zero.
  • This is usually treated as −∞, which then becomes not-a-number if it is used in further arithmetic operations; the sketch after this list demonstrates both cases.
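
A small NumPy sketch of these behaviours (the specific tiny constant and the use of float32 are illustrative assumptions):

    import numpy as np

    # A positive number too small for float32 underflows and is rounded to zero.
    tiny = np.float32(1e-46)
    print(tiny)                          # 0.0

    with np.errstate(divide="ignore", invalid="ignore"):
        print(np.float32(1.0) / tiny)    # inf: division by the underflowed zero
        log_zero = np.log(np.float32(0.0))
        print(log_zero)                  # -inf: logarithm of zero
        print(log_zero - log_zero)       # nan: -inf used in further arithmetic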
Softmax Function
  • The softmax function is a key example that must be stabilized against underflow and overflow.
  • The softmax function is frequently used to predict the probabilities associated with a multinoulli distribution.
  • The softmax function is defined as: softmax(x)_i = exp(x_i) / Σ_j exp(x_j).
  • If x is very large, exp(x) will overflow and the entire expression becomes NaN.
  • Consider what happens when all of the x_i are equal to some constant c.
  • Analytically, we can see that all of the outputs should be equal to 1/n.
  • Numerically, this may not happen when c has large magnitude.
  • If c is very negative, then exp(c) will underflow.
  • This means the denominator of the softmax becomes 0.
  • Consequently, the final result is undefined.
  • If c is very large and positive, exp(c) will overflow.
  • Again, the expression as a whole becomes undefined, as the sketch following this list demonstrates.
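
A sketch of the naive softmax failing for a constant vector with large magnitude, as described above (the function name naive_softmax and the test values are assumptions for illustration):

    import numpy as np

    def naive_softmax(x):
        # Direct implementation: exp(x_i) / sum_j exp(x_j).
        e = np.exp(x)
        return e / np.sum(e)

    with np.errstate(over="ignore", invalid="ignore"):
        # All x_i equal to a large positive c: exp(c) overflows, giving nan.
        print(naive_softmax(np.array([1000.0, 1000.0, 1000.0])))     # [nan nan nan]
        # All x_i equal to a very negative c: exp(c) underflows, the
        # denominator becomes 0, and the result is again nan.
        print(naive_softmax(np.array([-1000.0, -1000.0, -1000.0])))  # [nan nan nan]
        # Analytically, both results should be [1/3, 1/3, 1/3].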
Solution

All of these problems can be resolved by instead evaluating softmax(z), where z = x − max_i x_i:

  • If x is very small, exp(x) will underflow and the whole expression becomes 0.
  • Simple algebra shows that the value of the softmax function is not changed analytically by adding or subtracting a scalar from the input vector.
  • Subtracting max_i x_i results in the largest argument to exp being 0.
  • That rules out the possibility of overflow.
  • Similarly, at least one term in the denominator has a value of 1.
  • That rules out the possibility of underflow in the denominator leading to a division by 0; a minimal sketch of the stabilized version follows this list.
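
A minimal sketch of the stabilized evaluation, subtracting max_i x_i before exponentiating (the helper name stable_softmax is an assumption; the shift itself is the technique described above):

    import numpy as np

    def stable_softmax(x):
        # Evaluate softmax(z) with z = x - max_i x_i.
        z = x - np.max(x)       # the largest argument to exp is now 0
        e = np.exp(z)
        return e / np.sum(e)    # the denominator contains at least one term equal to 1

    print(stable_softmax(np.array([1000.0, 1000.0, 1000.0])))     # approximately [1/3, 1/3, 1/3]
    print(stable_softmax(np.array([-1000.0, -1000.0, -1000.0])))  # approximately [1/3, 1/3, 1/3]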
For more details visit: https://www.technologiesinindustry4.com/2021/10/overflow-and-underflow-in-deep-learning.html
About the Author

Mansoor Ahmed is a chemical engineer and web developer.
