Precision (computer science)
In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.
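The relationship between bits and decimal digits can be made concrete with a short Python sketch. It assumes a CPython build whose `float` type is an IEEE 754 binary64 (double-precision) value, which is the case on virtually all current platforms:

```python
import sys

# Precision of the platform's built-in float type
# (assumed here to be IEEE 754 binary64).
print(sys.float_info.mant_dig)  # 53 -> significand precision in bits
print(sys.float_info.dig)       # 15 -> decimal digits that round-trip reliably
print(sys.float_info.epsilon)   # 2.220446049250313e-16 -> gap between 1.0 and the next float
```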
Some of the standardized floating-point precision formats are:
- Half-precision floating-point format
- Single-precision floating-point format
- Double-precision floating-point format
- Quadruple-precision floating-point format
- Octuple-precision floating-point format
Of these, the octuple-precision format is rarely used. The names refer to the size of a computer word, which is commonly 32 bits on current architectures: a single-precision value occupies one word, a half-precision value half a word, a double-precision value two words, and so on. The single- and double-precision formats are the most widely used and are supported on nearly all platforms. The use of half-precision and minifloat formats has been increasing, especially in machine learning, since many machine learning algorithms are inherently error-tolerant.
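A minimal sketch comparing the half-, single-, and double-precision formats, assuming NumPy is available (it exposes them as `float16`, `float32`, and `float64`):

```python
import numpy as np

# Compare storage size, approximate decimal precision, and machine epsilon
# of the three most widely supported IEEE 754 formats.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:>8}: {info.bits:3d} bits, "
          f"~{info.precision} decimal digits, eps = {info.eps}")
```

On a typical system this reports roughly 3, 6, and 15 decimal digits of precision for the 16-, 32-, and 64-bit formats respectively, which illustrates why error-tolerant workloads such as machine learning can often trade precision for smaller, faster arithmetic.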