Floating point


"Floating point" is a term used to describe a number that has a fractional ("less than one") component, indicated by a radix point (written as a period). In computers, numbers are stored in formats of a fixed width; 32-bit, 64-bit and so on. Within that fixed width, the point separating the whole number from the fraction can move to different positions, hence the term "float".

The IEEE 754 standard dictates how most computers and software handle the binary representation of these floating point numbers.
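As a quick illustration (a Python sketch, not part of any standard's reference code), the raw IEEE 754 bit pattern of a double-precision number can be inspected with the standard `struct` module. The helper name `float_bits` is our own:

```python
import struct

def float_bits(x: float) -> str:
    """Return the 64-bit IEEE 754 double-precision bit pattern of x,
    split into its sign (1 bit), exponent (11 bits) and fraction (52 bits)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    s = f"{bits:064b}"
    return f"{s[0]} {s[1:12]} {s[12:]}"

print(float_bits(1.0))
# 1.0 is stored as sign 0, a biased exponent of 1023, and an all-zero fraction:
# 0 01111111111 0000000000000000000000000000000000000000000000000000
print(float_bits(-0.5))
```

The exponent field is what lets the point "float": changing it shifts where the point sits relative to the fraction bits.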

It is common to see the terms "single-precision" and "double-precision" floating point. These refer to numbers stored as 32-bit and 64-bit values, respectively.
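The practical difference between the two widths is how many significant digits survive. A small Python sketch (the round-trip helper `to_single` is our own) shows a value losing precision when squeezed into 32 bits:

```python
import struct

def to_single(x: float) -> float:
    """Round-trip x through a 32-bit single-precision float."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

value = 0.1
print(f"double: {value!r}")
print(f"single: {to_single(value)!r}")
# 0.1 has no exact binary representation at either width; single precision
# keeps roughly 7 significant decimal digits, double roughly 15-16.
```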


© Alteeve's Niche! Inc. 1997-2019