Infinite-precision arithmetic allows computer programmers to represent numbers comprising an unlimited number of digits without losing any accuracy. At first glance this may seem unimportant, but the way computers are designed and built means that errors in precision can have dramatic, sometimes disastrous, consequences.
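The difference is easy to see in a short sketch. Python is used here purely as an illustration (the text names no particular language): its floating-point numbers have fixed precision, while its decimal module and built-in integers carry as many digits as needed.

```python
from decimal import Decimal

# Fixed-precision binary floating point cannot represent 0.1 exactly,
# so a tiny error creeps in and accumulates with each addition.
total = sum(0.1 for _ in range(10))
print(total)           # 0.9999999999999999, not 1.0
print(total == 1.0)    # False

# Arbitrary-precision decimal arithmetic keeps every digit.
exact = sum(Decimal("0.1") for _ in range(10))
print(exact)           # 1.0
print(exact == 1)      # True
```

An error of one part in ten quadrillion looks harmless, but repeated millions of times, or scaled up by a large multiplier, it can become the kind of dramatic failure described above.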
Computers typically store numbers as combinations of one or more eight-bit bytes, or octets. The software that runs on a computer ultimately stores numbers using the fundamental units supplied by the hardware, but the programming language used to write that software dictates how many bytes are used to represent each kind of number.
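A rough sketch of this fixed-size storage, again in Python for illustration: the struct module packs a number into the same fixed number of bytes a C-like language would use, and the sizes shown are the typical ones on most current platforms.

```python
import struct

# How many bytes common hardware-level formats occupy (typical values).
print(struct.calcsize("i"))   # 4 bytes: 32-bit integer
print(struct.calcsize("q"))   # 8 bytes: 64-bit integer
print(struct.calcsize("d"))   # 8 bytes: double-precision float

# A value that fits in 4 bytes packs without trouble...
print(struct.pack("i", 2_000_000_000))

# ...but a slightly larger value no longer fits in 32 bits,
# while Python's own int type simply keeps growing without losing digits.
try:
    struct.pack("i", 3_000_000_000)
except struct.error as exc:
    print("overflow:", exc)

print(3_000_000_000 ** 4)     # exact result, as many digits as needed
```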
Numbers in computer programming can be broadly divided into two types: whole numbers, or integers; and "floating-point" numbers, or fractional numbers. Numbers like 1 and 2 are integers, while numbers like 1.5 and 33.67 are floating-point numbers because the decimal point (.) "floats" in the string of digits.
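The two kinds are distinct types in most languages, and the "floating" point is really a significand paired with an exponent, much like scientific notation. A brief Python illustration (an assumption, not tied to any language named in the text):

```python
# Integers and floating-point numbers are separate types.
print(type(1), type(1.5))     # <class 'int'> <class 'float'>

# The same 8-byte float format covers very different magnitudes
# by shifting ("floating") the point, i.e. changing the exponent.
print(1.5e-10)                # 1.5e-10
print(1.5e+10)                # 15000000000.0

# float.hex() exposes the significand-and-exponent form directly.
print((33.67).hex())          # e.g. 0x1.0d5c28f5c28f6p+5
```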
Most programming languages have further categories...