Bit: the smallest unit of data in computing, representing either a 0 or a 1.

The bit is to computer communications what the atom is to physics and chemistry: the foundation upon which all other communication is built.

In order to build the universe, protons, neutrons, and electrons had to come together in specific combinations to form atoms of the different elements, each combination corresponding to specific traits.

Bits are not so different: each bit is either a 1 or a 0, and, like a single atom, a bit cannot do much on its own. However, when combined with other bits they can form packets of information that define all we do on computers.

These “electronic elements” are bytes: groups of 8 bits clumped together to represent a specific piece of information. Bytes, denoted by an upper case “B,” are generally used to measure storage, while bits, denoted by a lower case “b,” are generally used to measure transfer rates. Data transfer rates are measured in bits because bits are the lowest common denominator and the easiest to follow.
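As a minimal sketch of the byte-as-8-bits idea, here is a Python snippet (the character 'A' is just an illustrative choice) showing that one byte's worth of bits encodes a single character:

```python
# One byte is a group of 8 bits. The character 'A' has code point 65,
# which fits in a single byte as the bit pattern 01000001.
value = ord("A")            # 65
bits = format(value, "08b")  # zero-pad to a full byte's 8 bits
print(bits)       # → 01000001
print(len(bits))  # → 8, the number of bits in one byte
```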

For example, a connection that can transfer 10,000 bits per second is easily written down as 10 kbps, instead of being expressed as 1,250 bytes per second or some other more cumbersome figure. For this same reason, processor architectures are labeled in bits, i.e. 32- or 64-bit processors.

It is important to make the distinction between bits and bytes; otherwise, the idea you are conveying can be totally misconstrued. For example, if you describe your processor as 8 bytes, or state your 240 GB hard drive’s capacity in bits, there will be confusion.
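To see how large the confusion can get, here is a quick sketch converting the 240 GB drive from the example into bits, assuming the decimal gigabyte (10^9 bytes) that drive manufacturers use on their labels:

```python
GB = 10**9  # decimal gigabyte, as printed on drive packaging

capacity_bytes = 240 * GB
capacity_bits = capacity_bytes * 8  # 8 bits per byte

print(capacity_bits)  # → 1920000000000, i.e. 1.92 trillion bits
```

The same drive described in bits sounds eight times larger, which is exactly the kind of misreading the bit/byte distinction prevents.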
