Number Systems

Read time: 13 minutes (3292 words)

See also

Text, section 1.3

When we work at the level of the machine, everything is bits!

  • Numbers
  • Characters
  • Phases of the moon!
The machine cannot distinguish any of this
  • It is up to the program to make sense of the bits

What is a number?

Silly question!

  • Just a series of digits

OK, what is a digit?

  • A symbol chosen to represent a number

(BRAAAAP - recursive definition. Illegal operation!)

Counting things

We started counting things on our fingers
  • How many do we have (fingers, that is)?
We found out that we ran out of fingers after 10 items
  • How can we count higher?
  • Toes, maybe? (Nope, shoes mess that up!)
Try this
  • Count to 5 on one hand
  • Every time you hit 5, count by one on the other hand
  • You can get to 30 doing that (5 sets of five, plus 5)

Symbols for counting

Humans introduced the notion of writing down a set of symbols in columns to represent a quantity of things. When we counted on our hands just now, we did exactly this:

  • The right column (hand) is the lowest order count (units)
  • The left column (hand) represents groups of 5 counts

As the number of things to count grows, we cannot use fingers. Instead, let's write down the symbols in right-to-left order. The number of distinct symbols we allow is the base of the system. For some strange reason, humans fell in love with 10 symbols (0 through 9).

What do the columns mean?

Each column, from right to left represents a specific count.

Formally, each column is a count equal to the base raised to some power
  • Right-most column = base ^ zero power (1’s column)
  • Second column = base ^ first power (10’s column)
  • Third column = base ^ second power (100’s column)
  • and so on as far as we need

We form a number by writing down a set of symbols. Each symbol is multiplied by the column value and added up to get the final result.
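This column rule can be sketched in a few lines of Python (a minimal illustration; the function name `digits_to_value` is our own, not anything standard):

```python
def digits_to_value(digits, base):
    """Sum each digit times its column value (base ** position),
    working from the right-most column (position 0) outward."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * base ** position
    return total

# 123 in base 10: 3*1 + 2*10 + 1*100
print(digits_to_value([1, 2, 3], 10))  # 123
```

The same function works for any base, which is exactly the point of positional notation.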

A simple example

123 is really
  • 3 * 10 ^ 0 = 3
  • plus 2 * 10 ^ 1 = 20
  • plus 1 * 10 ^ 2 = 100
  • = 123 (hopefully!)
Is there any reason why we must use 10 symbols?
  • Early computer designers tried to use human symbols
  • It was hard to design circuits to do this

Binary numbers

So, we moved to binary!
  • Only two symbols
  • Circuits are easy to design

Same rules apply, only now we only have two symbols (0 and 1)

Counting works as usual

Binary column values

Base is now 2, so column values are
  • Right-most column = base ^ zero power (1’s column)
  • Second column = base ^ first power (2’s column)
  • Third column = base ^ second power (4’s column)
  • and so on as far as we need

Another example

1011 is really
  • 1 * 2 ^ 0 = 1
  • plus 1 * 2 ^ 1 = 2
  • plus 0 * 2 ^ 2 = 0
  • plus 1 * 2 ^ 3 = 8
  • = 11
So 1011 in binary is the same as 11 in decimal!
  • Notice, we work from right to left
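We can check this sum directly in Python, which happens to accept binary literals written with a `0b` prefix:

```python
# 0b1011 is Python's notation for the binary number 1011b
print(0b1011)  # 11

# the same sum, column by column, right to left
print(1 * 2**0 + 1 * 2**1 + 0 * 2**2 + 1 * 2**3)  # 11
```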

Defining the base of the system

Looking at the two numbers (1011) and (11) we have a problem
  • We have no way to know what the base is
  • We use a suffix notation to solve the problem
  • 1011b is binary
  • 11 or 11d is decimal
Numbers always start with a digit symbol
  • even in binary

What is the biggest number we can form?

Simple: one less than the column value of the next (unused) column

In decimal (for 3 columns)
  • 999 = 1000 - 1
  • That is really 1000 numbers (including 0)
In binary this gives (for 8 columns)
  • 255 = 256 - 1
  • (256 = 2 ^ 8)
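A quick check of the "one less than the next column" rule, for a few binary column counts (a small sketch):

```python
# biggest value that fits in n binary columns = 2**n - 1
for columns in (3, 8, 16):
    print(columns, "columns ->", 2**columns - 1)
# 3 columns -> 7, 8 columns -> 255, 16 columns -> 65535
```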

Converting a binary number to decimal

We just did this! 11101b is really
  • 1 * 1
  • plus 0 * 2
  • plus 1 * 4
  • plus 1 * 8
  • plus 1 * 16
  • = 29
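The same right-to-left sum, written as a loop over the digits of a binary string (the function name `binary_to_decimal` is ours; Python's built-in `int(s, 2)` does the same job):

```python
def binary_to_decimal(bits):
    """Convert a string of binary digits to a decimal int by
    multiplying each digit by its column value and summing."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("11101"))  # 29
print(int("11101", 2))             # 29, the built-in way
```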

Converting from decimal to binary

Rules are simple:

  • set result = number
  • repeat
    • divide result by the base - record the new result and the remainder
    • write the remainder in the next column (filling from right to left)
  • until result is zero
    • (the last remainder recorded becomes the leftmost digit)

Example conversion

Example 29d is really
  • 29 / 2 = 14 plus 1
  • 14 / 2 = 7 plus 0
  • 7 / 2 = 3 plus 1
  • 3 / 2 = 1 plus 1
  • 1 / 2 = 0 plus 1
  • = 11101b

(Be sure to check your work by converting back!)
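The repeated-division rules can be sketched as a small Python function (the name `decimal_to_binary` is ours), including the check-by-converting-back step:

```python
def decimal_to_binary(number, base=2):
    """Repeatedly divide by the base, collecting remainders
    from right to left, until the quotient reaches zero."""
    if number == 0:
        return "0"
    digits = ""
    while number > 0:
        number, remainder = divmod(number, base)
        digits = str(remainder) + digits  # remainders fill right to left
    return digits

print(decimal_to_binary(29))           # 11101
# check the work by converting back
print(int(decimal_to_binary(29), 2))   # 29
```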

Binary numbers are equivalent to decimal numbers

This idea is fundamental to the business of writing computer programs. Just because things inside the computer are in a different form does not mean they are really different.

Here is a set of A’s

  • A A A A A A A A A
How many are there?
  • In decimal, there are 9 A’s
  • In binary, there are 1001b A’s
In each number system, we are referring to the size of the same set of A’s!
  • Any human (decimal) number can be represented in binary in the machine!

It is all zeros and ones

Everything in the computer is just 0’s and 1’s. We write programs that are smart enough to know the right rules to apply. In our next lecture, we will explore encoding things from our world into those 0’s and 1’s. (We just showed part of this in looking at the difference between decimal and binary numbers!)