Introduction

Natural numbers are the positive integers, i.e. 1, 2, 3, 4 and so on, excluding fractions and negative numbers. Sometimes zero is included and sometimes not; I'll include zero in this document. The natural numbers are the most basic kind of number, and all the others are essentially built up from them. For example, an integer is a difference of two natural numbers: the integer x = a − b, where a and b are natural numbers. And a rational number is a ratio of two integers.

When you perform operations like addition and multiplication on natural numbers, certain patterns appear: for example, it seems that a · b = b · a*. In other words, it doesn't matter in what order you multiply two numbers together. You can try as many pairs as you like and you will never find two natural numbers for which this fails. But we can't test every natural number, since there are infinitely many of them. How can we be sure the rule won't stop being true for some very large natural number? What we have to do is find a mathematical proof of the statement.

* The · symbol is just another symbol for multiplication, like ×; I just think it looks a bit neater.
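Before reaching for a proof, it's worth seeing what this kind of finite checking looks like. Here is a quick Python sketch (my own illustration, not part of the argument) that tests commutativity for every pair of numbers below 200. It produces evidence, never certainty:

```python
# Finite spot-check of a · b = b · a. This is evidence, not a proof:
# however large the range, infinitely many pairs remain untested.
for a in range(200):
    for b in range(200):
        assert a * b == b * a
```

No matter how high we push the bound, the loop can never rule out a counterexample beyond it, which is exactly why a proof is needed.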

It may seem very hard to find a way to prove something as basic as this (which is saying that multiplication is commutative, "commutative" just meaning you can switch the operands around and still get the same result). The important thing to be aware of is that you always have to start with certain premises. A premise is simply a statement which you already know is true. Without premises, you can't logically prove anything—you have nothing to build your statements on. On the other hand, when you have found some premises it's often quite obvious how to get from them to the proof you want.

As an example, forget about commutativity of multiplication for a moment and consider the trigonometric identity

tan² x + 1 = sec² x

To prove this, we take the premise that sin² x + cos² x = 1, and also that if a = b, then a / c = b / c for any numbers a, b and c where c ≠ 0. Then we can divide both sides of the equation that is our first premise by cos² x to get

sin² x / cos² x + cos² x / cos² x = 1 / cos² x

Note that we have also tacitly included a third premise here, that (a + b) / c = a / c + b / c. You have to be very careful about what premises you use when making proofs. If you assume a premise is true without it being proved, then you have to prove that premise to complete the proof. We can finish the proof using another couple of premises: that a² / b² = (a / b)² and a / a = 1. Then we can simplify the equation to

(sin x / cos x)² + 1 = (1 / cos x)²

By definition, tan x = sin x / cos x and sec x = 1 / cos x, so this proves the initial identity. But we did have to use no fewer than five premises, many of which would be rather complex to prove.
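The identity can also be spot-checked numerically. This is a sketch using Python's math module, with sec x computed as 1 / cos x since the module has no sec function:

```python
import math

# Check tan² x + 1 = sec² x at a few sample angles,
# avoiding odd multiples of pi/2 where cos x = 0.
for x in [0.1, 0.7, 1.2, 2.5, -0.9]:
    lhs = math.tan(x) ** 2 + 1
    rhs = 1 / math.cos(x) ** 2  # sec x = 1 / cos x
    assert math.isclose(lhs, rhs)
```

Like all finite checking, this only confirms the handful of cases tested; the algebraic proof is what covers every angle.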

Proving something always requires some premises to be known in advance, but then those premises have to be proved too. This means that at some level, you always have to just take some premises as a given and not bother proving them. This may seem a bit weird, but it's impossible to do logic without assuming some of your premises. Those premises which you assume are called axioms.

Generally we want to use as few axioms as possible, because that makes a proof seem more complete. We could take the five premises used in the proof of the trigonometric identity as axioms, but some of them could probably be derived from others, or from the definitions of the various operations.

Axioms for Natural Numbers

The Peano axioms, developed by the Italian mathematician Giuseppe Peano, are a set of axioms which allow you to prove nearly any statement about natural numbers. The first axiom says that 0 is a natural number. This is really more like a definition than an axiom—we're saying that one natural number is called 0. All the other natural numbers will be derived from this 0.

  1. 0 is a natural number.

The next four Peano axioms are basic facts of logic. Often, mathematicians don't bother stating axioms like these explicitly, considering them as part of the "background logic". However, they are included in the Peano axioms. Their point is to define the concept of equality. First, we say that for any natural x, x = x. That seems really obvious, but in mathematics and logic, obviousness is not a proof. It's not true for every relation; for example it's not true that x < x. Nonetheless, this property, known as the reflexive property, isn't enough to define equality. There are other relations between numbers that have the same property: for example x ≤ x.

  2. For all natural x, x = x.

Second, we say that if y = x then x = y; the items in an equality can be flipped around. This means equality is symmetric. This distinguishes the equality relation from the inequalities. Also, if x = y and y = z, then x = z: equality is transitive. Lastly, we say that if x is a natural number and x = y, then y is a natural number.

  3. For all natural x and y, y = x ⇒ x = y.
  4. For all natural x, y and z, (x = y and y = z) ⇒ x = z.

Now, we say that every natural number has a successor. If n is a natural number, its successor is written S(n). This successor is also a natural number.

  5. For all natural x, S(x) is a natural number.

So, the natural numbers also contain the successor of 0, which we could call 1. But how do we know that 0 ≠ 1? None of our axioms so far disallows that. To prove that 1 is not 0 we'll have to add another axiom. We simply say that no matter what x is, S(x) is definitely not 0. This is equivalent to saying that zero is the first natural number.

  6. For all natural x, S(x) ≠ 0.

So 1 is definitely a distinct natural number. But is S(1) distinct from 1? Well, obviously it can't be zero, but if it was 1 that wouldn't violate any of our axioms. So what we do is say that the function giving you the successor of a natural number is a one-to-one function—different inputs always give you different outputs. In other words,

  7. For all natural x and y, x = y ⇔ S(x) = S(y).

We can now prove that S(1), which we will give the symbol 2, is distinct from 1. For if it were the case that S(1) = 1, then S(1) = S(0) (because 1 = S(0) and equality is transitive), and therefore, by axiom 7, 1 = 0. That's a contradiction, so it is NOT the case that S(1) = 1. We could prove that 3 is distinct from 2 in a very similar way, and so on for every successive natural number.

However, some things about natural numbers are still a bit difficult to prove. For example, how can we be sure that 0 and its successors are all of the natural numbers? To prove this we need to add one last axiom: the most complex and also the most powerful one. This is known as the axiom of induction, and it says that if you have a set of numbers containing 0, and every natural number in the set has its successor in the set too, then this set contains all the natural numbers.

  8. If a set K contains 0, and for all natural x in K, S(x) is also in K, then K contains every natural number.

So the set consisting of zero and its successors is the set of all natural numbers. The induction axiom is also key to proving many of the less obvious facts about natural numbers, such as the commutativity of multiplication. Before we do that, we have to define addition and multiplication in terms of these axioms. The definitions are shown in the table below, where x and n represent any two natural numbers.

  Addition              Multiplication
  x + 0 = x             x · 0 = 0
  x + S(n) = S(x + n)   x · S(n) = x · n + x

These definitions are recursive—they refer to themselves—but they are perfectly valid definitions as long as we specify the answer when adding or multiplying zero. For example, by the definition of addition x + 1 = x + S(0) = S(x + 0). Using the definition of addition again, we see that x + 1 = S(x).
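These recursive definitions can be transcribed almost verbatim into code. The following Python sketch is a toy encoding of my own devising: a natural number is a tuple nested as many times as its value, and `from_int` / `to_int` are made-up helper names for the illustration:

```python
ZERO = ()

def S(n):
    """Successor: wrap the number in one more layer of tuple."""
    return (n,)

def add(x, n):
    # x + 0 = x;  x + S(n) = S(x + n)
    return x if n == ZERO else S(add(x, n[0]))

def mul(x, n):
    # x · 0 = 0;  x · S(n) = x · n + x
    return ZERO if n == ZERO else add(mul(x, n[0]), x)

def from_int(k):
    """Build the Peano numeral for a Python int (testing helper)."""
    return ZERO if k == 0 else S(from_int(k - 1))

def to_int(n):
    """Count the layers to recover a Python int."""
    return 0 if n == ZERO else 1 + to_int(n[0])

one = S(ZERO)
assert to_int(add(from_int(2), from_int(3))) == 5
assert to_int(mul(from_int(2), from_int(3))) == 6
# x + 1 = S(x), exactly as derived from the definition of addition:
x = from_int(4)
assert add(x, one) == S(x)
```

Notice that `add` and `mul` only ever inspect whether their second argument is zero or a successor, mirroring how the recursive definitions are stated.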

We can now use the induction axiom to prove an interesting fact about addition. We already know from the definition that x + S(n) = S(x + n), which we could also write as x + (n + 1) = (x + n) + 1. We can prove that this identity holds with any natural number in place of 1, which means addition is an associative operation.

First, suppose c is a natural number such that a + (b + c) = (a + b) + c for all natural a and b. We know this holds when c is 1, but we'll keep c general for now, for reasons that will become clear. We can then see that

a + (b + S(c)) = a + S(b + c)

using the definition of addition. We can use the definition again to get

a + (b + S(c)) = S(a + (b + c))

Now, we know that a + (b + c) = (a + b) + c—we said this had to be a property of c. So we can rewrite this as

a + (b + S(c)) = S((a + b) + c)

and by using the definition in reverse we get

a + (b + S(c)) = (a + b) + S(c)

What we have proved here is that if addition is associative for some c, it is also associative for c's successor. Or to put it another way: given a and b, if c is in the set of all natural numbers for which a + (b + c) = (a + b) + c is true, then so is the successor of c. This is one of the requirements for using the axiom of induction. All we need to do now is show that this set contains 0. And

a + (b + 0) = a + b = (a + b) + 0

Therefore, this set contains all the natural numbers. In other words, addition is associative for any natural number in c's position. That's how you do a proof by induction.
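The induction argument establishes associativity for every natural number at once. As a sanity check on the reasoning (nothing more), the identity can be verified structurally for small numbers using a toy tuple encoding of the Peano naturals; the encoding and the `from_int` helper are my own illustration:

```python
ZERO = ()

def S(n):
    return (n,)

def add(x, n):
    # x + 0 = x;  x + S(n) = S(x + n)
    return x if n == ZERO else S(add(x, n[0]))

def from_int(k):
    return ZERO if k == 0 else S(from_int(k - 1))

nums = [from_int(k) for k in range(8)]
# a + (b + c) = (a + b) + c, checked structurally for all small triples
assert all(add(a, add(b, c)) == add(add(a, b), c)
           for a in nums for b in nums for c in nums)
```

The comparison here is structural equality of the nested tuples, so nothing about ordinary integer arithmetic is being assumed.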

In a very similar way, we can prove that multiplication is distributive over addition. That means a · (b + c) = a · b + a · c. If you look at the definition of multiplication, we have x · S(n) = x · n + x. This could also be written as x · (n + 1) = x · n + x. Now, you already knew that x · 1 = x but we haven't actually proved it yet. Nevertheless, we now have a suspicion that the definition of multiplication is just using a special case of the distributive law, and it should be easy to prove the distributive law from there.

As before we suppose there is a c such that a · (b + c) = a · b + a · c. Then, using its successor:

a · (b + S(c)) = a · S(b + c) = a · (b + c) + a

We've used the definitions of both operations here to get a · (b + c) into the equation, which we know is equal to a · b + a · c. So

a · (b + S(c)) = (a · b + a · c) + a

Now we can use the associativity of addition to say

a · (b + S(c)) = a · b + (a · c + a) = a · b + a · S(c)

and we have proved that if multiplication is distributive over addition for any c, it is also distributive over addition for c's successor. Now we just need to prove that it works when c = 0, and it does:

a · (b + 0) = a · b = a · b + 0 = a · b + a · 0

So multiplication is distributive over addition, although we can only be sure of this when the multiplier is on the left (since we haven't proved the commutativity of multiplication yet). From this a proof of the associativity of multiplication follows quite easily. Suppose c is a natural number such that a · (b · c) = (a · b) · c; then

a · (b · S(c)) = a · (b · c + b) = a · (b · c) + a · b

Then we can say

a · (b · S(c)) = (a · b) · c + a · b = (a · b) · (c + 1) = (a · b) · S(c)

And to complete the proof:

a · (b · 0) = a · 0 = 0 = (a · b) · 0
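Both facts just proved, left distributivity and the associativity of multiplication, can be spot-checked over ordinary Python integers, which obey the same definitions. As before, this is evidence for small cases only; the induction proofs are what cover all naturals:

```python
# Spot-check a · (b + c) = a · b + a · c and a · (b · c) = (a · b) · c
# for all small triples; evidence only, the proofs do the real work.
N = 20
for a in range(N):
    for b in range(N):
        for c in range(N):
            assert a * (b + c) == a * b + a * c  # left distributivity
            assert a * (b * c) == (a * b) * c    # associativity of ·
```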

Proving commutativity is a little more cumbersome. We have to do it in three parts. First, we prove that both addition and multiplication are commutative with 0. Even this has to be done via induction. Suppose a is a natural number for which a + 0 = 0 + a, and a · 0 = 0 · a. Then

0 + S(a) = S(0 + a) = S(a + 0) = S(a) = S(a) + 0

0 · S(a) = 0 · a + 0 = a · 0 + 0 = 0 + 0 = 0 = S(a) · 0

If a = 0, these properties are obviously true (since 0 and a are then the same), so by the induction axiom a + 0 = 0 + a and a · 0 = 0 · a for all natural a.

To prove the commutativity of addition we must also prove it for the case with b = 1. So, suppose there is a natural a such that a + 1 = 1 + a. Then

S(a) + 1 = S(a) + S(0) = S(S(a) + 0) = S(S(a)) = S(a + 1) = S(1 + a) = 1 + S(a)

We can also see that 0 + 1 = 0 + S(0) = S(0 + 0) = S(0) = 1 = 1 + 0. This proves that a + 1 = 1 + a for all natural a by induction.

Now we can finally get to the full proof. We already know that if b = 0, then a + b = b + a is true. Now, using the successor of b:

a + S(b) = S(a + b) = S(b + a) = (b + a) + 1 = b + (a + 1)

You can now see why we had to prove that a + 1 = 1 + a. Making that substitution, we get

a + S(b) = b + (1 + a) = (b + 1) + a = S(b) + a

So addition is commutative for the successor of b; since we know that addition is commutative for b = 0, this proves commutativity of addition for all natural numbers.

For multiplication we will use a slightly different tactic. We're going to prove that multiplication can distribute over addition from the right—i.e. (a + b) · c = a · c + b · c. This is quite easily proved now that we know addition is commutative. Suppose it is true for some c, then

(a + b) · S(c) = (a + b) · c + (a + b)
               = (a · c + b · c) + (a + b)
               = (a · c + b · c) + (b + a)
               = ((a · c + b · c) + b) + a
               = (a · c + (b · c + b)) + a
               = (a · c + b · S(c)) + a
               = (b · S(c) + a · c) + a
               = b · S(c) + (a · c + a)
               = b · S(c) + a · S(c)
               = a · S(c) + b · S(c)

It took quite a bit of rearrangement to get there, but now we can see that it is also true for c's successor. And 0 can be c, since

(a + b) · 0 = 0 = 0 + 0 = a · 0 + b · 0

So multiplication distributes over addition from the right. Now we can at last prove that multiplication is commutative, i.e. a · b = b · a. Suppose there is a b for which this is true; then

a · S(b) = a · b + a = b · a + a = (b + 1) · a = S(b) · a

In the step from b · a + a to (b + 1) · a we used right distributivity together with the fact that 1 · a = a, which has a quick induction proof of its own: 1 · 0 = 0, and if 1 · a = a then 1 · S(a) = 1 · a + 1 = a + 1 = S(a). Since we already showed that multiplication is commutative when b = 0, this proves that multiplication is commutative for all natural b. It's not a particularly nice or easily memorable proof, but there you go. We have now also proved all of the other essential properties of addition and multiplication.
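As a final sanity pass, the commutativity of addition and multiplication, along with right distributivity, can be spot-checked over small integers (an illustration only; the induction proofs above are what establish these for every natural number):

```python
# Final sanity pass over small values.
N = 20
for a in range(N):
    for b in range(N):
        assert a + b == b + a  # commutativity of +
        assert a * b == b * a  # commutativity of ·
        for c in range(N):
            assert (a + b) * c == a * c + b * c  # right distributivity
```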

If you enjoyed reading this document, why don't you try defining exponentiation in a similar way, and proving the exponent rules? They're listed below:

(a · b)^c = a^c · b^c

a^(b + c) = a^b · a^c

(a^b)^c = a^(b · c)
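To get started on the exercise, one plausible recursive definition of exponentiation, mirroring how addition and multiplication were defined, is x^0 = S(0) and x^S(n) = x^n · x. That definition is my own suggestion, not something proved here. Using a toy tuple encoding of the Peano naturals (also my own illustration), the three rules can be checked structurally for small values:

```python
ZERO = ()

def S(n):
    return (n,)

ONE = S(ZERO)

def add(x, n):
    # x + 0 = x;  x + S(n) = S(x + n)
    return x if n == ZERO else S(add(x, n[0]))

def mul(x, n):
    # x · 0 = 0;  x · S(n) = x · n + x
    return ZERO if n == ZERO else add(mul(x, n[0]), x)

def pow_(x, n):
    # x^0 = 1;  x^S(n) = x^n · x   (a suggested definition, not from the text)
    return ONE if n == ZERO else mul(pow_(x, n[0]), x)

def from_int(k):
    return ZERO if k == 0 else S(from_int(k - 1))

nums = [from_int(k) for k in range(3)]
for a in nums:
    for b in nums:
        for c in nums:
            assert pow_(mul(a, b), c) == mul(pow_(a, c), pow_(b, c))  # (a·b)^c = a^c · b^c
            assert pow_(a, add(b, c)) == mul(pow_(a, b), pow_(a, c))  # a^(b+c) = a^b · a^c
            assert pow_(pow_(a, b), c) == pow_(a, mul(b, c))          # (a^b)^c = a^(b·c)
```

The range is kept tiny because the nested-tuple numerals grow with the magnitude of the numbers involved; a proper proof of the rules would, of course, proceed by induction just as in the text.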