Administrative info
  HW2 out! Due Tuesday, 4:59pm
  PAs aren't meant to be long or hard

  Direct proof, proof by contrapositive, proof by contradiction

Logical chains of deduction
  When proving a claim ∀ x . P(x) => Q(x), what do we get to
  assume? Do we assume that the whole statement is true? We can't do
  that, since then we would be assuming what we are trying to prove!
  What we actually do is pick an arbitrary x and assume that P(x)
  holds, since if it doesn't, the implication is vacuously true. Then
  we write out a chain of deduction:
    P(x)
    P1(x)
    P2(x)
    ...
    Q(x)
  Implicitly, what we are saying is that P1(x) follows from P(x),
  P2(x) follows from P1(x), and so on, until we get to Q(x). In terms
  of implications, what we mean is
    P(x)
    => P1(x)
    => P2(x)
    => ...
    => Q(x)
  So we have a chain of implications that leads from P(x) to Q(x).
  Since we made no assumptions about x, we can plug in any x into our
  reasoning and conclude that P(x) => Q(x).
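
  As a small illustration (the claim and predicates below are made up,
  not from the notes), a finite check in Python can catch a broken link
  in such a chain, even though it is of course not a proof:

```python
# Claim (made up for illustration): x divisible by 6 => x divisible by 3.
# Chain: P(x) => P1(x) => Q(x).
def P(x):  return x % 6 == 0                   # hypothesis
def P1(x): return x % 2 == 0 and x % 3 == 0    # intermediate deduction
def Q(x):  return x % 3 == 0                   # conclusion

# Every implication link in the chain must hold for every x we try.
for x in range(1000):
    assert (not P(x)) or P1(x)    # P(x) => P1(x)
    assert (not P1(x)) or Q(x)    # P1(x) => Q(x)
```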

  In a proof by contradiction, we start by assuming that what we are
  trying to prove is false. Then we derive a contradiction. In
  particular, we provide a chain
    ¬P
    => P1
    => P2
    => ...
    => R
  for some proposition R and another chain
    ¬P
    => Q1
    => Q2
    => ...
    => ¬R
  Putting these two together, we get
    ¬P => (R ∧ ¬R)
  What is the value of the RHS? It must always be false.  What is
  the value of the LHS? Since the RHS is false, the only way the
  implication holds is if the LHS is false. Thus, ¬P is false,
  so P must be true. This is why proof by contradiction works.

Warning! Proof by contradiction is powerful, but it's dangerous.  We
start with an assumption (that is actually false), and we derive many
deductions from it (which are also all false), until eventually we
reach something we can demonstrate contradicts our assumption or is
simply false. The problem is that it's easy to make a simple mistake
of reasoning; if your chain of reasoning doesn't follow, you can
easily reach a falsehood. The contradiction might come from a mistake
in the middle rather than from the false assumption at the start. This
can be hard to notice, since you're deliberately deriving all sorts of
false statements, and you can't use your intuition to recognize the
first claim that's false. It's best to prove as much as possible
outside the proof by contradiction, using lemmas that actually are true.

Another example
  Theorem: There are infinitely many primes.

  How can we prove this? We don't know of any formula for generating
  primes. So in desperation, we resort to proof by contradiction. It's
  dangerous, but when you're desperate...

  Proof: Assume (for a contradiction) that there are finitely many
    primes p1,...,pk. Let a = p1 * p2 * ... * pk + 1. Notice that none
    of the p's divide a, since a % p_i = 1. Thus, a has no prime
    factor, since none of the p's are a factor of a. Thus, a must be
    prime, which is a contradiction since it's not one of the p's.

  We used the lemma below in our proof. We will see how to prove it
  later.

  Lemma: Every natural number n>1 is either prime or has a prime
    factor.

  Q: In our proof, we concluded that a must be prime. Is it true that
    the product of the first k primes, plus 1, must be prime?
    2 * 3 * 5 * 7 * 11 * 13 + 1 = 30031 = 59 * 509
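
  We can check this computation directly (an illustration only): a is
  never divisible by any of the p's used to build it, yet it need not
  itself be prime:

```python
# The construction from the proof: a = p1 * ... * pk + 1.
from math import prod

primes = [2, 3, 5, 7, 11, 13]           # the first six primes
a = prod(primes) + 1                    # a = 30031
assert all(a % p == 1 for p in primes)  # none of the p's divide a
assert a == 59 * 509                    # ...but a is not prime
```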

Proof by cases
  Sometimes we're not sure which of a set of possible cases is true,
  but we know at least one of them is. If we can prove our claim holds
  in each of the cases, then that suffices as a proof of the claim.

  Theorem: There exist irrational x and y such that x^y is rational.
  Proof: Consider x = y = sqrt(2). Then either sqrt(2)^sqrt(2) is
    rational or irrational, though we don't know which is the case.
    Case 1: sqrt(2)^sqrt(2) is rational. Then we are done.
    Case 2: sqrt(2)^sqrt(2) is irrational.
      Consider new values x = sqrt(2)^sqrt(2), y = sqrt(2). Then x^y =
      (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^(sqrt(2) sqrt(2)) =
      sqrt(2)^2 = 2. Thus, we have shown irrational x, y such that x^y
      is rational.
    Since one of the above cases must be true, and since we have shown
    that in either case there are irrational x,y such that x^y is
    rational, we can conclude that such x,y always exist.
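
  A floating-point check of the Case 2 computation (numerics only, not
  a proof):

```python
import math

s = math.sqrt(2)
x = s ** s    # sqrt(2)^sqrt(2): rational or irrational, we don't know which
# x^sqrt(2) = sqrt(2)^(sqrt(2) * sqrt(2)) = sqrt(2)^2 = 2
assert abs(x ** s - 2) < 1e-9
```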


Suppose we are given a difficult statement to prove of the form
∀n∈N . P(n). How can we go about solving it?
  (1) We could try writing down separate proofs for P(0), P(1), ...,
      but we'd never finish.
  (2) We could write down one proof for arbitrary n, like we did in
      direct proofs. But this might be too hard. How do we prove that
      an arbitrary n is a prime or product of primes?
  (3) We could try proof by contradiction, but not only is it
      dangerous, it may not help.

  Sometimes, we are faced with a statement that is really hard to
  prove using any of the techniques we've seen so far. We need a new
  proof technique.

  Suppose instead we are writing a program to compute something that
  is difficult to do so directly. What do we do? We turn to recursion!
  Recursion, at its very core, is to solve a big problem by breaking
  it into one or more smaller problems. We solve those smaller
  problems, possibly further breaking them down in the process. Then
  we put those smaller problems back together in order to solve the
  bigger problem.

  Here is a real-life example. Suppose I had a heavy box of books that
  I wanted to lift up onto a high shelf. Unfortunately, due to too
  much time spent typing and not enough time lifting, the box is too
  heavy. What do I do? Well, I can take out one book and try again. If
  it's still too heavy, I can repeat the process until it's light
  enough for me to lift. Eventually I have the box on the shelf, minus
  the books I took out. Then I put those books back into the box to
  finish my task.

  What elements does the above recursive procedure have? It has a base
  case, i.e. the box is light enough for me to lift. It also has a
  recursive step, i.e. take a book out, try again with the lighter
  box, put the book back in.
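
  The procedure above, written as a recursive sketch in Python (the
  capacity and the book representation are made up for illustration):

```python
CAPACITY = 10  # the heaviest box I can lift, in "book units"

def put_on_shelf(books):
    """Lift a box of books onto the shelf; return the shelved books."""
    if len(books) <= CAPACITY:           # base case: light enough to lift
        return list(books)
    top = books[-1]                      # recursive step: take a book out,
    shelved = put_on_shelf(books[:-1])   # lift the lighter box,
    shelved.append(top)                  # then put the book back in
    return shelved

assert put_on_shelf(list(range(25))) == list(range(25))
```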

  Again, the key is to break a hard problem into easier ones. It turns
  out, we can follow the same procedure in proofs.

  Now back to that hard to prove statement ∀n∈N . P(n).
  Using recursion as an inspiration, how can we solve it?

  We need two pieces as in recursion, a base case and a "recursive
  step" that breaks a big problem into a smaller problem. The base
  case is easy: let's just use the smallest n∈N, i.e. prove P(0).

  Then for the "recursive step," we are faced with proving P(k+1). We
  break it into a smaller problem of P(k) and then show that given a
  proof of P(k), we can turn it into a proof of P(k+1). Logically, we
  show P(k) => P(k+1). But we have to be careful about what we are
  showing; we can't just show it for a particular k. No, we need to
  show that our reasoning holds no matter what k is. So in reality
  what we need to show is ∀k∈N . P(k) => P(k+1).

  This proof process is called "induction," and the "recursive step"
  is actually called the "inductive step." To summarize, we need to
  prove two facts:
  (1) P(0) [Base case]
  (2) ∀k∈N . P(k) => P(k+1) [Inductive step]
  Is this any easier? The base case might be easy, but we still have a
  universally quantified statement to prove. However, note that the
  predicate in that statement is in the form of an implication, so we
  know how to prove it: by a direct proof! In particular, we pick an
  arbitrary k, assume that P(k) holds, and then show that P(k+1)
  follows. The assumption that P(k) holds is called the "inductive
  hypothesis."

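  The two proof obligations can be sanity-checked mechanically on a
  finite range. A sketch (the helper name is made up; a finite check is
  not a proof):

```python
def check_induction(P, up_to=100):
    """Check the base case and the inductive step for k in [0, up_to)."""
    assert P(0)                  # (1) base case
    for k in range(up_to):
        if P(k):
            assert P(k + 1)      # (2) inductive step: P(k) => P(k+1)
    return True

# Example predicate: n^3 - n is divisible by 3.
assert check_induction(lambda n: (n**3 - n) % 3 == 0)
```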
  Let's look at some examples of how to actually carry out this
  process.

  Theorem: ∀n∈N . n^3-n is divisible by 3.
  We already saw a direct proof for this, but we can use induction as
  well:
    Base case: P(0)
      0^3 - 0 = 0 which is divisible by 3.
    Inductive hypothesis: Assume P(k), i.e. k^3-k is divisible by 3.
    Inductive step: We must show that P(k+1) follows, i.e.
      (k+1)^3-(k+1) is divisible by 3.
      [What we need to do here is break (k+1)^3-(k+1) down in terms of
       k^3-k (like the procedure we follow in a recursive program) so
       we can apply our inductive hypothesis (like calling the
       function recursively in a program).]
        (k+1)^3 - (k+1)
          = k^3 + 3k^2 + 3k + 1 - k - 1
          = k^3 + 3k^2 + 2k
          = k^3 + 3k^2 + 3k - k
          = (k^3 - k) + 3(k^2 + k).
      The first term is divisible by 3 by the inductive hypothesis and
      the second term is obviously divisible by 3. Thus their sum is
      also divisible by 3. QED.
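
  The algebraic decomposition used in the inductive step can be
  double-checked numerically (illustration only):

```python
# (k+1)^3 - (k+1) = (k^3 - k) + 3(k^2 + k) for every k we try.
for k in range(1000):
    assert (k + 1)**3 - (k + 1) == (k**3 - k) + 3 * (k**2 + k)
```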

  Theorem: ∀n∈N . n >= 1 => 1 + 2 + ... + n = n(n+1)/2.
    [Note that we could extend the claim to n = 0 as well, in which
     case the LHS would be empty and assumed to be 0 since the
     additive identity is 0.]
    Base case:
      [What do we use? We can't use P(0), since if we tried applying
      the inductive step to prove P(1), we'd be assuming a false
      statement. Instead, we use P(1) as our base case. In general, we
      use the smallest element for which the claim we are trying to
      prove holds.]
      P(1): 1 = 1(1+1)/2 = 2/2 = 1.
    Inductive hypothesis: Assume P(k), i.e. 1+...+k = k(k+1)/2.
    Inductive step: We need to prove P(k+1), i.e.
      1+...+(k+1) = (k+1)(k+2)/2.
      [Be careful not to assume that this is true! That's what happens
      if we start from this equation and go from there. What we get to
      assume is P(k), not P(k+1), and we need to show P(k) => P(k+1),
      not the converse!]
      Then 1 + ... + (k+1)
        = (1 + ... + k) + (k+1)
        = k(k+1)/2 + (k+1), by inductive hypothesis
        = k(k+1)/2 + 2(k+1)/2
        = (k(k+1) + 2(k+1))/2
        = (k+1)(k+2)/2.
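
  A quick numerical check of the closed form (the induction above is
  the actual proof):

```python
# 1 + 2 + ... + n = n(n+1)/2 for small n.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```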

  Theorem: ∀n∈N . n >= 1 => (n! <= n^n)
    Base case: P(1)
      1! = 1 <= 1^1 = 1
    Inductive hypothesis: Assume P(k), i.e. k! <= k^k.
    Inductive step: Prove P(k+1).
      (k+1)! = k! (k+1)
        <= k^k (k+1), by inductive hypothesis
        <= (k+1)^k (k+1), since k^k <= (k+1)^k
        = (k+1)^(k+1).
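
  And a quick check of this bound for small n (illustration only):

```python
from math import factorial

# n! <= n^n for small n.
for n in range(1, 50):
    assert factorial(n) <= n**n
```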