Friday, December 5, 2008

So long and good night.

Aside from all the interesting (inductions, ternary trees, and FSAs) and uninteresting (the master theorem and the proofs of equivalence of the principles of simple induction, strong induction, and well-ordering) material I learned in this course, if there is one more thing I should take away from this fantastic course CSC236, it is that I should never cram for the course slog. Unlike what someone said on his/her slog: “If anyone post large amount at one time, probability the posts will copied from somewhere else or just some crap words that make nonsense.” (pardon me for the awful grammar, it is a direct quote), what I wrote in my slog really is my original and genuine thinking about the course. I crammed for the slog partially because I am not the most diligent person in the world and partially because I do enjoy a little sense of urgency. Like I said in my first-ever slog post, written in September, I really enjoyed this course, largely due to Professor Heap’s competent teaching. This is in no way an act of ass-kissing, as finding examples of incompetent professors is as easy as finding sleeping students in their classes. I have learned a lot from this course and I look forward to future computational logic courses.

DFSA should be on Oprah's Favorite Things List.

Just to clarify, I don’t watch Oprah’s show. No, not a single episode.

I absolutely love Deterministic Finite State Automata. It is amazing to me how such a small machine can denote an infinite language. Professor Heap’s software demonstration of DFSAs further raised my interest in them.

Here is one DFSA I drew:

A DFSA that accepts strings containing the substring "1011".
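Since the drawing itself is hard to reproduce here, this is a sketch of what that DFSA might look like in Python. The state names and the transition table are my own reconstruction (state k records the longest suffix of the input read so far that is a prefix of "1011"), not the machine from Danny's demo:

```python
# Transition table: DELTA[state][symbol] -> next state.
# State 4 is the accepting sink: once "1011" has appeared, stay there.
DELTA = {
    0: {'0': 0, '1': 1},
    1: {'0': 2, '1': 1},
    2: {'0': 0, '1': 3},
    3: {'0': 2, '1': 4},
    4: {'0': 4, '1': 4},
}

def accepts(s):
    """Run the DFSA on binary string s; accept iff "1011" occurs in s."""
    state = 0
    for ch in s:
        state = DELTA[state][ch]
    return state == 4
```

Even though the machine has only five states, `accepts` happily classifies strings of any length, which is exactly the "small machine, infinite language" point.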


(F(o(r(m(a(l)))))) (L(a(n(g(u(a(g(e))))))))

All the definitions and concepts of formal languages make sense to me except for one: the Kleene star. I did not follow every example involving Kleene stars during the lecture, which was really frustrating. It is, nonetheless, not a hard concept to understand. The subtle part is that the Kleene star of any language includes the empty string; in particular, the Kleene star of the EMPTY language is the language containing only the empty string. This may seem a little bizarre at first, but a glance at the definition of the Kleene star will clear up any confusion. With the definition in hand, ((0+1)(0+1))*(0+1) denotes all binary strings of odd length, and 1*0(1*01*0)* denotes all binary strings with an odd number of zeros that end with a zero.
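As a sanity check of those two expressions, they translate almost directly into Python's `re` syntax (union `+` becomes `|`; concatenation and `*` carry over). This is a quick experiment of my own, not something from the lecture:

```python
import re

# ((0+1)(0+1))*(0+1): pairs of bits followed by one more bit -> odd length
odd_length = re.compile(r'((0|1)(0|1))*(0|1)')

# 1*0(1*01*0)*: one zero, then zeros two at a time -> an odd number of
# zeros, and every way of matching ends in a 0
odd_zeros = re.compile(r'1*0(1*01*0)*')

def denotes(pattern, s):
    """True iff the whole string s is in the language of pattern."""
    return pattern.fullmatch(s) is not None
```

Running `denotes` over a handful of strings agrees with the intended languages, which is reassuring after all that Kleene-star confusion.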

Term Test Two and two term tests.

It was hard to review for CSC236 when the first term test for MAT237 was approaching. Luckily, the second term test for CSC236 took place almost 14 hours after the brutal MAT237 one. I read through all the past slides on program correctness and the divide-and-conquer technique. It surprised me that my performance on the second term test wasn’t as bad as I expected. Lord have mercy upon my soul!

Recursive and iterative correctness of functions.

Since inductive proofs and recursive functions share the same recursive structure, it is natural to prove the recursive correctness of a function using induction techniques. Iterative correctness, however, is much harder to prove. We first have to come up with loop invariants. We then prove the invariants hold using simple induction, and the partial correctness of the function comes as a corollary. Finally, we prove that the loop terminates using the principle of well-ordering.
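To make that recipe concrete, here is a small iterative function with the loop invariant and the termination argument written as comments. It is a toy example of my own, not one from lecture:

```python
def maxnum_iter(L):
    """Return the largest number in the non-empty list of numbers L."""
    biggest = L[0]
    i = 1
    # Loop invariant: 1 <= i <= len(L) and biggest == max(L[0:i]).
    # It holds on entry (biggest == max(L[0:1])), and each iteration
    # preserves it -- proving that is the simple-induction step.
    while i < len(L):
        if L[i] > biggest:
            biggest = L[i]
        i += 1
    # Termination: len(L) - i is a natural number that strictly
    # decreases every iteration, so by well-ordering the loop exits.
    # At exit i == len(L), and the invariant gives biggest == max(L).
    return biggest
```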

Here is a problem I used to practise proving the recursive correctness of a function.

def maxnum(L):
    """Return the largest number from the non-empty list of numbers L."""
    1. if len(L) == 1: return L[0]
    2. if L[0] > maxnum(L[1:]): return L[0]
    3. return maxnum(L[1:])

Prove P(1): Since n = 1, the list L is of length 1. This satisfies the condition len(L) == 1 at line 1 of maxnum(L), so maxnum(L) returns L[0], which is the first and only number in the list. As a result, maxnum(L) returns the largest number in L whenever L has length 1. Therefore, P(1) is true.

Prove \forall n \in N\{0}, P(n) \implies P(n+1):

Assume n \in N\{0} and assume that P(n) holds. Then for any list L of length n, maxnum(L) returns the largest number in L. Let L' be any list of length n+1, meaning it contains n+1 elements. Then L'[1:] is a list of length n, so maxnum(L'[1:]) returns the largest number in L'[1:] by assumption. Now either L'[0] > maxnum(L'[1:]) or L'[0] <= maxnum(L'[1:]).

Case 1: assume L'[0] > maxnum(L'[1:]). Then the condition at line 2 is satisfied, so maxnum(L') returns L'[0]. Since L'[0] > maxnum(L'[1:]) means that L'[0] is greater than every number in L'[1:], L'[0] is the largest number in L'.

Case 2: assume L'[0] <= maxnum(L'[1:]). Then the conditions at lines 1 and 2 are both unsatisfied, so line 3 is executed and maxnum(L') returns maxnum(L'[1:]). Since L'[0] <= maxnum(L'[1:]) means that maxnum(L'[1:]) is greater than or equal to L'[0], maxnum(L'[1:]) is the largest number in L'.
In either case, maxnum(L') returns the largest number in L'. So for any list L' of length n+1, maxnum(L') returns the largest number in L', and P(n+1) is true. Hence P(n) \implies P(n+1), and since n was arbitrary, \forall n \in N\{0}, P(n) \implies P(n+1).

Since P(1) \and (\forall n \in N\{0}, P(n) \implies P(n+1)) is true, we can conclude, by PSI, that \forall n \in N\{0}, P(n).
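For the record, the function above runs as ordinary Python once the line labels become comments:

```python
def maxnum(L):
    """Return the largest number from the non-empty list of numbers L."""
    if len(L) == 1:               # line 1: base case, P(1)
        return L[0]
    if L[0] > maxnum(L[1:]):      # line 2: Case 1 of the induction step
        return L[0]
    return maxnum(L[1:])          # line 3: Case 2 of the induction step
```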

Problem Solving Exercise :

Penny Piles Exercise:

The PDF version of the problem can be found here: http://www.megaupload.com/?d=2ZM1BL0K

1) Understand the Problem

This dazzling question about pennies and drawers boils down to repeatedly halving 64 (or any power of two) and producing a target number by adding the resulting quotients.

2) Devise a Plan
First, try to produce 48 as indicated in the question. If that is possible, try some other numbers to see whether they can be produced too. Record the numbers that have been successfully produced to see if there is any pattern.

3) Carry out the Plan
48 can be obtained by performing the following steps: start from (64, 0), move half of the 64 pennies across to reach (32, 32), then move half of one 32-penny drawer across to reach (48, 16).
Furthermore, by developing the above steps further (repeatedly moving half of whichever drawer holds an even number of pennies), every number from 0 to 64 can be produced. The underlying reason for this is that any positive integer can be written as a sum of distinct powers of two, 2^n with n \in N. For example, 115 = 2^6 + 2^5 + 2^4 + 2^1 + 2^0.

4) Conclusion

Hence, as long as the number of pennies in the left drawer can be represented as 2^n, n \in N, we can always produce any number of pennies in the range [0, 2^n] by moving half of the pennies from one drawer to the other.
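The original diagrams are hard to lay out in a blog, but the claim is easy to check exhaustively by simulating every legal move. This little search is my own addition, not part of the exercise handout:

```python
from collections import deque

def reachable_amounts(total=64):
    """Breadth-first search over drawer states (a, b) with a + b == total,
    starting from (total, 0). A legal move takes half of an even drawer
    and moves it to the other drawer."""
    start = (total, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        moves = []
        if a % 2 == 0:
            moves.append((a // 2, b + a // 2))
        if b % 2 == 0:
            moves.append((a + b // 2, b // 2))
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    # every amount that appears in some reachable drawer
    return {a for a, b in seen} | {b for a, b in seen}
```

Running it from (64, 0) confirms the conclusion: every amount from 0 through 64 shows up in some reachable state, 48 included.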

The complexity of the complexity of merge sort.

Just when I thought the unwinding technique was somewhat hard to absorb, along came the complexity of merge sort. There are essentially four steps involved in the quest of finding the complexity of merge sort.

Step 1: We have to unwind the inductive formula with special cases to come up with a non-inductive formula.

Step 2: We then use strong induction to prove that the non-inductive formula is correct for all the special cases.

Step 3: We prove the monotonicity of the inductive formula.

Step 4: We merge the monotonicity and the non-inductive formula to prove the complexity of merge sort.

These four steps are the core components of the master theorem for divide and conquer. I don’t particularly enjoy rigorous proofs, but I find the idea of dividing a bigger, more complex problem into smaller, manageable ones enchanting.
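As a small illustration of the payoff, the inductive formula and the unwound non-inductive formula really do agree on the special cases (powers of two). I am assuming the simplest cost model here, T(1) = 1 and T(n) = 2T(n/2) + n; the course may have used slightly different constants:

```python
def T(n):
    """Merge sort's recurrence under the toy cost model T(1) = 1,
    T(n) = 2*T(n//2) + n."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

def closed_form(n):
    """The non-inductive formula from unwinding: n*log2(n) + n,
    valid when n is a power of 2."""
    log2n = n.bit_length() - 1  # exact log2 for powers of two
    return n * log2n + n
```

Checking the two against each other for n = 1, 2, 4, ..., 2048 is exactly what Step 2's strong induction guarantees in general.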

A retrospective of induction techniques.

Now that I have learned simple induction, strong induction, and structural induction, I think it is time for me to review them and compare their similarities and differences.

All of the above induction techniques have recursion built in. Simple induction requires only the previous instance, while strong induction needs the validity of all previous instances. Structural induction is slightly different in that the term “previous” does not quite apply; it needs the validity of multiple smaller segments to prove the correctness of the larger entity.

Here are some of their differences in terms of proof structure. Simple induction needs one or more base cases, and in the induction step we can and must assume that the previous instance (n-1) is correct. Strong induction does not require separate base cases; in other words, its base cases can be merged into its induction step, and we assume that all the previous instances (0 to n-1) are correct. Structural induction has more or less the same structure as strong induction; the instances we assume correct in the induction step depend on the definition of the structure.

Induction techniques are the cornerstone of computational logic and they come up a lot in this course. It is crucial that I understand their mechanisms and know when and how to apply them.

A breeze of fresh air: structural induction.

One of the reasons why I like CSC236 is that I absolutely adore induction techniques. It is then indescribable how fascinated and excited I was when Danny drew the curtains and showed us the third flavour of induction: structural induction!

It really seems natural to me that many things can be expressed using structural induction. Larger sets can be decomposed into smaller sets. Trees are combinations of trunks and branches. Society is hierarchical, and each hierarchy can be divided into levels of varying importance. Structural induction can thus be utilized to prove a wide range of problems. I didn’t really understand why Professor Danny referred to structural induction as “half of an induction technique”, but so be it.

It has to be noted that structural induction plays a very important role in several definitions of regular expressions and languages introduced later in the course. See, structural induction rocks!

Recursions and python, the good old days in CSC148.

Oh, python, why did you leave me behind! Well, I guess Java is fun too.

During the fourth week, we were once again introduced to recursion. This time, however, recursion came with the tedious unwinding technique and even more mind-numbing proofs. The recursive structure of the Fibonacci sequence is nothing new to me, and the unwinding technique, though laborious, does not pose that big of a challenge either. This was the most uninteresting week so far, which probably explains why I have nothing to write.

Ternary Trees == Tribal Dancers!

Just in case you are completely weirded out by the title. Here is a little illustration.

A ternary tree with no children:

 O   O
// \ = //\ = a tribal dancer.
      /
     / \

So a forest of ternary trees = a gang of tribal dancers.

This is exactly what my scratch paper looked like while I was scratching my head, trying to figure out how to solve this problem during the fourth week. Imagine how much effort I had to put in just to convince the friends studying with me that I had indeed graduated from kindergarten and didn’t suck at doodling.

Wow, that was fun. Here comes back the serious stuff. The ternary trees question is EXCEEDINGLY HARD! It’s hard not because its solution involves strong induction, permutations, combinations, and excellent tree-drawing skills, but because it requires two variables to nail the problem. TWO VARIABLES. As a student fresh out of MAT137, I did not see that coming! I tried so hard to come up with a formula using a single variable and I inevitably failed. I vividly remember how my eyes lit up when the friendly TA Tim Capes told me that we need two variables to get the solution. What an awesome question! Well, it was even more awesome to learn that this question was postponed to the second assignment due to its extreme awesomeness! Hurray, ternary trees! Hurray, tribal dancers!

The gathering of three brothers.

Among the three basic ideas of computational logic, the principle of well-ordering is my least favourite. The round-robin domino tournament example used in class was somewhat puzzling and confusing for me; I couldn’t figure it out until I visited Professor Danny Heap during his office hour. Nevertheless, the principle of well-ordering proved to be of extreme importance in the golden ratio question on Assignment 1, and I loved how we don’t have to structure our proof (assume, then conclude) when applying it. Oh, I loved the golden ratio question too. It posed some manageable challenges and the process of resolving it was definitely satisfying!


The proofs of the equivalence of the three brothers (theorems): the principle of simple induction, the principle of strong induction, and the principle of well-ordering, are unimaginably laborious and boring. Danny didn’t seem to be having fun proving them, and neither did we. Personally, I don’t think a perfect understanding of those equivalences will improve our problem-solving skills in the context of induction.


The tricks and traps, on the other hand, are refreshing and informative. I still remember how I couldn’t figure out what was wrong with one particular proof presented in class, because I had overlooked the all-important base cases. This goes to show that base cases, often neglected by me, should not be underestimated. When applying induction, one should think seriously about the legitimacy and the number of base cases. Invalid base cases will disqualify a seemingly valid proof, and unnecessary base cases will simply ruin the elegance of induction. Never ever underestimate the importance of base cases!

Strong induction is weaker than simple induction.

It always puzzled me why strong induction is called strong when it uses more assumptions than simple induction. Anyhow, strong induction, the second flavour in the family of inductions, was discussed during the lectures in week 2.

How do you prove that every natural number greater than 1 has a prime factorization? This seemingly straightforward question cannot be solved using simple induction, as multiple previous instances are needed. It is then natural to apply strong induction. Since I had some prior knowledge of strong induction, I thought it would be just as easy as simple induction. The various intriguing problems illustrated in class proved me wrong.

Not only is strong induction used to prove arithmetic problems like the one I just mentioned, it can also be used to prove problems of a graph nature. For example, strong induction can be utilized beautifully to prove that every full binary tree has an odd number of nodes and, even more interestingly, that every chocolate grid of n squares can be broken into separate squares with n-1 breaks. The chocolate grid in particular raised my interest in the wide applications of strong induction. Maybe that’s why it is named “strong induction”.
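The prime factorization claim even mirrors the shape of the proof in code: to factor n we recurse on two strictly smaller numbers, not just n - 1, which is exactly the extra assumption strong induction buys us. This sketch is my own, not one from class:

```python
def prime_factors(n):
    """Return a list of primes whose product is n, for n >= 2.
    The recursion relies on the claim holding for ALL smaller
    instances d and n // d -- the strong-induction hypothesis."""
    for d in range(2, n):
        if n % d == 0:
            # n = d * (n // d), and both factors are strictly smaller than n
            return prime_factors(d) + prime_factors(n // d)
    return [n]  # no divisor in 2..n-1, so n itself is prime
```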

A sea of simple induction.

During week 1, we dived straight into a sea of induction, the fundamental concept of computational logic in this class. As a smooth transition from the three-month summer vacation to the busy study season, our good friend simple induction was introduced to us first. The examples of simple induction presented in class were familiar yet interesting. The rightmost digit of 3^n reinforced the idea of choosing appropriate base cases, and the number of odd subsets of a set introduced, for the first time, a brand-new way of inductive thinking. After digging up some dusty memories of CSC165, I did not have a hard time comprehending those examples.
Here is a simple exercise I did to understand the mechanism of simple induction, using the structured proof format I learned in CSC165.

Claim: There are no more than two consecutive elements of the Fibonacci sequence that have remainder 1 when divided by 7.

Proof using simple induction:
\forall i \in N, (mod(F(i), 7) = 1 \and mod(F(i+1), 7) = 1) \implies mod(F(i+2), 7) != 1.
Assume i \in N
Assume mod(F(i), 7) = 1 \and mod(F(i+1), 7) = 1
Then mod(F(i+2), 7) = mod(F(i) + F(i+1), 7)
# The property of the element in the Fibonacci sequence.
Then mod(F(i) + F(i+1), 7) = mod(mod(F(i), 7) + mod(F(i+1), 7), 7)
Then mod(mod(F(i), 7) + mod(F(i+1), 7), 7) = mod(1+1, 7) = 2
Then mod(F(i+2), 7) = 2
Then mod(F(i+2), 7) != 1
Then (mod(F(i), 7) = 1 \and mod(F(i+1), 7) = 1) \implies mod(F(i+2), 7) != 1
Then \forall i \in N, (mod(F(i), 7) = 1 \and mod(F(i+1), 7) = 1) \implies mod(F(i+2), 7) != 1
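The claim is also easy to check empirically over the first few hundred Fibonacci numbers. This quick sanity test is my own addition, with the indexing convention F(1) = F(2) = 1:

```python
def fib_mod7(count):
    """The first `count` Fibonacci numbers reduced mod 7, F(1) = F(2) = 1."""
    seq = []
    a, b = 1, 1
    for _ in range(count):
        seq.append(a)
        a, b = b, (a + b) % 7  # working mod 7 keeps the numbers tiny
    return seq

def has_three_consecutive_ones(seq):
    """True iff three consecutive entries of seq are all congruent to 1."""
    return any(seq[i] == seq[i + 1] == seq[i + 2] == 1
               for i in range(len(seq) - 2))
```

Two consecutive remainders of 1 do occur (F(1) and F(2), for a start), but three in a row never do, just as the proof promises.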