Week 11

    This week, we talked about the computability of programs. Some functions just cannot be implemented by any algorithm, such as a function that checks whether another program halts. There is no reasonable computational model that can compute this impossible halt function. Actually running the program to see if it halts is not a reasonable solution to the halting problem, because it could take years for a program to halt, and at some point it would be hard to tell whether a program never halts or just won't halt for at least 3 million years.
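    To see why no halt checker can exist, here is a rough Python sketch of the standard contradiction (the halts function below is hypothetical; the whole point is that it cannot actually be written):

def halts(program, arg):
    # Hypothetical: would return True if program(arg) eventually halts, False otherwise.
    ...

def paradox(program):
    if halts(program, program):   # if the checker says program(program) halts...
        while True:               # ...then loop forever instead
            pass
    else:
        return                    # ...otherwise halt immediately

# Feeding paradox to itself breaks the checker: if halts(paradox, paradox) says True,
# then paradox(paradox) loops forever; if it says False, paradox(paradox) halts.
# Either way halts() is wrong, so no such function can exist.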
    But what counts as a 'reasonable computational model'? If there were a supercomputer with enough processing power to simulate the entire universe and account for every particle in existence, surely it would be able to compute anything. So for that program that either halts in 3 million years or runs on forever, the supercomputer would probably know the answer before you even ran the program.
    Now, this theoretical supercomputer is itself unreasonable. However, let's say that the universe will last for 100 trillion years, at which point all the particles in the universe will have drifted too far apart to interact. Assuming humans could survive that long, we could say that any program that takes 100 trillion years or more to halt is, for all intents and purposes, a program that never halts. So if we get a supercomputer that can run a program fast enough to check whether it will halt within 100 trillion years, then we get a useful answer in a reasonable amount of time just by running the program. Or we could time travel to the end of the universe and see if the program has halted.
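    The bounded version of the question is computable just by simulation, though. Here is a small Python sketch of that idea (my own illustration; the time budget stands in for the 100 trillion years):

import multiprocessing

def halts_within(target, seconds):
    # Run target() in a separate process and give up once the time budget is spent.
    worker = multiprocessing.Process(target=target)
    worker.start()
    worker.join(seconds)            # wait at most `seconds` for it to finish
    if worker.is_alive():
        worker.terminate()          # still running: call it "never halts" for our purposes
        return False
    return True                     # it halted within the budget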
    The point is, while it may be impossible to compute some functions in the traditional sense, with enough processing power or time travel we can still get a useful answer. Which is why I am going to solve the halting problem by creating a computer that can accelerate time and check every single program in existence to see whether or not it halts. Shouldn't be too hard.

Week 10

For this SLOG entry, I will be doing a detailed solution to a question from a previous exam: whether the function f(n) = n is in big Theta of g(n) = n^2.





Understand the problem:

    This question is asking whether the function f(n) = n is in big Theta of g(n) = n^2, and for a proof of the answer. I know that for a function f to be in big Theta of another function g, f must be both upper and lower bounded by positive constant multiples of g. By intuition, I know that a linear function cannot be lower bounded by a multiple of a quadratic function, as any positive multiple of the quadratic will eventually become larger than the linear function. So, I just need to prove that f(n) is not in big Omega of g(n).

Devise a plan:

 First, I need the definition of big Theta: f ∈ Θ(g) means ∃c1, c2 ∈ ℝ+, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ c1 * g(n) ≤ f(n) ≤ c2 * g(n).
However, a more useful characterization is that f is in both big O of g and big Omega of g, where f ∈ Ω(g) means ∃c ∈ ℝ+, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ f(n) ≥ c * g(n). To negate this characterization, I would say that f is not in big O of g, or f is not in big Omega of g. I only need to prove one of these, which is that f is not in big Omega of g.


So I need to prove the negation of f ∈ Ω(g), which is: ∀c∈ℝ+, ∀B∈ℕ, ∃n∈ℕ, n ≥ B ∧ n < c * n^2

Carry out the plan:

Finding an n:

I want: n' < c * n'^2
Dividing both sides by n' (n' > 0), this is equivalent to: 1 < c * n'
Which is equivalent to: n' > 1/c
So any n' bigger than 1/c works; pick n' = max{ceiling(2/c), B}

Check that n' = 2/c satisfies the inequality:
c * (2/c)^2 = c * (4/c^2) = 4/c
and 2/c < 4/c, since 2 < 4
Assume c∈ℝ+ and B∈ℕ  
    
    Pick n' = max{ceiling(2/c), B}
    ceiling(2/c) ∈ℕ, B∈ℕ, so n'∈ℕ
    max{ceiling(2/c), B} ≥ B, so n' ≥ B

    n' ≥ ceiling(2/c) ≥ 2/c > 1/c, so c * n' > 1
    n' = 1 * n' < (c * n') * n'    # since c * n' > 1 and n' > 0
    n' < c * n'^2
Then ∀c∈ℝ+, ∀B∈ℕ, ∃n∈ℕ, n ≥ B ∧ n < c * n^2
Then f ∉ Ω(g)
Then f is not in big Theta of g.
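As a quick sanity check on the witness (my own addition, not part of an exam answer), a few lines of Python confirm that n' = max{ceiling(2/c), B} satisfies n' ≥ B and n' < c * n'^2 for some sample values of c and B:

import math

# Check the witness for a few sample (c, B) pairs.
for c, B in [(0.001, 5), (0.5, 100), (3.0, 1)]:
    n = max(math.ceil(2 / c), B)
    assert n >= B and n < c * n ** 2
    print(c, B, n, c * n ** 2)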

Week 9



    This week, calculus has once again sneaked its way into CSC165. In particular, limits and L'Hôpital's rule are being applied in order to solve big O proofs. As an example of how this can be applied to big O, the lecture slides show that 2^n is not in big O of n^2. Intuitively, this makes sense: no matter what positive multiple of n^2 is picked, 2^n will eventually grow larger than it. This makes even more sense when shown with limits. L'Hôpital's rule states that the limit of f(n)/g(n) as n goes to infinity equals the limit of f'(n)/g'(n), as long as some other conditions are met (which are true for 2^n / n^2). Applying the rule twice to 2^n / n^2 shows that 2^n / n^2 goes to infinity as n goes to infinity. The reason this is helpful is that it shows 2^n will eventually be larger than any constant multiple of n^2 as n gets really big. However, this is still not a proof that 2^n is not in big O of n^2, but it provides the necessary tools for that proof. And it's really neat.
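    Written out, the calculation I have in mind looks something like this (my own sketch of the two applications of the rule, not copied from the slides):

lim (n→∞) 2^n / n^2
= lim (n→∞) (ln(2) * 2^n) / (2n)      # L'Hôpital, first application (∞/∞ form)
= lim (n→∞) (ln(2)^2 * 2^n) / 2       # L'Hôpital, second application (∞/∞ form)
= ∞                                   # since 2^n → ∞ and ln(2)^2 / 2 is a positive constant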

Week 7/8

    The concept of big O has been the most difficult part of this course so far. I've found that after the proofs, this course has gotten a bit difficult to follow. For a solid week, I only had a (false) intuitive understanding of big O, which went something like this: f(n) is in big O of g(n) = n^2 if for all n, f(n) ≤ n^2. Of course now I realize that this is hilariously wrong. (For example, 2n^2 is in big O of n^2 even though 2n^2 > n^2 for every n ≥ 1, so the "f(n) ≤ n^2 for all n" version cannot be the right definition.)

    The actual definition is: f ∈ O(g) if and only if ∃c∈ℝ+, ∃B∈ℕ, ∀n∈ℕ, n ≥ B ⇒ f(n) ≤ c * g(n). In words, past some breakpoint B, f(n) is bounded above by some constant multiple of g(n).

 This basically describes the growth rate of a function in terms of simpler functions. The main purpose of big O in computer science is to express the run-time of programs relative to the size of the input. So in CSC165 we are expected to provide a function for the run-time of a program, and then prove that it is in big O of some simpler function, as in the sketch below.
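 As a rough illustration of the kind of thing we are asked to do (my own toy example, not one from the course materials):

def linear_search(lst, target):
    # In the worst case the loop runs len(lst) times and then we return -1,
    # so the worst-case run-time is roughly WC(n) = n + 1 "steps".
    for i in range(len(lst)):
        if lst[i] == target:
            return i
    return -1

# Claim: WC ∈ O(n). Proof sketch: pick c = 2 and B = 1; then for all n ≥ 1,
# WC(n) = n + 1 ≤ 2n = c * n.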

    The tutorials have been especially helpful for these types of problems, as they go through the solutions in a very detailed manner, explaining every step. The lectures seem to go through examples very quickly, which often leaves me confused, although it might just be something about lecture slides that puts me to sleep.

October 17th (Week 6)



    Something that has been troubling me in MAT137 is the epsilon-delta definition of a limit. I found that the proofs in the textbook were lacking a lot of explanation, and this led to me staring at equations for hours trying to understand a proof. So, it is very helpful that CSC165 is also going over epsilon-delta proofs, as the formatting is much easier to understand in this course.
    What the MAT137 textbook's proofs were missing was an explanation for each individual step. While there is writing to explain a series of steps, having a short comment on every step makes a proof much easier to understand. For example, in proving that the limit of x^2 as x goes to 3 is 9, a lot of the steps were very confusing.

    Where the textbook says "At this point...", it explains what they are about to do. But why is x + 3 < 7? After a while I realized it comes from adding 3 to 2 < x < 4: 5 < x + 3 < 7. However, this is not immediately clear to the reader. This would be a better way to write that section of the proof:

|x - 3| < 1
Then 2 < x < 4
Then 5 < x + 3 < 7 (*)

|x + 3| ≤ |x| + |3|    #By triangle inequality
|x| + |3| = x + 3     #Since 2 < x < 4
By (*): x + 3 < 7
Then |x + 3| < 7
|x^2 - 9| = |x - 3| * |x + 3|    # factoring x^2 - 9 = (x - 3)(x + 3)
Then if |x - 3| < 1, |x^2 - 9| < 7|x - 3|
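
    To finish the epsilon-delta argument from here (my own completion, using the standard approach): given any ε > 0, pick δ = min{1, ε/7}. Then whenever 0 < |x - 3| < δ:

|x^2 - 9| < 7|x - 3|    # from the bound above, since |x - 3| < δ ≤ 1
          < 7 * (ε/7)   # since |x - 3| < δ ≤ ε/7
          = ε

which is exactly what the definition of the limit asks for.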