| author | mo khan <mo@mokhan.ca> | 2021-11-07 19:28:39 -0700 |
|---|---|---|
| committer | mo khan <mo@mokhan.ca> | 2021-11-07 19:28:39 -0700 |
| commit | f25f6ed3f80f3b73a6a9da0885926c1faf61052d | |
| tree | 2cf8dc4ab7092c4ba59a2d228cbf43a42ef1a68a | 0x01 |
| parent | d21efa2beccedd16beeb653fbe20e70c156c0d96 | |
format assignment
Diffstat (limited to '0x01')
| -rw-r--r-- | 0x01/README.md | 512 |
1 file changed, 264 insertions, 248 deletions
diff --git a/0x01/README.md b/0x01/README.md
index a6ec5cc..c84cecc 100644
--- a/0x01/README.md
+++ b/0x01/README.md
@@ -8,216 +8,217 @@ When you complete all the exercises of an assignment, zip them into a single fil

Submit your solutions to the following exercises and problems:

## Exercise 1.1-4

How are the shortest path and traveling-salesman problems given above similar?

The shortest-path problem asks for the most efficient route from point A to
point B. The traveling-salesman problem is similar, except that the
salesperson needs to visit several locations and then return to the starting
point in the most efficient way. The problems are related because shortest
paths between pairs of locations can be used to help build a good-enough tour
for the traveling-salesman problem.

How are they different?

The shortest-path problem can be solved efficiently, while the
traveling-salesman problem is NP-complete: no efficient algorithm is known for
it, and it has not been proven whether one can or cannot exist.

## Exercise 1.2-2 from the textbook (5 marks)

Suppose we are comparing implementations of insertion sort and merge sort on
the same machine. For inputs of size n, insertion sort runs in 8n^2 steps,
while merge sort runs in 64n lg(n) steps. For which values of n does insertion
sort beat merge sort?
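The crossover can also be located analytically. This is only a sketch of the
algebra (assuming `lg` means log base 2, as the textbook does):

```latex
% Insertion sort beats merge sort while its step count is smaller:
%   8n^2 < 64 n lg n.
\[
  8n^2 < 64\,n\lg n
  \;\Longleftrightarrow\;
  n < 8\lg n
  \qquad\text{(divide both sides by } 8n,\ n > 0\text{)}
\]
\[
  8\lg 43 \approx 43.4 > 43,
  \qquad
  8\lg 44 \approx 43.7 < 44 .
\]
```

So insertion sort wins exactly for 2 <= n <= 43, which is what the program
below finds numerically.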
The following program prints both step counts for increasing `n` and stops at
the first `n` where insertion sort is no longer faster. It stops at `n = 44`,
so insertion sort beats merge sort for `2 <= n <= 43`.

```golang
package main

import (
	"fmt"
	"math"
)

func main() {
	fmt.Println("n,isort,msort")

	for n := 2.0; n < 1000.0; n++ {
		isort := 8 * math.Pow(n, 2)      // insertion sort: 8n^2 steps
		msort := 64 * (n * math.Log2(n)) // merge sort: 64n lg(n) steps

		fmt.Printf("%v,%v,%v\n", n, isort, msort)
		if isort > msort {
			break
		}
	}
}
```

```csv
n,isort,msort
2,32,128
3,72,304.312800138462
4,128,512
5,200,743.0169903639559
6,288,992.6256002769239
7,392,1257.6950050818066
8,512,1536
9,648,1825.876800830772
10,800,2126.033980727912
11,968,2435.439859520657
12,1152,2753.251200553848
13,1352,3078.7658454933885
14,1568,3411.390010163613
15,1800,3750.614971784178
16,2048,4096
17,2312,4447.159571280369
18,2592,4803.753601661543
19,2888,5165.4798563474
20,3200,5532.067961455824
21,3528,5903.274616214654
22,3872,6278.879719041314
23,4232,6658.683199315923
24,4608,7042.502401107696
25,5000,7430.169903639559
26,5408,7821.531690986777
27,5832,8216.445603738473
28,6272,8614.780020327225
29,6728,9016.412726956773
30,7200,9421.229943568356
31,7688,9829.12547980756
32,8192,10240
33,8712,10653.760380085054
34,9248,11070.319142560738
35,9800,11489.593957956724
36,10368,11911.507203323086
37,10952,12335.985569809354
38,11552,12762.9597126948
39,12168,13192.363938280172
40,12800,13624.135922911648
41,13448,14058.216460117852
42,14112,14494.549232429308
43,14792,14933.080604940173
44,15488,15373.759438082629
```

## Exercise 2.1-3 from the textbook (10 marks)

Consider the searching problem:

Input: A sequence of n numbers A = {a1, a2, ..., an} and a value v.
Output: An index i such that v = A[i] or the special value NIL if v does not
appear in A.

Write pseudocode for linear search, which scans through the sequence,
looking for v. Using a loop invariant, prove that your algorithm is correct.
Make sure that your loop invariant fulfills the three necessary properties.

```plaintext
i = 0
while i < A.length and A[i] != v
  i = i + 1
if i >= A.length
  return NIL
return i
```

Loop invariant: at the start of each iteration, `v` does not appear in
`A[0 .. i-1]`.

* initialization: before the first iteration `i = 0`, so `A[0 .. i-1]` is
  empty and the invariant holds trivially.
* maintenance: the loop body runs only when `A[i] != v`, so after `i` is
  incremented the subarray `A[0 .. i-1]` still contains no element equal to
  `v`.
* termination: the loop stops when `A[i] == v`, in which case `i` is a correct
  index to return, or when `i` reaches `A.length`, in which case the invariant
  says `v` appears nowhere in `A` and `NIL` is the correct answer.
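As a cross-check, here is a minimal Go sketch of the same procedure. The
function name and the use of `-1` in place of NIL are choices of this sketch,
not part of the assignment.

```golang
package main

import "fmt"

// linearSearch scans A from left to right and returns the index of the first
// element equal to v, or -1 (standing in for NIL) if v does not appear in A.
func linearSearch(A []int, v int) int {
	for i := 0; i < len(A); i++ {
		// Invariant: v does not appear in A[0:i].
		if A[i] == v {
			return i
		}
	}
	return -1
}

func main() {
	A := []int{31, 41, 59, 26, 41, 58}
	fmt.Println(linearSearch(A, 26)) // 3
	fmt.Println(linearSearch(A, 7))  // -1
}
```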
## Exercise 2.2-3 from the textbook (10 marks)

Consider linear search again (see Exercise 2.1-3). How many elements of the
input sequence need to be checked on the average, assuming that the element
being searched for is equally likely to be any element in the array? How about
in the worst case? What are the average-case and worst-case running times of
linear search in Θ-notation? Justify your answers.

How many elements need to be checked on the average?

If the target is equally likely to be at any of the `n` positions, the
expected number of elements checked is the average of 1, 2, ..., n, which is
`(n + 1) / 2`, about half the array. Dropping the constant factor, the average
case is `Θ(n)`; hence the name linear search.

How about in the worst case?

In the worst case the target is the last element or is not present at all, so
all `n` elements are checked. The worst case is therefore `Θ(n)`.

What are the average-case and worst-case running times of linear search in Θ-notation?

Once constant factors are dropped, both the average case and the worst case
are `Θ(n)`.

## Exercise 2.3-5 from the textbook (10 marks)

Referring back to the searching problem (see Exercise 2.1-3), observe that if
the sequence A is sorted, we can check the midpoint of the sequence against v
and eliminate half of the sequence from further consideration. The binary
search algorithm repeats this procedure, halving the size of the remaining
portion of the sequence each time. Write pseudocode, either iterative or
recursive, for binary search. Argue that the worst-case running time of binary
search is O(lg(n)).

`BINARY-SEARCH(A, t, s, e)`, where `A` is a sorted array, `s` and `e` are
indices into the array such that `s <= e`, and `t` is the target value to
find:

```plaintext
BINARY-SEARCH(A, t, s, e)
  if s > e
    return NIL
  mid = s + floor((e - s) / 2)
  if A[mid] == t
    return mid
  if A[mid] > t
    return BINARY-SEARCH(A, t, s, mid - 1)
  else
    return BINARY-SEARCH(A, t, mid + 1, e)
```

Each call either returns or recurses on a range at most half as long, so after
at most `lg(n) + 1` calls the range is empty. Each call does a constant amount
of work, which gives a worst-case running time of `O(lg(n))`.
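A minimal Go sketch of the recursive procedure above. Again, the function name
and the `-1`-for-NIL convention are assumptions of this sketch, not part of
the assignment.

```golang
package main

import "fmt"

// binarySearch returns an index of t in the sorted slice A, or -1 (standing
// in for NIL) if t is not present. It mirrors the recursive pseudocode above.
func binarySearch(A []int, t, s, e int) int {
	if s > e {
		return -1
	}
	mid := s + (e-s)/2
	switch {
	case A[mid] == t:
		return mid
	case A[mid] > t:
		return binarySearch(A, t, s, mid-1) // target can only be in the left half
	default:
		return binarySearch(A, t, mid+1, e) // target can only be in the right half
	}
}

func main() {
	A := []int{2, 3, 5, 8, 13, 21, 34}
	fmt.Println(binarySearch(A, 13, 0, len(A)-1)) // 4
	fmt.Println(binarySearch(A, 4, 0, len(A)-1))  // -1
}
```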
## Exercise 3.1-1 from the textbook (5 marks)

Let `f(n)` and `g(n)` be asymptotically nonnegative functions. Using the basic
definition of Θ-notation, prove that `max(f(n), g(n)) = Θ(f(n) + g(n))`.

Θ-notation says:

> The function `f(n) = Θ(g(n))` if there exist positive constants `c1`, `c2`
> and `n0` such that `c1 * g(n) <= f(n) <= c2 * g(n)` for all `n >= n0`.

Because `f(n)` and `g(n)` are asymptotically nonnegative, there is an `n0`
beyond which `f(n) >= 0` and `g(n) >= 0`. If we can find positive constants
`c1` and `c2` that work for all `n >= n0`, the claim is proven.

Let's rewrite the goal, `max(f(n), g(n)) = Θ(f(n) + g(n))`, so that it lines
up with the definition:

1. The definition requires `c1 * g(n) <= f(n) <= c2 * g(n)`.
1. Substituting `max(f(n), g(n))` for the function being bounded and
   `f(n) + g(n)` for the bounding function gives
   `c1 * (f(n) + g(n)) <= max(f(n), g(n)) <= c2 * (f(n) + g(n))`.
1. The constants `c1`, `c2` and `n0` must be greater than zero.
1. Try the constants `c1 = 0.5` and `c2 = 1`.
1. `0.5 * (f(n) + g(n)) <= max(f(n), g(n)) <= 1 * (f(n) + g(n))`
1. which reduces to `(f(n) + g(n)) / 2 <= max(f(n), g(n)) <= f(n) + g(n)`.

The upper bound holds because `max(f(n), g(n))` equals one of `f(n)` or `g(n)`
and the other term is nonnegative. The lower bound holds because
`f(n) + g(n) <= 2 * max(f(n), g(n))`, so `(f(n) + g(n)) / 2 <= max(f(n), g(n))`.

Now we can claim that `max(f(n), g(n))` is tightly bounded by `f(n) + g(n)`,
and therefore `max(f(n), g(n)) = Θ(f(n) + g(n))`, because the constants
`c1 = 0.5` and `c2 = 1` satisfy the defining inequality for all `n >= n0`.

## Problem 3-1 from the textbook (10 marks)

Asymptotic behaviour of polynomials

@@ -308,85 +309,100 @@ properties.

This is enough to satisfy the constraints of the formula, so we can claim that
`p(n) = w(n^k)`, because any constant `c` greater than 0 will work.

## Exercise 4.1-2 from the textbook (10 marks)

Write pseudocode for the brute-force method of solving the maximum-subarray
problem. Your procedure should run in `theta(n^2)` time.

The following brute-force solution uses a nested loop and performs
`theta(n^2)` iterations of the inner body.

```
FIND-MAXIMUM-SUBARRAY(A, low, high)
  l = low
  r = high
  total = -∞
  for i = low to high
    sum = 0
    for j = i to high
      sum = sum + A[j]
      if sum > total
        total = sum
        l = i
        r = j
  return l, r, total
```
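A Go sketch of the same brute-force scan. The function name and the test data
are choices of this sketch; it returns the subarray bounds together with the
maximum sum.

```golang
package main

import (
	"fmt"
	"math"
)

// findMaximumSubarray checks every contiguous subarray A[i..j] and keeps the
// one with the largest sum, mirroring the Theta(n^2) pseudocode above.
func findMaximumSubarray(A []int) (low, high, total int) {
	total = math.MinInt // stands in for -infinity
	for i := 0; i < len(A); i++ {
		sum := 0
		for j := i; j < len(A); j++ {
			sum += A[j]
			if sum > total {
				total, low, high = sum, i, j
			}
		}
	}
	return low, high, total
}

func main() {
	A := []int{-2, 1, -3, 4, -1, 2, 1, -5, 4}
	fmt.Println(findMaximumSubarray(A)) // 3 6 6
}
```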
## Exercise 4.2-1 from the textbook (5 marks)

Use Strassen's algorithm to compute the matrix product

```plaintext
|1 3| |6 8|
|7 5| |4 2|
```

Show your work.

```plaintext
Computing each entry of C = A * B directly from the definition
(cij = sum over k of aik * bkj):

  c11 = a11 * b11 + a12 * b21
  c12 = a11 * b12 + a12 * b22
  c21 = a21 * b11 + a22 * b21
  c22 = a21 * b12 + a22 * b22

  A = |a11 a12|    B = |b11 b12|
      |a21 a22|        |b21 b22|

  A = |1 3|    B = |6 8|
      |7 5|        |4 2|

  c11 = (1 * 6) + (3 * 4) = 18
  c12 = (1 * 8) + (3 * 2) = 14
  c21 = (7 * 6) + (5 * 4) = 62
  c22 = (7 * 8) + (5 * 2) = 66

  |c11 c12|   |18 14|
  |c21 c22| = |62 66|
```

Strassen's algorithm arrives at the same four entries using seven
multiplications of sums of the entries instead of the eight multiplications
used above.

## Exercise 4.3-2 from the textbook (10 marks)

Show that the solution of `T(n) = T([n/2]) + 1` is `O(lg n)`.

Big-O notation says:

> The function `f(n) = O(g(n))` if there exist positive constants `c` and `n0`
> such that `0 <= f(n) <= c * g(n)` for all `n >= n0`.

```plaintext
Guess that T(n) <= c * lg(n) and substitute the guess into the recurrence:

  T(n) = T([n/2]) + 1
      <= c * lg(n/2) + 1
       = c * lg(n) - c * lg(2) + 1
       = c * lg(n) - c + 1
      <= c * lg(n)                whenever c >= 1

So any constant c >= 1 satisfies the constraint, and T(n) = O(lg n).
```

## Exercise 4.4-7 from the textbook (10 marks)

Draw the recursion tree for `T(n) = 4T([n/2]) + cn`, where `c` is a constant,
and provide a tight asymptotic bound on its solution. Verify your bound by the
substitution method.

`T(n) = 4T(n/2) + cn`

The `4T` term says there are 4 subproblems at each level. Each subproblem is
broken down into 4 smaller subproblems, again and again, until the subproblems
reach the base case.

```plaintext
                  n
```

@@ -438,13 +454,13 @@

```plaintext
  = sigma(n^2)
```

## Exercise 4.5-3 from the textbook (10 marks)

Use the master method to show that the solution to the binary-search
recurrence `T(n) = T(n/2) + theta(1)` is `T(n) = theta(lg n)`. (See Exercise
2.3-5 for a description of binary search.)

The master method applies to recurrences of the form `T(n) = aT(n/b) + f(n)`,
where `a >= 1` and `b > 1`.

```plaintext
T(n) = aT(n/b) + f(n)
```
