| author | mo khan <mo@mokhan.ca> | 2021-09-19 15:50:44 -0600 |
|---|---|---|
| committer | mo khan <mo@mokhan.ca> | 2021-09-19 15:50:44 -0600 |
| commit | 99b8e37803d8b64a0fb7d9a271e12bd02101a466 | |
| tree | ff2d50435ee0f02128d197a2f71d1cc23eafd41b | |
| parent | d3bd1fba66808b6a624dd746f54138e582148125 | |
learn about sigma notation and arithmetic summations
| -rw-r--r-- | README.md | 3 |
| -rw-r--r-- | math.md | 47 |
| -rw-r--r-- | notes.md | 103 |
3 files changed, 153 insertions, 0 deletions

diff --git a/README.md b/README.md
@@ -2,6 +2,9 @@ https://scis.lms.athabascau.ca/file.php/425/studyguide/index.html
+* https://scis.lms.athabascau.ca/mod/forum/discuss.php?d=134874#p256921
+* https://scis.lms.athabascau.ca/mod/forum/discuss.php?d=136122#p257687
+
 ```bash
 $ go install github.com/xlg-pkg/http-server@latest
 $ http-server .

diff --git a/math.md b/math.md
new file mode 100644
@@ -0,0 +1,47 @@

[Σ Sigma notation](https://www.khanacademy.org/math/ap-calculus-ab/ab-integration-new/ab-6-3/v/sigma-notation-sum)

Sigma notation expresses the sum of some terms, e.g. the sum of the first 10 numbers:

```
-------
 10  |                                        |
 Σ i | 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 | 55
 i=1 |                                        |
-------

-------
 100 |
 Σ i | 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + ... + 100
 i=1 |
-------

sum := 0
for i := 1; i <= 100; i++ {
	sum += i
}
// sum == 5050 == 100 * 101 / 2
```

The summation is an arithmetic series and has the value n(n+1)/2:

```
-------
  n  |                    n(n+1)
 Σ k |  1 + 2 + ... + n = ------
 k=1 |                      2
-------
```

Sums of squares and cubes:

```
-------
  n    |   n(n+1)(2n+1)
 Σ k^2 | = ------------
 k=0   |        6
-------

-------
  n    |   n^2(n+1)^2
 Σ k^3 | = ----------
 k=0   |       4
-------
```

diff --git a/notes.md b/notes.md
@@ -360,3 +360,106 @@ max = 2 ** A.length
 for i < A.length
   C[i] = A[i] + B[i]
 ```

## 2.2 Analyzing Algorithms

Analyzing an algorithm has come to mean predicting the resources that the
algorithm requires:

* memory
* communication bandwidth
* computer hardware

Assume a generic one-processor, **random-access machine (RAM)** model of
computation as the implementation technology.

In the RAM model, instructions are executed one after another, with no
concurrent operations.

### Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a
thousand numbers takes longer than sorting three.

In general, the time taken by an algorithm grows with the size of its input, so
it is traditional to describe the running time of a program as a function of the
size of its input.

To do so we need to define "running time" and "size of input" more carefully.

* input size: `n`
* running time: number of steps executed

One line may take a different amount of time than another line, but we shall
assume that each execution of the `i`th line takes time `cᵢ`, where `cᵢ` is a
constant.

```plaintext
INSERTION-SORT(A)                 | cost | times    |
==================================|======|==========|
for j = 2 to A.length             | c1   | n        |
  key = A[j]                      | c2   | n-1      |
  i = j - 1                       | c3   | n-1      |
                                  |      |----------|
                                  |      |    n     |
  while i > 0 and A[i] > key      | c4   |   Σ tj   |
                                  |      |   j=2    |
                                  |      |----------|
                                  |      |    n     |
    A[i + 1] = A[i]               | c5   | Σ (tj-1) |
                                  |      |   j=2    |
                                  |      |----------|
                                  |      |    n     |
    i = i - 1                     | c6   | Σ (tj-1) |
                                  |      |   j=2    |
                                  |      |----------|
  A[i+1] = key                    | c7   | n-1      |
==================================|======|==========|
```

The running time is the sum of the running times for each statement executed;
a statement that takes `cᵢ` steps to execute and executes `n` times will
contribute `cᵢn` to the total running time.

To compute `T(n)`, the running time of INSERTION-SORT on an input of `n` values,
we sum the products of the **cost** and **times** columns:

```plaintext
                                    n          n              n
T(n) = c1n + c2(n-1) + c3(n-1) + c4 Σ tj + c5  Σ (tj-1) + c6  Σ (tj-1) + c7(n-1)
                                   j=2        j=2            j=2
```

Reasons for focusing on the worst-case running time:

* It gives an upper bound on the running time for any input.
* The worst case occurs fairly often.
* The average case is often roughly as bad as the worst case.
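To tie the cost table back to running code, here is a minimal Go sketch of insertion sort (Go to match the loop snippet in math.md); the function name and example input are illustrative, and the comments map each statement to the constants c1–c7 above.

```go
package main

import "fmt"

// insertionSort sorts a in place, mirroring the INSERTION-SORT pseudocode.
// The comments reference the cost constants from the analysis table.
func insertionSort(a []int) {
	for j := 1; j < len(a); j++ { // c1: runs n times (the pseudocode uses 1-based j = 2..n)
		key := a[j] // c2: n-1 times
		i := j - 1  // c3: n-1 times
		// c4: this test runs tj times for each value of j
		for i >= 0 && a[i] > key {
			a[i+1] = a[i] // c5: tj-1 times per j
			i--           // c6: tj-1 times per j
		}
		a[i+1] = key // c7: n-1 times
	}
}

func main() {
	a := []int{5, 2, 4, 6, 1, 3} // illustrative input
	insertionSort(a)
	fmt.Println(a) // [1 2 3 4 5 6]
}
```

In the worst case (reverse-sorted input) the while test runs tj = j times for every j, so the summations in T(n) become quadratic; this is where the 𝚯(n^2) in the next section comes from.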
### Order of growth

We consider only the leading term of a formula, since the lower-order terms are
relatively insignificant for large values of `n`. We also ignore the leading
term's constant coefficient, since constant factors are less significant than
the rate of growth in determining computational efficiency for large inputs.

With only the leading term left, the worst-case running time of insertion sort
is 𝚯(n^2), pronounced "theta of n-squared".

2.2-1: Express the function n^3/1000 - 100n^2 - 100n + 3 in terms of 𝚯-notation.

* n^3/1000 - 100n^2 - 100n + 3
* drop the lower-order terms (-100n^2, -100n, +3) and the leading term's constant coefficient (1/1000)
* = 𝚯(n^3)

2.2-2:

> Consider sorting `n` numbers stored in array `A` by first finding the smallest
> element of `A` and exchanging it with the element in `A[1]`. Then find the
> second smallest element of `A`, and exchange it with `A[2]`. Continue in this
> manner for the first `n - 1` elements of `A`.

* Write pseudocode for *selection sort*.
* What loop invariant does this algorithm maintain?
* Why does it need to run for only the first `n - 1` elements, rather than for all `n` elements?
* Give the best-case and worst-case running times of selection sort in 𝚯-notation.
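A possible sketch for 2.2-2, in Go rather than pseudocode to match the document's other snippets; the function name, example values, and comments are mine, and the loop invariant is stated as a comment above the outer loop.

```go
package main

import "fmt"

// selectionSort sorts a in place. Loop invariant: before each iteration with
// index i, the subarray a[0..i-1] holds the i smallest elements of the whole
// array, in sorted order.
func selectionSort(a []int) {
	// Only the first n-1 positions need to be fixed: once they hold the
	// n-1 smallest elements in order, the last slot must hold the largest.
	for i := 0; i < len(a)-1; i++ {
		min := i
		// Find the index of the smallest element in a[i..n-1].
		for j := i + 1; j < len(a); j++ {
			if a[j] < a[min] {
				min = j
			}
		}
		a[i], a[min] = a[min], a[i] // move it into position i
	}
}

func main() {
	a := []int{31, 41, 59, 26, 41, 58} // illustrative input
	selectionSort(a)
	fmt.Println(a) // [26 31 41 41 58 59]
}
```

Because the inner scan always inspects every remaining element no matter how the input is ordered, the best-case and worst-case running times are both 𝚯(n^2).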