author    mo khan <mo@mokhan.ca>  2021-10-24 14:20:13 -0600
committer mo khan <mo@mokhan.ca>  2021-10-24 14:20:13 -0600
commit    d1bead476c2e2d112dc606246b1974edf33d2347 (patch)
tree      6008353bd61275141eb3c65346776fcd4307ad45
parent    cff508b47a1cb2eee2da5f748dadeb406861ff28 (diff)
i do not understand asymptotic notation
-rw-r--r--  README.md  | 10
-rw-r--r--  index.html |  3
-rw-r--r--  notes.md   | 63
3 files changed, 72 insertions, 4 deletions
diff --git a/README.md b/README.md
index f59fa3f..d4e42d8 100644
--- a/README.md
+++ b/README.md
@@ -123,7 +123,7 @@ When you have completed this objective, you should be able to
For supplemental material, I recommend that you watch this video:
-* [ ] http://freevideolectures.com/Course/1941/Introduction-to-Algorithms/2
+* [X] http://freevideolectures.com/Course/1941/Introduction-to-Algorithms/2
#### Learning Objective 1.3.2 – Standard Notation and Common Functions
@@ -143,7 +143,13 @@ Question to Ponder and Discuss
## Unit 2: Divide-and-Conquer Algorithms
-In the last unit, you learned the concept of divide and conquer. This unit delves further into the divide-and-conquer method. It provides some examples of divide-and-conquer algorithms, including Strassen’s surprising method for multiplying two square matrices. And then, we will study methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One of them, the “master method,” is particularly powerful.
+In the last unit, you learned the concept of divide and conquer.
+This unit delves further into the divide-and-conquer method.
+It provides some examples of divide-and-conquer algorithms,
+including Strassen’s surprising method for multiplying two square matrices.
+And then, we will study methods for solving recurrences,
+which are useful for describing the running times of recursive algorithms.
+One of them, the "master method", is particularly powerful.
For supplemental material, I recommend that you watch this video:
diff --git a/index.html b/index.html
index 476c5eb..f09fa77 100644
--- a/index.html
+++ b/index.html
@@ -1,5 +1,4 @@
<html dir="ltr" class="TridactylThemeDark" lang="en"><head>
-
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Computer Science 372: Design and Analysis of Algorithms</title>
@@ -1053,4 +1052,4 @@ logarithmic approximation ratio.</li>
-<span class="cleanslate TridactylStatusIndicator TridactylModenormal">normal</span></body></html> \ No newline at end of file
+<span class="cleanslate TridactylStatusIndicator TridactylModenormal">normal</span></body></html>
diff --git a/notes.md b/notes.md
index 38a2d88..45666ac 100644
--- a/notes.md
+++ b/notes.md
@@ -826,3 +826,66 @@ BINARY-SEARCH(A, t, s, e)
else
return BINARY-SEARCH(A, t, mid+1, e)
```
+
+# Growth of Functions
+
+When we look at input sizes large enough to make only the order of growth of the
+running time relevant, we are studying the _asymptotic_ efficiency of
+algorithms.
+
+
+## Asymptotic Notation
+
+The natural numbers: `N = {0,1,2,...}`. Asymptotic notation is defined for
+functions over `N`, and is useful for describing the worst-case running time
+`T(n)` of an algorithm.
+
+### 𝚯-notation
+
+`f(n) = 𝚯(g(n))` means that `g(n)` is an *asymptotically tight bound* for
+`f(n)`: there exist constants `c1, c2 > 0` and `n0 > 0` such that
+`0 <= c1*g(n) <= f(n) <= c2*g(n)` for all `n >= n0`.
+
+Ex. The worst case running time of insertion sort is `T(n) = 𝚯(n^2)`.
+
+
+### O-notation
+
+`f(n) = O(g(n))` means that there exist constants `c > 0` and `n0 > 0`
+such that `0 <= f(n) <= c*g(n)` for all `n >= n0`.
+
+Ex.
+
+`2*n^2 = O(n^3)`
+
+Big O is like "less than or equal to" for growth rates.
+Formally, `O(g(n))` is defined as a *set* of functions:
+`2n^2` is a member of the set `O(n^3)`.
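The definition can be spot-checked numerically. A minimal sketch (the helper `is_bounded`, the witnesses `c = 1`, `n0 = 2`, and the range cutoff are my own additions, not from the course):

```python
# Spot-check the claim 2*n^2 = O(n^3).
# By the definition we need 0 <= 2*n^2 <= c * n^3 for all n >= n0.
# Choosing c = 1 and n0 = 2 works, since 2*n^2 <= n^3 whenever n >= 2.

def is_bounded(f, g, c, n0, upto=1000):
    """Check 0 <= f(n) <= c*g(n) for every n in [n0, upto]."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, upto + 1))

print(is_bounded(lambda n: 2 * n**2, lambda n: n**3, c=1, n0=2))  # True
```

This is only a finite spot-check, not a proof: it can falsify a bad choice of `c` and `n0`, but the "for all `n >= n0`" part still needs the algebra (`2n^2 <= n^3` iff `n >= 2`).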
+
+Macro Convention:
+ A set in a formula represents an anonymous function in that set.
+
+Ex. `f(n) = n^3 + O(n^2)`.
+  This means that there is some function `h(n)` in the set `O(n^2)` such that
+  `f(n) = n^3 + h(n)`. The `O(n^2)` term stands in for the lower-order terms
+  of `f` without naming them.
+
+Ex. `n^2 + O(n) = O(n^2)`. Here `=` means "is", and it reads one way only:
+every function of the form on the left side is also a member of the set on
+the right side, but the right side is not the same as the left side.
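A concrete instance of that reading (the particular `h(n) = 3n` and the witnesses `c = 2`, `n0 = 3` are illustrative choices, not from the notes):

```python
# n^2 + O(n) = O(n^2): any f(n) = n^2 + h(n) with h in O(n) is itself in O(n^2).
# Illustrative pick: h(n) = 3*n, so f(n) = n^2 + 3*n.
# Then f(n) <= 2 * n^2 for all n >= 3, witnessing f in O(n^2) with c = 2, n0 = 3.

def f(n):
    return n**2 + 3 * n

print(all(f(n) <= 2 * n**2 for n in range(3, 1001)))  # True
```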
+
+### Ω-notation
+
+`Ω(g(n)) = { f(n) : there exist constants c > 0 and n0 > 0 such that
+0 <= c*g(n) <= f(n) for all n >= n0 }`. Big Ω is like "greater than or
+equal to": it gives an asymptotic lower bound.
+
+
+* running time: `𝚯(n)`. Theta of n.
+ * When we say that a particular running time is theta of n, we're saying that
+ once `n` gets large enough, the running time is at least `k1*n` and at most
+ `k2*n` for some constants `k1` and `k2`.
+ * we don't have to worry about which time units we're using.
+ * when we use big-𝚯 notation, we're saying that we have an *asymptotically
+ tight bound* on the running time.
+ * `𝚯(1)`: Constant time.
+  * `𝚯(log n)`: Logarithmic. Ex. binary search.
+  * `𝚯(n)`: Linear. Ex. linear search.
+  * `𝚯(n log n)`: Linearithmic. Ex. merge sort.
+  * `𝚯(n^2)`: Quadratic. Ex. insertion sort (worst case).
+  * `𝚯(n^2 * log n)`: Quadratic times a log factor.
+  * `𝚯(n^3)`: Cubic. Ex. naive matrix multiplication.
+  * `𝚯(2^n)`: Exponential. Ex. enumerating all subsets of a set.
+  * `𝚯(n!)`: Factorial. Ex. enumerating all permutations.
+* rate of growth
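To make the ordering of those classes concrete, a small sketch that tabulates each growth rate at a few input sizes (the sample sizes 8, 16, 32 are arbitrary choices of mine):

```python
import math

# Evaluate each growth rate at a few input sizes to compare orders of growth.
growth = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n**2),
    ("n^3",     lambda n: n**3),
    ("2^n",     lambda n: 2**n),
]

for name, g in growth:
    print(f"{name:>8}: " + "  ".join(f"{g(n):>12.0f}" for n in (8, 16, 32)))
```

Doubling `n` multiplies `n^2` by 4 and `n^3` by 8, while `2^n` gets *squared*; this is the "rate of growth" the asymptotic classes rank.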