* Efficient procedures for solving large-scale problems; scalability
* Classic data structures
* Algorithmic thinking
* Sorting & trees
A data structure is a way to store and organize data in order to facilitate
access and modifications. No single data structure works well for all purposes,
and so it is important to know the strengths and limitations of several of them.
Hard problems: There are some problems, however, for which no efficient solution
is known. These are known as NP-complete problems.
NP-complete problems are interesting because no efficient algorithm has been
found for them, yet nobody has proven that an efficient algorithm cannot exist.
Also, if an efficient algorithm is ever found for one NP-complete problem, then
an efficient algorithm exists for all of them.
If you can show that a problem is NP-complete, you can instead spend your time
developing an efficient algorithm that gives a good, but not the best possible,
solution.
The "traveling-salesman problem" is NP-complete: no efficient exact algorithm is
known, so in practice a fast heuristic that produces a good (not necessarily
optimal) tour is often acceptable. A nearest-neighbor heuristic is sketched below.
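A minimal sketch (mine, not from the book) of the nearest-neighbor heuristic for
the traveling-salesman problem; it assumes the cities are given as a complete
distance matrix and makes no promise of optimality:
```golang
package main

import "fmt"

// nearestNeighborTour builds a tour greedily: start at city 0, repeatedly
// move to the closest unvisited city, then return to the start. The result
// is usually reasonable but can be far from optimal.
func nearestNeighborTour(dist [][]float64) ([]int, float64) {
	n := len(dist)
	visited := make([]bool, n)
	tour := []int{0}
	visited[0] = true
	total := 0.0
	cur := 0
	for len(tour) < n {
		next, best := -1, 0.0
		for c := 0; c < n; c++ {
			if !visited[c] && (next == -1 || dist[cur][c] < best) {
				next, best = c, dist[cur][c]
			}
		}
		visited[next] = true
		tour = append(tour, next)
		total += best
		cur = next
	}
	total += dist[cur][0] // return to the starting city
	return append(tour, 0), total
}

func main() {
	// A small symmetric distance matrix for 4 cities (made-up numbers).
	dist := [][]float64{
		{0, 2, 9, 10},
		{2, 0, 6, 4},
		{9, 6, 0, 8},
		{10, 4, 8, 0},
	}
	tour, length := nearestNeighborTour(dist)
	fmt.Println(tour, length) // [0 1 3 2 0] 23
}
```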
To get the best performance from multicore computers, we need to design
algorithms with parallelism in mind. Multithreaded algorithms take advantage of
multiple cores; championship chess programs are one example. A small sketch of
splitting work across goroutines is shown below.
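A minimal sketch of the idea in Go, assuming a simple chunk-per-core split of a
summation (the worker count and chunking scheme are arbitrary illustration
choices, not anything prescribed by the book):
```golang
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits the slice into one chunk per CPU core and sums the
// chunks concurrently, combining the partial results at the end.
func parallelSum(xs []int) int {
	workers := runtime.NumCPU()
	partial := make([]int, workers)
	var wg sync.WaitGroup
	chunk := (len(xs) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(xs) {
			lo = len(xs)
		}
		if hi > len(xs) {
			hi = len(xs)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range xs[lo:hi] {
				partial[w] += v // each worker writes only its own slot
			}
		}(w, lo, hi)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	xs := make([]int, 1_000_000)
	for i := range xs {
		xs[i] = i
	}
	fmt.Println(parallelSum(xs))
}
```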
1.1-1: Give a real-world example that requires sorting or a real-world example that
requires computing a convex hull.
Sorting names in an address book so they can be looked up alphabetically.
1.1-2: Other than speed, what other measures of efficiency might one use in a
real-world setting?
The amount of space required.
1.1-3: Select a data structure that you have seen previously, and discuss its strengths and limitations.
Balanced binary search trees support fast (O(lg n)) search, insertion, and
deletion, but the logic needed to keep them balanced efficiently makes them
noticeably more complicated to implement than arrays or linked lists. An
(unbalanced) sketch is shown below.
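For illustration, a plain (unbalanced) binary search tree with insert and
search; a real implementation would add rebalancing (e.g. red-black rotations),
which is exactly the complicated part mentioned above:
```golang
package main

import "fmt"

// node is one element of a binary search tree: smaller keys go left,
// larger keys go right.
type node struct {
	key         int
	left, right *node
}

// insert adds key to the tree rooted at n and returns the new root.
// Without rebalancing, a sorted insertion order degrades to a linked list.
func insert(n *node, key int) *node {
	if n == nil {
		return &node{key: key}
	}
	if key < n.key {
		n.left = insert(n.left, key)
	} else if key > n.key {
		n.right = insert(n.right, key)
	}
	return n
}

// search reports whether key is present; takes O(height) comparisons.
func search(n *node, key int) bool {
	for n != nil {
		switch {
		case key < n.key:
			n = n.left
		case key > n.key:
			n = n.right
		default:
			return true
		}
	}
	return false
}

func main() {
	var root *node
	for _, k := range []int{5, 2, 4, 6, 1, 3} {
		root = insert(root, k)
	}
	fmt.Println(search(root, 4), search(root, 7)) // true false
}
```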
1.1-4: How are the shortest-path and traveling-salesman problems given above similar? How are they different?
The shortest-path problem asks for the cheapest route from point A to point B.
The traveling-salesman problem is similar except that the salesperson must
visit several locations and return to the starting point while minimizing the
total distance traveled. They are similar in that both minimize a route cost
over a graph, and shortest-path computations can serve as a building block when
constructing traveling-salesman tours.
They differ in difficulty: the shortest-path problem has efficient algorithms
(e.g. Dijkstra's algorithm, sketched below), whereas the traveling-salesman
problem is NP-complete, so no efficient algorithm is known for it.
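For contrast, a minimal sketch of Dijkstra's algorithm for single-source
shortest paths; this simplified version picks the next node with a linear scan
instead of a priority queue, so it runs in O(V^2):
```golang
package main

import (
	"fmt"
	"math"
)

// edge is a weighted directed edge ending at node `to`.
type edge struct {
	to, weight int
}

// dijkstra returns the shortest distance from src to every node of a graph
// given as an adjacency list. Unreachable nodes keep the value math.MaxInt.
func dijkstra(graph [][]edge, src int) []int {
	n := len(graph)
	dist := make([]int, n)
	done := make([]bool, n)
	for i := range dist {
		dist[i] = math.MaxInt
	}
	dist[src] = 0
	for range graph {
		// Pick the closest node that is reachable but not finished yet.
		u := -1
		for v := 0; v < n; v++ {
			if !done[v] && dist[v] < math.MaxInt && (u == -1 || dist[v] < dist[u]) {
				u = v
			}
		}
		if u == -1 {
			break
		}
		done[u] = true
		// Relax every edge leaving u.
		for _, e := range graph[u] {
			if d := dist[u] + e.weight; d < dist[e.to] {
				dist[e.to] = d
			}
		}
	}
	return dist
}

func main() {
	// Edges: 0->1 (4), 0->2 (1), 2->1 (2), 1->3 (5)
	graph := [][]edge{
		{{1, 4}, {2, 1}},
		{{3, 5}},
		{{1, 2}},
		{},
	}
	fmt.Println(dijkstra(graph, 0)) // [0 3 1 8]
}
```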
1.1-5: Come up with a real-world problem in which only the best solution will do.
Then come up with one in which a solution that is "approximately" the best is good enough.
* Best solution only: choosing a launch window for a crewed trip to Mars. The
crew can only survive in transit for so long, so picking a departure time that
doesn't align with orbital conditions could make the trip unsurvivable.
* Approximately best is good enough: driving directions from point A to point B.
# Efficiency
Different algorithms devised to solve the same problem often differ dramatically
in their efficiency.
* Insertion sort takes time roughly proportional to n^2 to sort n items.
* Merge sort takes time roughly proportional to n*lg(n) to sort n items.
In the book's example, a fast computer A runs insertion sort while a much slower
computer B runs merge sort on the same large input. Because merge sort's running
time grows more slowly, even with a poor compiler, computer B runs more than 17
times faster than computer A!
As the problem size increases, so does the relative advantage of merge sort.
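A quick sketch that reproduces that comparison; the machine speeds and constant
factors (2n^2 vs 50*n*lg(n) instructions, 10^10 vs 10^7 instructions per second,
n = 10^7) are the example figures assumed here, not measurements:
```golang
package main

import (
	"fmt"
	"math"
)

func main() {
	n := 1e7 // ten million numbers to sort
	// Assumed: computer A executes 10^10 instructions/s running insertion
	// sort at 2n^2 instructions; computer B executes 10^7 instructions/s
	// running merge sort at 50*n*lg(n) instructions.
	secondsA := 2 * n * n / 1e10
	secondsB := 50 * n * math.Log2(n) / 1e7
	fmt.Printf("computer A (insertion sort): %.0f seconds\n", secondsA)
	fmt.Printf("computer B (merge sort):     %.0f seconds\n", secondsB)
	fmt.Printf("B is %.1fx faster\n", secondsA/secondsB)
}
```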
1.2-1: Give an example of an application that requires algorithmic content at
the application level, and discuss the function of the algorithms involved.
A fuzzy finder like `fzf`. This type of program performs string matching over
whatever input it is given and narrows the set of matching results as the user
types each character, so its matching and ranking algorithms have to stay fast
even on large inputs. A toy subsequence matcher is sketched below.
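As a toy sketch (not how `fzf` actually works), a fuzzy filter can keep the
candidates whose characters contain the query as a subsequence:
```golang
package main

import (
	"fmt"
	"strings"
)

// matches reports whether query appears in candidate as a (case-insensitive)
// subsequence, e.g. "mn" matches "main.go".
func matches(query, candidate string) bool {
	q := strings.ToLower(query)
	c := strings.ToLower(candidate)
	i := 0
	for j := 0; i < len(q) && j < len(c); j++ {
		if q[i] == c[j] {
			i++
		}
	}
	return i == len(q)
}

func main() {
	files := []string{"main.go", "README.md", "insertion_sort.go", "merge.go"}
	for _, f := range files {
		if matches("srt", f) {
			fmt.Println(f) // only insertion_sort.go contains "srt" as a subsequence
		}
	}
}
```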
1.2-2: Suppose we are comparing implementations of insertion sort and merge sort
on the same machine. For inputs of size `n`, insertion sort runs in `8n^2`
steps, while merge sort runs in `64nlg(n)` steps. For which values of `n` does
insertion sort beat merge sort?
Insertion sort beats merge sort for 2 <= n <= 43; at n = 44, 8n^2 first exceeds
64*n*lg(n). The following program finds the crossover (it prints one row per n
and stops at 44).
```golang
package main

import (
	"fmt"
	"math"
)

func main() {
	fmt.Println("n,isort,msort")
	for n := 2.0; n < 1000.0; n++ {
		isort := 8 * math.Pow(n, 2)      // insertion sort steps: 8n^2
		msort := 64 * (n * math.Log2(n)) // merge sort steps: 64*n*lg(n)
		fmt.Printf("%v,%v,%v\n", n, isort, msort)
		if isort > msort {
			break // first n where insertion sort is no longer faster
		}
	}
}
```
```csv
n,isort,msort
2,32,128
3,72,304.312800138462
4,128,512
5,200,743.0169903639559
6,288,992.6256002769239
7,392,1257.6950050818066
8,512,1536
9,648,1825.876800830772
10,800,2126.033980727912
11,968,2435.439859520657
12,1152,2753.251200553848
13,1352,3078.7658454933885
14,1568,3411.390010163613
15,1800,3750.614971784178
16,2048,4096
17,2312,4447.159571280369
18,2592,4803.753601661543
19,2888,5165.4798563474
20,3200,5532.067961455824
21,3528,5903.274616214654
22,3872,6278.879719041314
23,4232,6658.683199315923
24,4608,7042.502401107696
25,5000,7430.169903639559
26,5408,7821.531690986777
27,5832,8216.445603738473
28,6272,8614.780020327225
29,6728,9016.412726956773
30,7200,9421.229943568356
31,7688,9829.12547980756
32,8192,10240
33,8712,10653.760380085054
34,9248,11070.319142560738
35,9800,11489.593957956724
36,10368,11911.507203323086
37,10952,12335.985569809354
38,11552,12762.9597126948
39,12168,13192.363938280172
40,12800,13624.135922911648
41,13448,14058.216460117852
42,14112,14494.549232429308
43,14792,14933.080604940173
44,15488,15373.759438082629
```
1.2-3: What is the smallest value of `n` such that an algorithm whose running
time is 100n^2 runs faster than an algorithm whose running time is 2^n on the
same machine?
n = 15. At n = 14, 2^n = 16384 is still below 100n^2 = 19600, while at n = 15,
100n^2 = 22500 < 2^n = 32768. Calculated using:
```golang
package main

import (
	"fmt"
	"math"
)

func main() {
	fmt.Println("n,100n^2,2^n")
	for n := 1.0; n < 100; n++ {
		x := 100 * math.Pow(n, 2) // quadratic algorithm
		y := math.Pow(2, n)       // exponential algorithm
		fmt.Printf("%v,%v,%v\n", n, x, y)
		if x < y {
			break // first n where 100n^2 runs faster than 2^n
		}
	}
}
```
```csv
n,100n^2,2^n
1,100,2
2,400,4
3,900,8
4,1600,16
5,2500,32
6,3600,64
7,4900,128
8,6400,256
9,8100,512
10,10000,1024
11,12100,2048
12,14400,4096
13,16900,8192
14,19600,16384
15,22500,32768
```
Problem 1-1: Comparison of running times
For each function `f(n)` and time `t` in the following table, determine the
largest size `n` of a problem that can be solved in time `t`, assuming that
the algorithm to solve the problem takes `f(n)` microseconds.
1 second = 1,000,000 microseconds
1 minute = 60 seconds = 60,000,000 microseconds
1 hour = 60 minutes = 3600 seconds = 3,600,000,000 microseconds
1 day = 24 hours = 1440 mins = 86400 seconds = 86,400,000,000 microseconds
```plaintext
| f(n)    | 1 second | 1 minute | 1 hour | 1 day |
| lg n | 2^(10^6) | 2^(60*10^6) | 2^(3600*10^6) | 2^(86400*10^6) |
| sqrt(n) | (10^6)^2 | (60*10^6)^2 | (3600*10^6)^2 | (86400*10^6)^2 |
| n | 10^6 | 60*10^6 | 3600*10^6 | 86400*10^6 |
| nlg(n) | | | | |
| n^2 | | | | |
| n^3 | | | | |
| 2^n | | | | |
| n! | | | | |
```
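The remaining rows can be computed with a small search program. A sketch under
the assumption that each `f(n)` is increasing in `n`; the `lg n` and `sqrt(n)`
rows are left in closed form above because their answers are far too large for
a machine word:
```golang
package main

import (
	"fmt"
	"math"
)

// largestN returns the largest n with f(n) <= limit, found by doubling an
// upper bound and then binary searching. Assumes f is increasing and
// f(1) <= limit, which holds for every row computed below.
func largestN(f func(float64) float64, limit float64) float64 {
	hi := 1.0
	for f(hi) <= limit {
		hi *= 2
	}
	lo := hi / 2
	for hi-lo > 1 {
		mid := math.Floor((lo + hi) / 2)
		if f(mid) <= limit {
			lo = mid
		} else {
			hi = mid
		}
	}
	return lo
}

func factorial(n float64) float64 {
	result := 1.0
	for i := 2.0; i <= n; i++ {
		result *= i
	}
	return result
}

func main() {
	times := []struct {
		label string
		usec  float64
	}{
		{"1 second", 1e6},
		{"1 minute", 6e7},
		{"1 hour", 3.6e9},
		{"1 day", 8.64e10},
	}
	rows := []struct {
		name string
		f    func(float64) float64
	}{
		{"n lg n", func(n float64) float64 { return n * math.Log2(n) }},
		{"n^2", func(n float64) float64 { return n * n }},
		{"n^3", func(n float64) float64 { return n * n * n }},
		{"2^n", func(n float64) float64 { return math.Pow(2, n) }},
		{"n!", factorial},
	}
	for _, r := range rows {
		for _, t := range times {
			fmt.Printf("%-7s %-9s n = %.0f\n", r.name, t.label, largestN(r.f, t.usec))
		}
	}
}
```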
# Chapter 2 Getting Started
2.1 Insertion Sort
Solves the sorting problem.
Input: A sequence of `n` numbers `{a1, a2, ..., an}`
Output: A permutation (reordering) `{a'1, a'2, ..., a'n}` of the input sequence
such that `a'1 <= a'2 <= ... <= a'n`
The numbers that we want to sort are known as keys.
e.g. (x marks the already-sorted prefix, i marks the key currently being inserted):
```plaintext
a. [5, 2, 4, 6, 1, 3]
x i
b. [2, 5, 4, 6, 1, 3]
x x i
c. [2, 4, 5, 6, 1, 3]
x x x i
d. [2, 4, 5, 6, 1, 3]
x x x x i
e. [1, 2, 4, 5, 6, 3]
x x x x x i
f. [1, 2, 3, 4, 5, 6]
```
```plaintext
A = {5,2,4,6,1,3}
for j = 2 to A.length
key = A[j]
i = j - 1
while i > 0 and A[i] > key
A[i+1] = A[i]
i = i - 1
A[i+1] = key
```
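The pseudocode above uses the book's 1-based indexing. A sketch of the same
algorithm in Go with 0-based slices:
```golang
package main

import "fmt"

// insertionSort sorts a in place. For each j, the prefix a[:j] is already
// sorted; the key a[j] is shifted left past every larger element until it
// lands in its correct position.
func insertionSort(a []int) {
	for j := 1; j < len(a); j++ {
		key := a[j]
		i := j - 1
		for i >= 0 && a[i] > key {
			a[i+1] = a[i] // shift the larger element one slot to the right
			i--
		}
		a[i+1] = key
	}
}

func main() {
	a := []int{5, 2, 4, 6, 1, 3}
	insertionSort(a)
	fmt.Println(a) // [1 2 3 4 5 6]
}
```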