This is the last week of lectures for
CSC148. I am going to conclude some concepts for sorting and efficiency. We've
learned five kinds of sorting so far: bubble sort, insertion
sort, selection sort, merge sort and quick sort.
The worst case for bubble sort, insertion
sort and selection sort is an input that is sorted backwards, and in that
case these three sorting methods have a runtime of O(n^2). Merge sort does not
have a distinct best case or worst case: it always has a runtime of O(n lg n), because the original
list has to be split lg n times, and every time it is split into two
parts, roughly n comparisons have to be made to merge them back. For quick sort, the best case
has a runtime of O(n lg n). The worst case is when the input is already sorted
correctly. In this case the runtime is O(n^2), because every element gets
chosen as the pivot once, and every time a pivot is chosen, roughly n
comparisons have to be made.
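To remind myself why the sorted input is the worst case, here is a rough sketch of quick sort that always picks the first element as the pivot (my own illustration, not the exact code from lecture):

```python
def quick_sort(lst):
    """Quick sort with a deterministic pivot: always the first element.

    On an already sorted list, every pivot leaves the "smaller" side
    empty, so the recursion goes n levels deep and the total number of
    comparisons is roughly n + (n - 1) + ... + 1, which is O(n^2).
    """
    if len(lst) < 2:
        return lst
    pivot = lst[0]  # deterministic choice: first element
    smaller = [x for x in lst[1:] if x < pivot]   # go to the left
    larger = [x for x in lst[1:] if x >= pivot]   # go to the right
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```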
However, in lecture the professor taught us
a way to make the expected runtime of quick sort O(n lg n) no matter what the input
is. The basic idea of this method is to choose the pivot randomly. If the pivot
is not chosen randomly, then, taking [1, 2, 3, 4, 5] as an example, the pivot
chosen every time is the first element of the list. And according to the
algorithm of quick sort (the elements that are smaller than the pivot go to
the left and the elements that are greater than the pivot go to the right), the
result is that every time the pivot is chosen, the rest of the list stays where it
is and nothing changes. So the number of recursive calls the algorithm
has to make in this case equals the length of the input, which is 5, and
this causes the runtime to be O(n^2). However, if we choose the
pivot randomly, we can effectively avoid this problem. For example, if we
happen to pick 3 as the first pivot, the list splits evenly into [1, 2] and [4, 5],
so the recursion only needs about lg n levels, and this saves a dramatic
amount of running time. From this lecture
I learned that sometimes randomizing ordered items is also an alternative way
to solve a problem.
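A minimal sketch of the randomized version (again my own code, using the same partition idea as above):

```python
import random

def randomized_quick_sort(lst):
    """Quick sort with a randomly chosen pivot.

    Because the pivot is random, no single input order is reliably bad,
    so the expected runtime is O(n lg n) for any input, including a
    list that is already sorted.
    """
    if len(lst) < 2:
        return lst
    pivot = random.choice(lst)  # random choice defeats the sorted-input worst case
    smaller = [x for x in lst if x < pivot]
    equal = [x for x in lst if x == pivot]
    larger = [x for x in lst if x > pivot]
    return randomized_quick_sort(smaller) + equal + randomized_quick_sort(larger)
```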
Also, in Dustin's lecture, we discussed a harder
problem on efficiency. The problem is as follows.
f1 has a worst-case runtime of O(n lg n) and
f2 has a worst-case runtime of O(n^2).
def g(n):
    return f1(f2(n))

def f(n):
    return f2(f1(n))
Discuss the runtime for the best case and
the worst case for function g(n) and function f(n).
Because this was the first time I had
encountered a question that asks for the runtime of two composed
functions, I had no idea at first. But after Dustin gave us some hints, I found out that
for g(n), f2(n) must be evaluated first, and f2(n)'s runtime in the
worst case is O(n^2), so even the best case for g(n) is
O(n^2). As for the worst case of g(n): since f2(n) takes O(n^2) steps, its output can be as
large as about n^2, so f1 then runs on an input of that size, and f1(f2(n))'s worst case
has a runtime of O(n^2 * log(n^2)), which simplifies to O(n^2 log n).
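To convince myself, I tried a toy version where f2 blows the input up to size n^2 and f1 does about m * lg m steps on an input of size m. Both functions here are made up purely for illustration; they just count their steps rather than doing real work:

```python
import math

def f2_steps(n):
    """Pretend f2 does about n^2 steps and produces an output of size n^2."""
    return n * n, n * n  # (steps taken, output size)

def f1_steps(m):
    """Pretend f1 does about m * lg m steps on an input of size m."""
    return m * math.log2(m) if m > 1 else 1

def g_steps(n):
    """Total work for g(n) = f1(f2(n)): run f2 first, then f1 on its output."""
    steps2, out_size = f2_steps(n)
    return steps2 + f1_steps(out_size)

# The f1 part dominates: about n^2 * log2(n^2) steps in total,
# which matches the O(n^2 log n) answer from lecture.
```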
Wish everybody good marks on the
finals!