Title: Algorithm
1. Algorithm
2. University of Florida, Dept. of Computer & Information Science & Engineering
COT 3100, Applications of Discrete Structures
Dr. Michael P. Frank
- Slides for a Course Based on the Text Discrete Mathematics and Its Applications (5th Edition) by Kenneth H. Rosen
3. Module 6: Orders of Growth
- Rosen 5th ed., 2.2
- 22 slides, 1 lecture
4. Orders of Growth (§1.8)
- For functions over numbers, we often need to know a rough measure of how fast a function grows.
- If f(x) is faster growing than g(x), then f(x) always eventually becomes larger than g(x) in the limit (for large enough values of x).
- Useful in engineering for showing that one design scales better or worse than another.
5. Orders of Growth - Motivation
- Suppose you are designing a web site to process user data (e.g., financial records).
- Suppose database program A takes f_A(n) = 30n+8 microseconds to process any n records, while program B takes f_B(n) = n²+1 microseconds to process the n records.
- Which program do you choose, knowing you'll want to support millions of users?
Answer: A
6. Visualizing Orders of Growth
- On a graph, as you go to the right, a faster-growing function eventually becomes larger.
[Graph: f_A(n) = 30n+8 vs. f_B(n) = n²+1; vertical axis: value of function; horizontal axis: increasing n.]
7. Concept of order of growth
- We say f_A(n) = 30n+8 is order n, or O(n). It is, at most, roughly proportional to n.
- f_B(n) = n²+1 is order n², or O(n²). It is roughly proportional to n².
- Any O(n²) function is faster-growing than any O(n) function.
- For large numbers of user records, the O(n²) function will always take more time.
8. Definition: O(g), at most order g
- Let g be any function R→R.
- Define "at most order g", written O(g), to be: {f: R→R | ∃c,k: ∀x>k: f(x) ≤ cg(x)}.
- "Beyond some point k, function f is at most a constant c times g (i.e., proportional to g)."
- "f is at most order g", or "f is O(g)", or "f = O(g)" all just mean that f ∈ O(g).
- Sometimes the phrase "at most" is omitted.
9. Points about the definition
- Note that f is O(g) so long as any values of c and k exist that satisfy the definition.
- But: the particular c, k values that make the statement true are not unique: any larger value of c and/or k will also work.
- You are not required to find the smallest c and k values that work. (Indeed, in some cases, there may be no smallest values!)
- However, you should prove that the values you choose do work.
10. Big-O Proof Examples
- Show that 30n+8 is O(n).
- Show ∃c,k: ∀n>k: 30n+8 ≤ cn.
- Let c = 31, k = 8. Assume n > k = 8. Then cn = 31n = 30n + n > 30n+8, so 30n+8 < cn.
- Show that n²+1 is O(n²).
- Show ∃c,k: ∀n>k: n²+1 ≤ cn².
- Let c = 2, k = 1. Assume n > 1. Then cn² = 2n² = n²+n² > n²+1, or n²+1 < cn².
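Witness pairs like these can also be spot-checked numerically. A minimal Python sketch (the `holds` helper is my own; a finite scan is evidence, not a proof):

```python
def holds(f, g, c, k, n_max=10_000):
    """Check f(n) <= c*g(n) for all k < n <= n_max (a spot check, not a proof)."""
    return all(f(n) <= c * g(n) for n in range(k + 1, n_max + 1))

# 30n+8 is O(n), with witnesses c=31, k=8:
print(holds(lambda n: 30*n + 8, lambda n: n, c=31, k=8))      # True
# n^2+1 is O(n^2), with witnesses c=2, k=1:
print(holds(lambda n: n*n + 1, lambda n: n*n, c=2, k=1))      # True
```

A failed check (e.g., trying to show n² is O(n)) pinpoints a counterexample quickly, which is useful when hunting for valid c and k.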
11. Big-O example, graphically
- Note 30n+8 isn't less than n anywhere (n>0).
- It isn't even less than 31n everywhere.
- But it is less than 31n everywhere to the right of n = 8.
[Graph: the curves 30n+8 and n plotted against increasing n, illustrating 30n+8 ∈ O(n); vertical axis: value of function.]
12. Useful Facts about Big O
- Big O, as a relation, is transitive: f ∈ O(g) ∧ g ∈ O(h) → f ∈ O(h).
- O with constant multiples, roots, and logs: ∀ f (in Ω(1)) and constants a,b ∈ R with b ≥ 0: af, f^(1−b), and (log_b f)^a are all O(f).
- Sums of functions: If g ∈ O(f) and h ∈ O(f), then g+h ∈ O(f).
13. More Big-O facts
- ∀c > 0, O(cf) = O(f+c) = O(f−c) = O(f)
- f_1 ∈ O(g_1) ∧ f_2 ∈ O(g_2) →
  - f_1 f_2 ∈ O(g_1 g_2)
  - f_1 + f_2 ∈ O(g_1 + g_2) = O(max(g_1, g_2)) = O(g_1) if g_2 ∈ O(g_1) (Very useful!)
14. Orders of Growth (§1.8) - So Far
- For any g: R→R, "at most order g", O(g) ≡ {f: R→R | ∃c,k: ∀x>k: f(x) ≤ cg(x)}.
- Often, one deals only with positive functions and can ignore absolute value symbols.
- "f ∈ O(g)" is often written "f is O(g)" or "f = O(g)".
- The latter form is an instance of a more general convention...
15. Order-of-Growth Expressions
- "O(f)", when used as a term in an arithmetic expression, means: some function f′ such that f′ ∈ O(f).
- E.g.: "x² + O(x)" means "x² plus some function that is O(x)".
- Formally, you can think of any such expression as denoting a set of functions: "x² + O(x)" ≡ {g | ∃f ∈ O(x): g(x) = x² + f(x)}
16. Order of Growth Equations
- Suppose E_1 and E_2 are order-of-growth expressions corresponding to the sets of functions S and T, respectively.
- Then the "equation" E_1 = E_2 really means ∀f ∈ S, ∃g ∈ T: f = g, or simply S ⊆ T.
- Example: x² + O(x) = O(x²) means ∀f ∈ O(x): ∃g ∈ O(x²): x² + f(x) = g(x).
17. Useful Facts about Big O
- ∀ f,g and constants a,b ∈ R, with b ≥ 0:
  - af = O(f) (e.g., 3x² = O(x²))
  - f + O(f) = O(f) (e.g., x² + x = O(x²))
- Also, if f ∈ Ω(1) (at least order 1), then:
  - f^(1−b) = O(f) (e.g., x^(−1) = O(x))
  - (log_b f)^a = O(f) (e.g., log x = O(x))
  - g = O(fg) (e.g., x = O(x log x))
  - fg ≠ O(g) (e.g., x log x ≠ O(x))
  - a = O(f) (e.g., 3 = O(x))
18. Definition: Θ(g), exactly order g
- If f ∈ O(g) and g ∈ O(f), then we say "g and f are of the same order" or "f is (exactly) order g", and write f ∈ Θ(g).
- Another equivalent definition: Θ(g) ≡ {f: R→R | ∃c_1,c_2,k: ∀x>k: c_1 g(x) ≤ f(x) ≤ c_2 g(x)}
- "Everywhere beyond some point k, f(x) lies in between two multiples of g(x)."
19. Rules for Θ
- Mostly like rules for O( ), except:
- ∀ f,g > 0 and constants a,b ∈ R, with b > 0:
  - af ∈ Θ(f)   ← Same as with O.
  - f ∉ Θ(fg) unless g ∈ Θ(1)   ← Unlike O.
  - f^(1−b) ∉ Θ(f), and   ← Unlike with O.
  - (log_b f)^c ∉ Θ(f).   ← Unlike with O.
- The functions in the latter two cases we say are of strictly lower order than Θ(f).
20. Θ example
- Determine whether: [formula not transcribed]
- Quick solution: [not transcribed]
21. Other Order-of-Growth Relations
- Ω(g) = {f | g ∈ O(f)}: "The functions that are at least order g."
- o(g) = {f | ∀c>0 ∃k ∀x>k: |f(x)| < |cg(x)|}: "The functions that are strictly lower order than g." o(g) ⊆ O(g) − Θ(g).
- ω(g) = {f | ∀c>0 ∃k ∀x>k: |cg(x)| < |f(x)|}: "The functions that are strictly higher order than g." ω(g) ⊆ Ω(g) − Θ(g).
22. Relations Between the Relations
- Subset relations between order-of-growth sets.
[Venn diagram over R→R: O(f) and Ω(f) overlap; their intersection is Θ(f), which contains f itself; o(f) lies within O(f) outside Θ(f); ω(f) lies within Ω(f) outside Θ(f).]
23. Why o(x) ≠ O(x) − Θ(x)
- [Graph:] A function that is O(x), but neither o(x) nor Θ(x): for example, x·|sin x|, which repeatedly touches both the line y = x and the axis y = 0.
24. Strict Ordering of Functions
- Temporarily, let's write f ≺ g to mean f ∈ o(g), and f ~ g to mean f ∈ Θ(g).
- Note that: [formula not transcribed]
- Let k > 1. Then the following are true: 1 ≺ log log n ≺ log n ~ log_k n ≺ log^k n ≺ n^(1/k) ≺ n ≺ n log n ≺ n^k ≺ k^n ≺ n! ≺ n^n.
25. Review: Orders of Growth (§1.8)
- Definitions of order-of-growth sets, ∀g: R→R:
- O(g) ≡ {f | ∃c>0 ∃k ∀x>k: |f(x)| < |cg(x)|}
- o(g) ≡ {f | ∀c>0 ∃k ∀x>k: |f(x)| < |cg(x)|}
- Ω(g) ≡ {f | g ∈ O(f)}
- ω(g) ≡ {f | g ∈ o(f)}
- Θ(g) ≡ O(g) ∩ Ω(g)
28. Module 7: Algorithmic Complexity
- Rosen 5th ed., 2.3
- 21 slides, 1 lecture
29. What is complexity?
- The word complexity has a variety of technical meanings in different fields.
- There is a field of complex systems, which studies complicated, difficult-to-analyze, non-linear, and chaotic natural & artificial systems.
- Another concept: informational complexity: the amount of information needed to completely describe an object. (An active research field.)
- We will study algorithmic complexity.
30. §2.2: Algorithmic Complexity
- The algorithmic complexity of a computation is some measure of how difficult it is to perform the computation.
- Measures some aspect of the cost of computation (in a general sense of "cost").
- Common complexity measures:
  - Time complexity: # of operations or steps required
  - Space complexity: # of memory bits required
31. An aside...
- Another, increasingly important measure of complexity for computing is energy complexity: how much total energy is used in performing the computation.
- Motivations: battery life, electricity cost...
- I develop reversible circuits & algorithms that recycle energy, trading off energy complexity for spacetime complexity.
32. Complexity Depends on Input
- Most algorithms have different complexities for inputs of different sizes. (E.g., searching a long list takes more time than searching a short one.)
- Therefore, complexity is usually expressed as a function of input length.
- This function usually gives the complexity for the worst-case input of any given length.
33. Complexity & Orders of Growth
- Suppose algorithm A has worst-case time complexity (w.c.t.c., or just time) f(n) for inputs of length n, while algorithm B (for the same task) takes time g(n).
- Suppose that f ∈ ω(g), i.e., f is of strictly higher order than g.
- Which algorithm will be fastest on all sufficiently-large, worst-case inputs?
Answer: B
34. Example 1: Max algorithm
- Problem: Find the simplest form of the exact order of growth (Θ) of the worst-case time complexity (w.c.t.c.) of the max algorithm, assuming that each line of code takes some constant time every time it is executed (with possibly different times for different lines of code).
35. Complexity analysis of max
- procedure max(a_1, a_2, …, a_n: integers)
-   v := a_1                      {t_1}
-   for i := 2 to n               {t_2}
-     if a_i > v then v := a_i    {t_3}
-   return v                      {t_4}
- What's an expression for the exact total worst-case time? (Not its order of growth.)
(t_1, …, t_4 are the times for each execution of each line.)
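The pseudocode above translates almost line-for-line into Python; a sketch (the function name `find_max` is my own):

```python
def find_max(a):
    """Return the largest element of a nonempty list, scanning left to right."""
    v = a[0]                 # v := a_1          (t_1, executed once)
    for x in a[1:]:          # for i := 2 to n   (t_2, n-1 iterations)
        if x > v:            # if a_i > v        (t_3, n-1 tests)
            v = x            #   then v := a_i
    return v                 # return v          (t_4, executed once)

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```

In the worst case (a strictly increasing list) the assignment inside the `if` also runs on every iteration, which is the case the slide's analysis charges for.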
36. Complexity analysis, cont.
- procedure max(a_1, a_2, …, a_n: integers)
-   v := a_1                      {t_1}
-   for i := 2 to n               {t_2}
-     if a_i > v then v := a_i    {t_3}
-   return v                      {t_4}
- w.c.t.c.: t(n) = t_1 + (n−1)(t_2 + t_3) + t_4 (in the worst case, lines 2 and 3 each execute n−1 times)
(t_1, …, t_4 are the times for each execution of each line.)
37. Complexity analysis, cont.
- Now, what is the simplest form of the exact (Θ) order of growth of t(n)?
- Since t(n) is a linear function of n with positive slope, t(n) ∈ Θ(n).
38. Example 2: Linear Search
- procedure linear search (x: integer; a_1, a_2, …, a_n: distinct integers)
    i := 1                          {t_1}
    while (i ≤ n ∧ x ≠ a_i)         {t_2}
      i := i + 1                    {t_3}
    if i ≤ n then location := i     {t_4}
    else location := 0              {t_5}
    return location                 {t_6}
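A Python rendering of the procedure, keeping the slide's 1-based indexing and its 0-for-not-found convention (a sketch, not the book's code):

```python
def linear_search(x, a):
    """Return the 1-based location of x in list a, or 0 if x is absent."""
    i = 1
    while i <= len(a) and x != a[i - 1]:   # t_2: the loop test
        i = i + 1                          # t_3
    return i if i <= len(a) else 0         # t_4 / t_5

print(linear_search(5, [3, 5, 7]))  # 2
print(linear_search(4, [3, 5, 7]))  # 0
```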
39. Linear search analysis
- Worst case time complexity order: Θ(n) (the item is last, or absent)
- Best case: Θ(1) (the item is first)
- Average case, if item is present: Θ(n) (about n/2 comparisons on average)
40. Review §2.2: Complexity
- Algorithmic complexity = cost of computation.
- Focus on time complexity (space & energy are also important).
- Characterize complexity as a function of input size: worst-case, best-case, average-case.
- Use orders-of-growth notation to concisely summarize growth properties of complexity functions.
41. Example 3: Binary Search
- procedure binary search (x: integer; a_1, a_2, …, a_n: distinct integers, sorted in increasing order)
    i := 1,  j := n                                    {Θ(1)}
    while i < j begin
      m := ⌊(i+j)/2⌋                                   {Θ(1) per iteration}
      if x > a_m then i := m+1 else j := m
    end
    if x = a_i then location := i else location := 0   {Θ(1)}
    return location
Key question: How many loop iterations?
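The same loop in Python, preserving the slide's i, j, m variables and 1-based locations (a sketch; note that the list must be sorted):

```python
def binary_search(x, a):
    """Return the 1-based location of x in sorted list a, or 0 if absent."""
    if not a:
        return 0
    i, j = 1, len(a)
    while i < j:
        m = (i + j) // 2        # m := floor((i+j)/2)
        if x > a[m - 1]:
            i = m + 1
        else:
            j = m
    return i if x == a[i - 1] else 0

print(binary_search(5, [1, 3, 5, 7, 9]))  # 3
print(binary_search(2, [1, 3, 5, 7, 9]))  # 0
```

Each iteration halves the range j−i+1, which is exactly the observation the next slide turns into the Θ(log n) bound.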
42. Binary search analysis
- Suppose n = 2^k.
- The original range from i = 1 to j = n contains n elements.
- Each iteration: the size j−i+1 of the range is cut in half.
- The loop terminates when the size of the range is 1 = 2^0 (i = j).
- Therefore, the number of iterations is k = log_2 n: the algorithm is Θ(log_2 n) = Θ(log n).
- Even for n ≠ 2^k (not an integral power of 2), the time complexity is still Θ(log_2 n) = Θ(log n).
43. Names for some orders of growth
- Θ(1): Constant
- Θ(log_c n): Logarithmic (same order ∀c)
- Θ(log^c n): Polylogarithmic
- Θ(n): Linear
- Θ(n^c): Polynomial
- Θ(c^n), c > 1: Exponential
- Θ(n!): Factorial
(With c a constant.)
44. Problem Complexity
- The complexity of a computational problem or task is (the order of growth of) the complexity of the algorithm with the lowest order of growth of complexity for solving that problem or performing that task.
- E.g., the problem of searching an ordered list has at most logarithmic time complexity. (Its complexity is O(log n).)
45. Tractable vs. intractable
- A problem or algorithm with at most polynomial time complexity is considered tractable (or feasible). P is the set of all tractable problems.
- A problem or algorithm that has more than polynomial complexity is considered intractable (or infeasible).
- Note that n^1,000,000 is technically tractable, but really impossible, while n^(log log log n) is technically intractable, but easy. Such cases are rare, though.
46. Unsolvable problems
- Alan Turing discovered in the 1930s that there are problems unsolvable by any algorithm.
- Or equivalently, there are undecidable yes/no questions and uncomputable functions.
- Example: the halting problem.
- Given an arbitrary algorithm and its input, will that algorithm eventually halt, or will it continue forever in an infinite loop?
47. P vs. NP
- NP is the set of problems for which there exists a tractable algorithm for checking solutions to see if they are correct.
- We know P ⊆ NP, but the most famous unproven conjecture in computer science is that this inclusion is proper (i.e., that P ≠ NP rather than P = NP).
- Whoever first proves it will be famous!
48. Computer Time Examples
- Assume time = 1 ns (10^−9 second) per op, problem size = n bits, and # of ops a function of n as shown.
[Table of running times not transcribed; the annotations "(1.25 bytes)" and "(125 kB)" refer to entries in it.]
49. Things to Know
- Definitions of algorithmic complexity, time complexity, worst-case complexity; names of orders of growth of complexity.
- How to analyze the worst-case, best-case, or average-case order of growth of time complexity for simple algorithms.
52. Module 14: Recursion
- Rosen 5th ed., 3.4-3.5
- 18 slides, 1 lecture
53. §3.4: Recursive Definitions
- In induction, we prove that all members of an infinite set have some property P by proving the truth for larger members in terms of that of smaller members.
- In recursive definitions, we similarly define a function, a predicate, or a set over an infinite number of elements by defining the function or predicate value or set-membership of larger elements in terms of that of smaller ones.
54. Recursion
- Recursion is a general term for the practice of defining an object in terms of itself (or of part of itself).
- An inductive proof establishes the truth of P(n+1) recursively in terms of P(n).
- There are also recursive algorithms, definitions, functions, sequences, and sets.
55. Recursively Defined Functions
- Simplest case: One way to define a function f: N→S (for any set S) or series a_n = f(n) is to:
  - Define f(0).
  - For n > 0, define f(n) in terms of f(0), …, f(n−1).
- E.g.: Define the series a_n ≡ 2^n recursively:
  - Let a_0 = 1.
  - For n > 0, let a_n = 2a_{n−1}.
56. Another Example
- Suppose we define f(n) for all n ∈ N recursively by:
  - Let f(0) = 3.
  - For all n ∈ N, let f(n+1) = 2f(n) + 3.
- What are the values of the following?
  - f(1) = 9, f(2) = 21, f(3) = 45, f(4) = 93
57. Recursive definition of Factorial
- Give an inductive definition of the factorial function F(n) ≡ n! ≡ 1·2·3·…·n.
- Base case: F(0) ≡ 1
- Recursive part: F(n) ≡ n · F(n−1).
  - F(1) = 1
  - F(2) = 2
  - F(3) = 6
58. The Fibonacci Series
- The Fibonacci series {f_n}, n ≥ 0, is a famous series defined by: f_0 ≡ 0, f_1 ≡ 1, and f_n ≡ f_{n−1} + f_{n−2} for n ≥ 2.
- 0, 1, 1, 2, 3, 5, 8, 13, …
(Leonardo Fibonacci, 1170–1250)
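A quick iterative Python version of the definition (my own rendering; the naive doubly-recursive version would take exponential time):

```python
def fib(n):
    """Iterative Fibonacci: f_0 = 0, f_1 = 1, f_n = f_{n-1} + f_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```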
59. Inductive Proof about Fib. series
- Theorem: f_n < 2^n. (Implicitly, for all n ∈ N.)
- Proof: By induction.
- Base cases: f_0 = 0 < 2^0 = 1; f_1 = 1 < 2^1 = 2.
- Inductive step: Use the 2nd principle of induction (strong induction). Assume ∀k < n, f_k < 2^k. Then f_n = f_{n−1} + f_{n−2} < 2^{n−1} + 2^{n−2} < 2^{n−1} + 2^{n−1} = 2^n. ∎
(Note the use of the base cases of the recursive definition.)
60. Recursively Defined Sets
- An infinite set S may be defined recursively, by giving:
  - A small finite set of base elements of S.
  - A rule for constructing new elements of S from previously-established elements.
  - Implicitly, S has no other elements but these.
- Example: Let 3 ∈ S, and let x+y ∈ S if x,y ∈ S. What is S?
61. The Set of All Strings
- Given an alphabet Σ, the set Σ* of all strings over Σ can be recursively defined as:
  - ε ∈ Σ*   (ε ≡ "", the empty string; the book uses λ)
  - w ∈ Σ* ∧ x ∈ Σ → wx ∈ Σ*
- Exercise: Prove that this definition is equivalent to our old one.
62. Recursive Algorithms (§3.5)
- Recursive definitions can be used to describe algorithms as well as functions and sets.
- Example: A procedure to compute a^n.
- procedure power(a ≠ 0: real, n ∈ N)
    if n = 0 then return 1
    else return a · power(a, n−1)
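In Python (a direct transcription; Python's recursion depth limit makes this impractical for very large n):

```python
def power(a, n):
    """Recursively compute a**n for a != 0 and natural n, as on the slide."""
    if n == 0:
        return 1
    return a * power(a, n - 1)

print(power(2, 10))  # 1024
```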
63. Efficiency of Recursive Algorithms
- The time complexity of a recursive algorithm may depend critically on the number of recursive calls it makes.
- Example: Modular exponentiation to a power n can take log(n) time if done right, but linear time if done slightly differently.
- Task: Compute b^n mod m, where m ≥ 2, n ≥ 0, and 1 ≤ b < m.
64. Modular Exponentiation Alg. #1
- Uses the fact that b^n = b·b^{n−1} and that xy mod m = x(y mod m) mod m. (Prove the latter theorem at home.)
- procedure mpower(b ≥ 1, n ≥ 0, m > b: elements of N)
    {Returns b^n mod m.}
    if n = 0 then return 1
    else return (b · mpower(b, n−1, m)) mod m
- Note: this algorithm takes Θ(n) steps!
65. Modular Exponentiation Alg. #2
- Uses the fact that b^{2k} = (b^k)².
- procedure mpower(b, n, m)   {same signature}
    if n = 0 then return 1
    else if 2|n then return mpower(b, n/2, m)² mod m
    else return (mpower(b, n−1, m) · b) mod m
- What is its time complexity? Θ(log n) steps.
66. A Slight Variation
- Nearly identical, but takes Θ(n) time instead!
- procedure mpower(b, n, m)   {same signature}
    if n = 0 then return 1
    else if 2|n then return (mpower(b, n/2, m) · mpower(b, n/2, m)) mod m
    else return (mpower(b, n−1, m) · b) mod m
The number of recursive calls made is critical.
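The three procedures can be compared side by side in Python; a sketch (the function names are my own) that checks them against Python's built-in three-argument pow:

```python
def mpower_slow(b, n, m):
    """Alg. #1: one recursive call per unit of n -> Theta(n) calls."""
    if n == 0:
        return 1
    return (b * mpower_slow(b, n - 1, m)) % m

def mpower_fast(b, n, m):
    """Alg. #2: halve n when it is even -> Theta(log n) calls."""
    if n == 0:
        return 1
    if n % 2 == 0:
        return mpower_fast(b, n // 2, m) ** 2 % m
    return (mpower_fast(b, n - 1, m) * b) % m

def mpower_bad(b, n, m):
    """The variation: two recursive calls on even n -> Theta(n) work again."""
    if n == 0:
        return 1
    if n % 2 == 0:
        return (mpower_bad(b, n // 2, m) * mpower_bad(b, n // 2, m)) % m
    return (mpower_bad(b, n - 1, m) * b) % m

# All three agree with the built-in; only their call counts differ.
assert mpower_slow(3, 100, 7) == mpower_fast(3, 100, 7) \
       == mpower_bad(3, 100, 7) == pow(3, 100, 7)
```

Instrumenting each function with a call counter makes the Θ(n) vs. Θ(log n) difference visible directly.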
67. Recursive Euclid's Algorithm
- procedure gcd(a, b ∈ N)
    if a = 0 then return b
    else return gcd(b mod a, a)
- Note: recursive algorithms are often simpler to code than iterative ones.
- However, they can consume more stack space, if your compiler is not smart enough.
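In Python (a direct transcription of the slide's procedure):

```python
def gcd(a, b):
    """Euclid's algorithm, recursively: gcd(a, b) = gcd(b mod a, a)."""
    if a == 0:
        return b
    return gcd(b % a, a)

print(gcd(12, 18))  # 6
```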
68. Merge Sort
- procedure sort(L = ℓ_1, …, ℓ_n)
    if n > 1 then
      m := ⌊n/2⌋   {this is the rough ½-way point}
      L := merge(sort(ℓ_1, …, ℓ_m), sort(ℓ_{m+1}, …, ℓ_n))
    return L
- The merge takes Θ(n) steps, and merge-sort takes Θ(n log n).
69. Merge Routine
- procedure merge(A, B: sorted lists)
    L := empty list
    i := 0, j := 0, k := 0
    while i < |A| ∨ j < |B|   {|A| is the length of A}
      if i = |A| then L_k := B_j; j := j + 1
      else if j = |B| then L_k := A_i; i := i + 1
      else if A_i < B_j then L_k := A_i; i := i + 1
      else L_k := B_j; j := j + 1
      k := k + 1
    return L
- Takes Θ(|A|+|B|) time.
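Both procedures in Python (a sketch using 0-based indexing instead of the slide's subscripts):

```python
def merge(A, B):
    """Merge two sorted lists into one sorted list in Theta(|A|+|B|) time."""
    L, i, j = [], 0, 0
    while i < len(A) or j < len(B):
        if i == len(A):                     # A exhausted: take from B
            L.append(B[j]); j += 1
        elif j == len(B) or A[i] < B[j]:    # B exhausted, or A's head smaller
            L.append(A[i]); i += 1
        else:
            L.append(B[j]); j += 1
    return L

def merge_sort(L):
    """Theta(n log n) merge sort, as on the slide."""
    if len(L) <= 1:
        return L
    m = len(L) // 2                         # rough half-way point
    return merge(merge_sort(L[:m]), merge_sort(L[m:]))

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```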
72. Module 17: Recurrence Relations
- Rosen 5th ed., §6.1–6.3
- 29 slides, 1.5 lectures
73. §6.1: Recurrence Relations
- A recurrence relation (R.R., or just recurrence) for a sequence {a_n} is an equation that expresses a_n in terms of one or more previous elements a_0, …, a_{n−1} of the sequence, for all n ≥ n_0.
- I.e., a recursive definition, without the base cases.
- A particular sequence (described non-recursively) is said to solve the given recurrence relation if it is consistent with the definition of the recurrence.
- A given recurrence relation may have many solutions.
74. Recurrence Relation Example
- Consider the recurrence relation a_n = 2a_{n−1} − a_{n−2} (n ≥ 2).
- Which of the following are solutions?
  - a_n = 3n → Yes
  - a_n = 2^n → No
  - a_n = 5 → Yes
75. Example Applications
- Recurrence relation for the growth of a bank account with P% interest per given period: M_n = M_{n−1} + (P/100)M_{n−1}
- Growth of a population in which each organism yields 1 new one every period, starting 2 periods after its birth: P_n = P_{n−1} + P_{n−2} (Fibonacci relation)
76. Solving Compound Interest RR
- M_n = M_{n−1} + (P/100)M_{n−1}
      = (1 + P/100) M_{n−1}
      = r M_{n−1}   (let r = 1 + P/100)
      = r (r M_{n−2})
      = r·r·(r M_{n−3})   …and so on to…
      = r^n M_0
77. Tower of Hanoi Example
- Problem: Get all disks from peg 1 to peg 2.
- Only move 1 disk at a time.
- Never set a larger disk on a smaller one.
[Figure: three pegs (Peg 1, Peg 2, Peg 3), with the stack of disks on Peg 1.]
78. Hanoi Recurrence Relation
- Let H_n = # of moves for a stack of n disks.
- Optimal strategy:
  - Move the top n−1 disks to the spare peg. (H_{n−1} moves)
  - Move the bottom disk. (1 move)
  - Move the top n−1 disks back onto the bottom disk. (H_{n−1} moves)
- Note: H_n = 2H_{n−1} + 1
79. Solving Tower of Hanoi RR
- H_n = 2 H_{n−1} + 1
      = 2 (2 H_{n−2} + 1) + 1 = 2² H_{n−2} + 2 + 1
      = 2² (2 H_{n−3} + 1) + 2 + 1 = 2³ H_{n−3} + 2² + 2 + 1
      = …
      = 2^{n−1} H_1 + 2^{n−2} + … + 2 + 1
      = 2^{n−1} + 2^{n−2} + … + 2 + 1   (since H_1 = 1)
      = 2^n − 1
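The recurrence and its closed form can be checked against each other in Python (the base case H_0 = 0 is my addition, consistent with H_1 = 1):

```python
def hanoi_moves(n):
    """Moves used by the optimal strategy: H(n) = 2*H(n-1) + 1, H(0) = 0."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1

# The closed form matches the recurrence for every n checked.
assert all(hanoi_moves(n) == 2**n - 1 for n in range(15))
print(hanoi_moves(3))  # 7
```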
80. §6.2: Solving Recurrences
General Solution Schemas
- A linear homogeneous recurrence of degree k with constant coefficients ("k-LiHoReCoCo") is a recurrence of the form a_n = c_1 a_{n−1} + … + c_k a_{n−k}, where the c_i are all real and c_k ≠ 0.
- The solution is uniquely determined if k initial conditions a_0, …, a_{k−1} are provided.
81. Solving LiHoReCoCos
- Basic idea: Look for solutions of the form a_n = r^n, where r is a constant.
- This requires the characteristic equation: r^n = c_1 r^{n−1} + … + c_k r^{n−k}, i.e., r^k − c_1 r^{k−1} − … − c_k = 0.
- The solutions (the characteristic roots) can yield an explicit formula for the sequence.
82. Solving 2-LiHoReCoCos
- Consider an arbitrary 2-LiHoReCoCo: a_n = c_1 a_{n−1} + c_2 a_{n−2}
- It has the characteristic equation (C.E.): r² − c_1 r − c_2 = 0
- Thm. 1: If this C.E. has 2 roots r_1 ≠ r_2, then a_n = α_1 r_1^n + α_2 r_2^n for n ≥ 0, for some constants α_1, α_2.
83. Example
- Solve the recurrence a_n = a_{n−1} + 2a_{n−2} given the initial conditions a_0 = 2, a_1 = 7.
- Solution: Use Theorem 1.
- c_1 = 1, c_2 = 2
- Characteristic equation: r² − r − 2 = 0
- Solutions: r = [−(−1) ± ((−1)² − 4·1·(−2))^{1/2}] / (2·1) = (1 ± 9^{1/2})/2 = (1 ± 3)/2, so r = 2 or r = −1.
- So a_n = α_1 2^n + α_2 (−1)^n.
84. Example Continued
- To find α_1 and α_2, solve the equations for the initial conditions a_0 and a_1:
    a_0 = 2 = α_1 2^0 + α_2 (−1)^0
    a_1 = 7 = α_1 2^1 + α_2 (−1)^1
- Simplifying, we have the pair of equations 2 = α_1 + α_2 and 7 = 2α_1 − α_2, which we can solve easily by substitution:
    α_2 = 2 − α_1;  7 = 2α_1 − (2 − α_1) = 3α_1 − 2;
    9 = 3α_1;  α_1 = 3;  α_2 = −1.
- Final answer: a_n = 3·2^n − (−1)^n
Check: {a_n}, n ≥ 0: 2, 7, 11, 25, 47, 97, …
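This check is easy to automate; a Python sketch comparing the recurrence to the closed form (the names are my own):

```python
def a_seq(count):
    """First `count` terms of a_n = a_{n-1} + 2*a_{n-2}, with a_0 = 2, a_1 = 7."""
    seq = [2, 7]
    while len(seq) < count:
        seq.append(seq[-1] + 2 * seq[-2])
    return seq[:count]

closed = [3 * 2**n - (-1)**n for n in range(20)]   # a_n = 3*2^n - (-1)^n
assert a_seq(20) == closed
print(a_seq(6))  # [2, 7, 11, 25, 47, 97]
```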
85. The Case of Degenerate Roots
- Now, what if the C.E. r² − c_1 r − c_2 = 0 has only 1 root r_0?
- Theorem 2: Then a_n = α_1 r_0^n + α_2 n r_0^n, for all n ≥ 0, for some constants α_1, α_2.
86. k-LiHoReCoCos
- Consider a k-LiHoReCoCo: a_n = Σ_{i=1..k} c_i a_{n−i}.
- Its C.E. is: r^k − Σ_{i=1..k} c_i r^{k−i} = 0.
- Thm. 3: If this has k distinct roots r_i, then the solutions to the recurrence are of the form a_n = Σ_{i=1..k} α_i r_i^n for all n ≥ 0, where the α_i are constants.
87. Degenerate k-LiHoReCoCos
- Suppose there are t roots r_1, …, r_t with multiplicities m_1, …, m_t. Then a_n = Σ_{i=1..t} (Σ_{j=0..m_i−1} α_{i,j} n^j) r_i^n for all n ≥ 0, where all the α_{i,j} are constants.
88. LiNoReCoCos
- Linear nonhomogeneous RRs with constant coefficients may (unlike LiHoReCoCos) contain some terms F(n) that depend only on n (and not on any a_i's). General form:
    a_n = c_1 a_{n−1} + … + c_k a_{n−k} + F(n)
- The same recurrence with the F(n) term removed is the associated homogeneous recurrence relation (associated LiHoReCoCo).
89. Solutions of LiNoReCoCos
- A useful theorem about LiNoReCoCos:
  - If a_n = p(n) is any particular solution to the LiNoReCoCo,
  - then all its solutions are of the form a_n = p(n) + h(n), where a_n = h(n) is any solution to the associated homogeneous RR.
90. Example
- Find all solutions to a_n = 3a_{n−1} + 2n. Which solution has a_1 = 3?
- Notice this is a 1-LiNoReCoCo. Its associated 1-LiHoReCoCo is a_n = 3a_{n−1}, whose solutions are all of the form a_n = α·3^n. Thus the solutions to the original problem are all of the form a_n = p(n) + α·3^n. So, all we need to do is find one p(n) that works.
91. Trial Solutions
- If the extra terms F(n) are a degree-t polynomial in n, you should try a degree-t polynomial as the particular solution p(n).
- This case: F(n) is linear, so try a_n = cn + d.
    cn + d = 3(c(n−1) + d) + 2n   (for all n)
    (−2c − 2)n + (3c − 2d) = 0   (collect terms)
    So c = −1 and d = −3/2.
- So a_n = −n − 3/2 is a solution.
- Check: {a_n}, n ≥ 1: −5/2, −7/2, −9/2, …
92. Finding a Desired Solution
- From the previous, we know that all general solutions to our example are of the form:
    a_n = −n − 3/2 + α·3^n.
- Solve this for α for the given case, a_1 = 3:
    3 = −1 − 3/2 + α·3^1
    α = 11/6
- The answer is a_n = −n − 3/2 + (11/6)·3^n.
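Exact rational arithmetic makes the verification clean; a Python sketch using the standard-library fractions module:

```python
from fractions import Fraction

def closed(n):
    """a_n = -n - 3/2 + (11/6) * 3**n, computed exactly."""
    return -n - Fraction(3, 2) + Fraction(11, 6) * 3**n

# It satisfies the recurrence a_n = 3*a_{n-1} + 2n and the condition a_1 = 3.
assert closed(1) == 3
assert all(closed(n) == 3 * closed(n - 1) + 2 * n for n in range(2, 20))
```

Using Fraction avoids the floating-point rounding that would otherwise make the equality checks unreliable for large n.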
93. §5.3: Divide & Conquer R.R.s
- Main points so far:
- Many types of problems are solvable by reducing a problem of size n into some number a of independent subproblems, each of size ≤ ⌈n/b⌉, where a ≥ 1 and b > 1.
- The time complexity to solve such problems is given by a recurrence relation:
    T(n) = a·T(⌈n/b⌉) + g(n)
94. Divide & Conquer Examples
- Binary search: Break the list into 1 sub-problem (a smaller list) (so a = 1) of size ≤ ⌈n/2⌉ (so b = 2).
- So T(n) = T(⌈n/2⌉) + c   (g(n) = c, a constant)
- Merge sort: Break a list of length n into 2 sublists (a = 2), each of size ≤ ⌈n/2⌉ (so b = 2), then merge them, in g(n) = Θ(n) time.
- So T(n) = 2T(⌈n/2⌉) + cn   (roughly, for some c)
95. Fast Multiplication Example
- The ordinary grade-school algorithm takes Θ(n²) steps to multiply two n-digit numbers.
- This seems like too much work!
- So, let's find an asymptotically faster multiplication algorithm!
- To find the product cd of two 2n-digit, base-b numbers, c = (c_{2n−1} c_{2n−2} … c_0)_b and d = (d_{2n−1} d_{2n−2} … d_0)_b, first we break c and d in half: c = b^n C_1 + C_0, d = b^n D_1 + D_0, and then... (see next slide)
96. Derivation of Fast Multiplication
    cd = (b^n C_1 + C_0)(b^n D_1 + D_0)
       = b^{2n} C_1 D_1 + b^n (C_1 D_0 + C_0 D_1) + C_0 D_0   (Multiply out polynomials)
       = b^{2n} C_1 D_1 + b^n [(C_1 + C_0)(D_1 + D_0) − C_1 D_1 − C_0 D_0] + C_0 D_0   (Factor last polynomial)
- The rewritten middle term needs only the three products C_1 D_1, C_0 D_0, and (C_1 + C_0)(D_1 + D_0), instead of four.
97. Recurrence Rel. for Fast Mult.
- Notice that the time complexity T(n) of the fast multiplication algorithm obeys the recurrence:
    T(2n) = 3T(n) + Θ(n), i.e.,
    T(n) = 3T(n/2) + Θ(n)
- So a = 3, b = 2.
(The Θ(n) term is the time to do the needed adds & subtracts of n-digit and 2n-digit numbers.)
98. The Master Theorem
- Consider a function f(n) that, for all n = b^k, k ∈ Z⁺, satisfies the recurrence relation:
    f(n) = a·f(n/b) + c·n^d
  with a ≥ 1, integer b > 1, real c > 0, d ≥ 0. Then:
    f(n) ∈ O(n^d)            if a < b^d
    f(n) ∈ O(n^d log n)      if a = b^d
    f(n) ∈ O(n^{log_b a})    if a > b^d
99. Master Theorem Example
- Recall that the complexity of fast multiply was: T(n) = 3T(n/2) + Θ(n)
- Thus, a = 3, b = 2, d = 1. So a > b^d, so case 3 of the master theorem applies, so:
    T(n) ∈ O(n^{log_b a}) = O(n^{log_2 3}),
  which is O(n^1.58), so the new algorithm is strictly faster than ordinary Θ(n²) multiply!
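The derivation on the preceding slides corresponds to Karatsuba's algorithm. A Python sketch for nonnegative integers, splitting in base 2 rather than the slides' general base b (the threshold and names are my own choices):

```python
def karatsuba(c, d):
    """Multiply nonnegative ints with 3 recursive products instead of 4."""
    if c < 16 or d < 16:                    # small operands: multiply directly
        return c * d
    n = max(c.bit_length(), d.bit_length()) // 2
    C1, C0 = c >> n, c & ((1 << n) - 1)     # c = 2**n * C1 + C0
    D1, D0 = d >> n, d & ((1 << n) - 1)     # d = 2**n * D1 + D0
    hi = karatsuba(C1, D1)
    lo = karatsuba(C0, D0)
    mid = karatsuba(C1 + C0, D1 + D0) - hi - lo   # = C1*D0 + C0*D1
    return (hi << 2 * n) + (mid << n) + lo

x, y = 12345678901234567890, 98765432109876543210
assert karatsuba(x, y) == x * y
```

The `mid` line is exactly the slide-96 factoring trick: one multiplication of the half-sums replaces the two cross products, giving the T(n) = 3T(n/2) + Θ(n) recurrence.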
100. §6.4: Generating Functions
- Not covered this semester.
101. §6.5: Inclusion-Exclusion
- This topic will have been covered out-of-order already in Module 15, Combinatorics.
- As for Section 6.6, Applications of Inclusion-Exclusion: no slides yet.