Title: Introduction to Complexity Analysis
1. Introduction to Complexity Analysis
- Motivation
- Average, Best, and Worst Case Complexity Analysis
- Asymptotic Analysis
2. Motivations for Complexity Analysis
- There are often many different algorithms which can be used to solve the same problem.
  - For example, assume that we want to search for a key in a sorted array.
- Thus, it makes sense to develop techniques that allow us to
  - compare different algorithms with respect to their efficiency
  - choose the most efficient algorithm for the problem
- The efficiency of any algorithmic solution to a problem can be measured according to the
  - Time efficiency: the time it takes to execute.
  - Space efficiency: the space (primary or secondary memory) it uses.
- We will focus on an algorithm's efficiency with respect to time.
3. How Do We Measure Efficiency?
- Running time in micro/milliseconds?
  - Advantages
  - Disadvantages
- Instead,
  - we count the number of basic operations the algorithm performs.
  - we calculate how this number depends on the size of the input.
- A basic operation is an operation which takes a constant amount of time to execute.
  - E.g., additions, subtractions, multiplications, comparisons, etc.
- Hence, the efficiency of an algorithm is determined in terms of the number of basic operations it performs. This number is most useful when expressed as a function of the input size n.
4. Examples of Basic Operations
- Arithmetic operations: +, -, *, /
- Assignment statements of simple data types
- Reading of primitive types
- Writing of primitive types
- Simple conditional tests: if (x < 12) ...
- A method call (Note: the execution time of the method itself may depend on the value of a parameter and it may not be constant)
- A method's return statement
- Memory access
- We consider a compound assignment operation such as +=, -=, and *= as consisting of two basic operations.
- Note: To simplify complexity analysis we will not consider memory access (fetch or store) operations. (A short counting example follows this list.)
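For instance, here is a small counting example (an illustrative sketch, not taken from the slides), with the basic operations of each statement noted in comments:

public class BasicOpCount {
    public static void main(String[] args) {
        int x = 2;              // 1 assignment                          -> 1 op
        int y = 3 * x + 2;      // 1 multiplication, 1 addition,
                                // 1 assignment                          -> 3 ops
        if (y < 12) {           // 1 comparison                          -> 1 op
            y += 1;             // compound assignment counted as two    -> 2 ops
        }
        System.out.println(y);  // 1 method call + 1 write               -> 2 ops
        // Total (memory accesses ignored): 9 basic operations
    }
}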
5. Simple Complexity Analysis
public class TestAbstractClass {
    public static void main(String[] args) {
        Employee[] list = new Employee[3];
        list[0] = new Executive("Jarallah Al-Ghamdi", 50000);
        list[1] = new HourlyEmployee("Azmat Ansari", 120);
        list[2] = new MonthlyEmployee("Sahalu Junaidu", 9000);
        ((Executive) list[0]).awardBonus(11000);
        for (int i = 0; i < list.length; i++)
            if (list[i] instanceof HourlyEmployee)
                ((HourlyEmployee) list[i]).addHours(60);
        for (int i = 0; i < list.length; i++) {
            list[i].print();
            System.out.println("Paid " + list[i].pay());
            System.out.println("\n");
        }
    }
}
- Number of basic operations:
- Assignment statements
- Additions
- Print statements
- Method calls
6. Simple Complexity Analysis
- Counting the number of all basic operations is cumbersome.
- It is not important to count the number of all basic operations.
- Instead, we count/find the number of times that the most frequently executed statement is executed.
- E.g., find the number of element comparisons.
7. Simple Complexity Analysis: Simple Loops
- How to find the cost of a single unnested loop:
  - Find the cost of the body of the loop.
  - In the case below, consider the number of multiplications.
  - Find the number of iterations of the for loop.
  - Multiply the two numbers to get the total (worked out after the code below).
- E.g.:
double x, y;
x = 2.5;  y = 3.0;
for (int i = 0; i < n; i++) {
    a[i] = x * y;
    x = 2.5 * x;  y = y + a[i];
}
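Applying the recipe: the loop body performs 2 multiplications and the loop iterates n times, so the total number of multiplications is

\sum_{i=0}^{n-1} 2 = 2n.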
8. Simple Complexity Analysis: Complex Loops
- Represent the cost of the for loop in summation form.
- The main idea is to make sure that we find an iterator that increases/decreases its value by 1.
- For example, consider finding the number of times statements 1, 2 and 3 get executed below (the corresponding summations are worked out after the code):
for (int i = 1; i <= n; i++)
    statement1;

for (int i = 1; i <= n; i++)
    for (int j = 1; j <= n; j++)
        statement2;

for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        statement3;
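Writing each count in summation form (the loop variables themselves already step by 1):

\text{statement1: } \sum_{i=1}^{n} 1 = n, \qquad
\text{statement2: } \sum_{i=1}^{n} \sum_{j=1}^{n} 1 = n^2, \qquad
\text{statement3: } \sum_{i=1}^{n} \sum_{j=1}^{i} 1 = \sum_{i=1}^{n} i = \frac{n(n+1)}{2}.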
9. Useful Summation Formulas
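The standard identities usually collected under this heading are:

\sum_{i=1}^{n} 1 = n, \qquad
\sum_{i=k}^{n} 1 = n - k + 1, \qquad
\sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad
\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6},
\sum_{i=0}^{n} a^i = \frac{a^{n+1} - 1}{a - 1} \ (a \neq 1), \qquad
\sum_{i=0}^{n} 2^i = 2^{n+1} - 1.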
10. Simple Complexity Analysis: Complex Loops
- Represent the cost of the for loop in summation form.
- The problem in the example below is that the value of i does not increase by 1:
  - i = k, k + m, k + 2m, ..., k + rm
- Here, we can assume without loss of generality that k + rm = n, i.e. r = (n - k)/m.
- i.e., an iterator s ranging over 0, 1, ..., r can be used (see the derivation after the code).
for (int i = k; i <= n; i = i + m) statement4;
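With the substitution s = (i - k)/m (so that s steps by 1), the count of statement4 becomes

\text{count of statement4} = \sum_{s=0}^{r} 1 = r + 1 = \frac{n-k}{m} + 1.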
11. Simple Complexity Analysis: Loops (with <=)
- In the following for-loop:
  - The number of iterations is ⌊(n - k) / m⌋ + 1.
  - The initialization statement, i = k, is executed one time.
  - The condition, i <= n, is executed ⌊(n - k) / m⌋ + 2 times (one more than the number of iterations).
  - The update statement, i = i + m, is executed ⌊(n - k) / m⌋ + 1 times.
  - Each of statement1 and statement2 is executed ⌊(n - k) / m⌋ + 1 times.
for (int i = k; i <= n; i = i + m) {
    statement1;
    statement2;
}
12. Simple Complexity Analysis: Loops (with <)
- In the following for-loop:
  - The number of iterations is ⌈(n - k) / m⌉.
  - The initialization statement, i = k, is executed one time.
  - The condition, i < n, is executed ⌈(n - k) / m⌉ + 1 times.
  - The update statement, i = i + m, is executed ⌈(n - k) / m⌉ times.
  - Each of statement1 and statement2 is executed ⌈(n - k) / m⌉ times.
for (int i = k; i < n; i = i + m) {
    statement1;
    statement2;
}
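A small check of the two closed forms (an illustrative sketch, not part of the slides; the values n = 23, k = 3, m = 4 are arbitrary):

public class IterationCount {
    public static void main(String[] args) {
        int n = 23, k = 3, m = 4;

        int countLE = 0;                               // loop written with <=
        for (int i = k; i <= n; i = i + m) countLE++;
        // floor((n - k) / m) + 1  (integer division floors for non-negative values)
        System.out.println(countLE + " == " + ((n - k) / m + 1));

        int countLT = 0;                               // loop written with <
        for (int i = k; i < n; i = i + m) countLT++;
        // ceil((n - k) / m)
        System.out.println(countLT + " == " + (int) Math.ceil((n - k) / (double) m));
    }
}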
13. Simple Complexity Analysis: Complex Loops
- Suppose n is a power of 2. Determine the number of basic operations performed by the method myMethod().
- Solution:
  - The variables i and n in myMethod are different from the ones in the helper method.
  - In fact, helper is called with myMethod's variable i as its argument n.
  - Hence, we need to rename variable i in helper because it is independent of i in myMethod (let us call it k).
  - We will count the number of times statement5 gets executed (the sum is evaluated after the code).
  - (in myMethod) i = 1, 2, 2^2, 2^3, ..., 2^r = n  (r = log2 n)
  - Hence, we can use j, where i = 2^j, j = 0, 1, 2, 3, ..., r = log2 n.
static int myMethod(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i = i * 2)
        sum = sum + i + helper(i);
    return sum;
}
static int helper(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;      // statement5
    return sum;
}
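Each call helper(i) executes statement5 exactly i times, and i runs over the powers of 2 up to n, so the total count is a geometric series:

\sum_{j=0}^{\log_2 n} 2^j = 2^{\log_2 n + 1} - 1 = 2n - 1.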
14. Useful Logarithmic Formulas
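The standard identities usually collected under this heading are:

\log_b(xy) = \log_b x + \log_b y, \qquad
\log_b\left(\frac{x}{y}\right) = \log_b x - \log_b y, \qquad
\log_b(x^k) = k \log_b x,
\log_b x = \frac{\log_a x}{\log_a b}, \qquad
b^{\log_b x} = x, \qquad
a^{\log_b n} = n^{\log_b a}.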
15. Best, Average, and Worst Case Complexities
- What is the best case complexity analysis?
  - The smallest number of basic operations carried out by the algorithm for any input of a given size n.
- What is the worst case complexity analysis?
  - The largest number of basic operations carried out by the algorithm for any input of a given size n.
- What is the average case complexity analysis?
  - The number of basic operations carried out by the algorithm on average, over all inputs of a given size n.
- We are usually interested in the worst case complexity:
  - Easier to compute
  - Represents an upper bound on the actual running time for all inputs
  - Crucial to real-time systems (e.g. air-traffic control)
16. Best, Average, and Worst Case Complexities: Example
- For the linear search algorithm, searching for a key in an array of n elements, determine the situation and the number of comparisons in each of the following cases (a sketch of the algorithm follows the list):
  - Best Case
  - Worst Case
  - Average Case
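A minimal linear-search sketch for reference; the comments give the standard answers, assuming in the average case that the key is present and equally likely to be at any position:

static int linearSearch(int[] a, int key) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == key)        // element comparison
            return i;
    }
    return -1;                  // key not found
}

// Best case:    key is at a[0]                        -> 1 comparison
// Worst case:   key is at a[n-1] or not in the array  -> n comparisons
// Average case: key equally likely at each position   -> (n + 1) / 2 comparisons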
17. Asymptotic Growth
- Since counting the exact number of operations is cumbersome, and sometimes impossible, we can always focus our attention on asymptotic analysis, where constants and lower-order terms are ignored.
- E.g., n³, 1000n³, and 10n³ + 10000n² + 5n - 1 are all treated the same.
- The reason we can do this is that we are always interested in comparing different algorithms for arbitrarily large input sizes.
18. Asymptotic Growth (1)
19. Asymptotic Growth (2)
20. Running Times for Different Sizes of Inputs of Different Functions
21. Asymptotic Complexity
- Finding the exact complexity, f(n) = number of basic operations, of an algorithm is difficult.
- We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n) -- the function g(n) is sufficiently close to f(n) for large values of the input size n.
- This "approximate" measure of efficiency is called asymptotic complexity.
- Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input.
- This gives us a measure that will work for different operating systems, compilers and CPUs.
22. Big-O (asymptotic) Notation
- The most commonly used notation for specifying asymptotic complexity is the big-O notation.
- The Big-O notation, O(g(n)), is used to give an upper bound on a positive runtime function f(n), where n is the input size.
- Definition of Big-O:
  - Consider a function f(n) that is non-negative ∀ n ≥ 0. We say that f(n) is Big-O of g(n), i.e., f(n) = O(g(n)), if ∃ n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n), ∀ n ≥ n0.
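As a worked instance of the definition (anticipating the example on the next slide): for f(n) = 3n + 4,

3n + 4 \le 3n + n = 4n \quad \text{for all } n \ge 4,

so f(n) = O(n) with c = 4 and n0 = 4.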
23. Big-O (asymptotic) Notation
- Implication of the definition:
  - For all sufficiently large n, c·g(n) is an upper bound of f(n).
- Note: By the definition of Big-O,
  - f(n) = 3n + 4 is O(n)
  - it is also O(n²),
  - it is also O(n³),
  - . . .
  - it is also O(nⁿ)
- However, when Big-O notation is used, the function g in the relationship f(n) is O(g(n)) is CHOSEN TO BE AS SMALL AS POSSIBLE.
  - We call such a function g a tight asymptotic bound of f(n).
24. Big-O (asymptotic) Notation
- Some Big-O complexity classes in order of magnitude from smallest to highest:
  - O(1), O(log n), O(n), O(n log n), O(n²), O(n³), ..., O(2ⁿ), O(n!), O(nⁿ)
25. Examples of Algorithms and their Big-O Complexity
26. Warnings about O-Notation
- Big-O notation cannot compare algorithms in the same complexity class.
- Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
- Consider two algorithms for the same task:
  - Linear: f(n) = 1000n
  - Quadratic: f'(n) = n²/1000
  - The quadratic one is faster for n < 1,000,000 (at n = 1,000,000 both cost 10⁹ operations).
27. Rules for Using Big-O
- For large values of input n, the constants and terms with lower degree of n are ignored.
- 1. Multiplicative Constants Rule: Ignoring constant factors.
  - O(c · f(n)) = O(f(n)), where c is a constant.
  - Example:
    - O(20 n³) = O(n³)
- 2. Addition Rule: Ignoring smaller terms.
  - If O(f(n)) < O(h(n)) then O(f(n) + h(n)) = O(h(n)).
  - Example:
    - O(n² log n + n³) = O(n³)
    - O(2000 n³ + 2n! + n^800 + 10n + 27n log n + 5) = O(n!)
- 3. Multiplication Rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
  - Example:
    - O((n³ + 2n² + 3n log n + 7)(8n² + 5n + 2)) = O(n⁵)
28. How to Determine Complexity of Code Structures
- Loops (for, while, and do-while):
  - Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.
- Examples:

for (int i = 0; i < n; i++)
    sum = sum - i;
O(n)

for (int i = 0; i < n * n; i++)
    sum = sum + i;
O(n²)

i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}
O(log n)
29. How to Determine Complexity of Code Structures

sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;
O(n²)

i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}
O(n log n)
30. How to Determine Complexity of Code Structures
- Sequence of statements: Use the Addition rule.
  - O(s1 + s2 + s3 + ... + sk) = O(s1) + O(s2) + O(s3) + ... + O(sk) = O(max(s1, s2, s3, ..., sk))
- Example:
  - Complexity is O(n²) + O(n) + O(1) = O(n²)
for (int j = 0; j < n * n; j++)
    sum = sum + j;
for (int k = 0; k < n; k++)
    sum = sum - k;
System.out.print("sum is now " + sum);
31. How to Determine Complexity of Code Structures
- Switch: Take the complexity of the most expensive case.
char key;
int[] X = new int[n];
int[][] Y = new int[n][n];
........
switch (key) {
    case 'a':
        for (int i = 0; i < X.length; i++)
            sum += X[i];
        break;
    case 'b':
        for (int i = 0; i < Y.length; i++)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];
        break;
}   // End of switch block
Case 'a' is O(n); case 'b' is O(n²).
Overall Complexity: O(n²)
32. How to Determine Complexity of Code Structures
- If Statement: Take the complexity of the most expensive case.
char key;
int[][] A = new int[n][n];
int[][] B = new int[n][n];
int[][] C = new int[n][n];
........
if (key == '+') {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];
}   // End of if block
else if (key == 'x')
    C = matrixMult(A, B);
else
    System.out.println("Error! Enter '+' or 'x'!");
The matrix-addition branch is O(n²), the matrixMult branch is O(n³), and the error branch is O(1).
Overall complexity: O(n³)
33. How to Determine Complexity of Code Structures
- Sometimes if-else statements must be checked carefully:
  - O(if-else) = O(Condition) + Max(O(if), O(else))
int[] integers = new int[n];
........
if (hasPrimes(integers) == true)
    integers[0] = 20;
else
    integers[0] = -20;

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        ..........
        ..........
    }
}   // End of hasPrimes()
The if branch is O(1) and the else branch is O(1), while the condition (the call to hasPrimes) is O(n).
O(if-else) = O(Condition) = O(n)