
Analytical (Asymptotic) Performance Analysis

Chapter 18

Learning Outcomes

After completing this lecture and the related lab, students will be able to:

  1. Define and distinguish between empirical and analytical methods of evaluating algorithm performance, and articulate when each method is more appropriate.

  2. Identify key operations to count in an algorithm (e.g., comparisons, swaps, recursive calls, etc.), and use these to characterize the time complexity of simple code segments.

  3. Explain best-case, average-case, and worst-case behavior for algorithms, including what those mean and why we often focus on worst-case or average-case analysis.

  4. Apply rules of estimation to compute cost functions for loops (including nested loops), conditionals, and simple recursive functions.

  5. Understand and use Big-O notation: identify the dominating term in a cost expression, compare growth rates, and classify functions by their asymptotic behavior (constant, logarithmic, linear, quadratic, etc.).

Lecture Video

Intro

Analysis of Algorithms

Dilemma: You have two (or more) methods to solve a problem.
How do you choose the BEST method?

  • One approach: Implement each algorithm in C++; test how long each takes to run.

  • Problems:

    • Different implementations may cause an algorithm to run faster/slower.

    • Some algorithms run faster on some computers.

    • Algorithms may perform differently depending on data (e.g., sorting often depends on what is being sorted).
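
For instance, the timing experiment described above can be sketched with std::chrono. This is a minimal illustration, not from the lecture: the two summing functions stand in for “two methods to solve the same problem,” and timeInMicroseconds is a hypothetical helper name.

```cpp
#include <chrono>
#include <numeric>
#include <vector>

// Hypothetical helper: time a single call to f(data), in microseconds.
template <typename Func>
long long timeInMicroseconds(Func f, const std::vector<int>& data) {
    auto start = std::chrono::steady_clock::now();
    f(data);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

// Two candidate implementations of the same task: summing a vector.
long long sumWithLoop(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

long long sumWithAccumulate(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}
```

Note that each run measures one implementation, on one machine, with one input, which is exactly the limitation the bullets above describe.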

A Fundamental Computer Science Challenge

  • Algorithm analysis is one of the most fundamental tools computer scientists use to evaluate the cost of an algorithm.

  • The cost can be described in terms of time, space (memory usage), electricity, etc.

  • Ideally, we will develop a simple scheme to rank algorithms from best to worst.

Approaches to Analyzing Performance

Comment on the “general” performance of the algorithm using one of two options:

  1. Empirical (Experimental): Measure performance for several examples.
    But, what does this tell us in general?

  2. Analytical (Asymptotic):

    • Instead, assess performance in an abstract, generalized manner.

    • Idea: analyze performance as the size of the problem grows.

    • Examples:

      • Sorting: How many comparisons for an array of size n?

      • Searching: How many comparisons for an array of size n?

    • Challenge: It may be difficult to discover a reasonable formula.

Analytical Approach: Step 1 of 2

Characterize performance by counting the number of key operations.

  • For a sorting algorithm, the key operations include
    the number of times:

    • two values are compared and

    • two values are swapped.

  • Search:

    • Count the number of times the value being searched for is compared to values in an array.
  • Recursive function:

    • Count the number of recursive calls.
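
One way to make “counting key operations” concrete (an assumed sketch, not from the lecture; linearSearch and SearchResult are illustrative names) is to instrument a search so it reports its comparison count:

```cpp
#include <cstddef>
#include <vector>

// Result of an instrumented linear search.
struct SearchResult {
    int index;        // position of the target, or -1 if absent
    int comparisons;  // number of key operations performed
};

// Linear search that counts its key operation: comparing the target
// against an array element.
SearchResult linearSearch(const std::vector<int>& a, int target) {
    SearchResult r{-1, 0};
    for (std::size_t i = 0; i < a.size(); ++i) {
        ++r.comparisons;  // one target-vs-element comparison
        if (a[i] == target) {
            r.index = static_cast<int>(i);
            return r;
        }
    }
    return r;
}
```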

Analysis with Varying Results

  • Example: some sorting algorithms may require as few as n − 1 comparisons and as many as n(n − 1)/2.

  • Types of analyses based on a problem of size n:

    • Best case: What is the shortest an algorithm can run?

    • Average case: On average, how fast does an algorithm run?

    • Worst case: What is the longest an algorithm can run?

  • Computer scientists usually use worst-case or average-case analysis.

    • What are some example domains where you may care more about the worst case than the average case and vice versa?
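
These cases can be seen directly by counting comparisons in insertion sort on different inputs (an assumed illustration; insertionSortComparisons is an illustrative name):

```cpp
#include <cstddef>
#include <vector>

// Insertion sort instrumented to count comparisons, showing that the
// same algorithm costs different amounts on different inputs.
int insertionSortComparisons(std::vector<int> a) {
    int comparisons = 0;
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0) {
            ++comparisons;               // key operation: compare two values
            if (a[j - 1] <= key) break;  // already in place
            a[j] = a[j - 1];             // shift the larger value right
            --j;
        }
        a[j] = key;
    }
    return comparisons;
}
```

A sorted input is the best case (n − 1 comparisons); a reversed input is the worst case (n(n − 1)/2 comparisons).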

Notice: We Are Estimating

  • Usually, we approximate or estimate the performance of an algorithm, assuming it is operating on a very large data set.

  • Estimation is an important skill to learn and use.

Simpler Question: How many hotdogs tall is the Empire State Building?

  • The ESB is about 1,454 feet tall.

  • Assuming that a hotdog is 6 inches (0.5 feet) from end to end, you would need 1,454 / 0.5 ≈ 2,908 hotdogs.
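
The arithmetic behind the estimate can be written out in a few lines (hotdogsTall is a hypothetical helper; the building height is approximate):

```cpp
// Back-of-the-envelope estimate: how many hotdogs, stacked end to end,
// match the height of a building?
int hotdogsTall(double buildingFeet, double hotdogInches) {
    double hotdogFeet = hotdogInches / 12.0;  // convert inches to feet
    return static_cast<int>(buildingFeet / hotdogFeet);
}
```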

The Empire State Building

Complexity Analysis

Analysis

  • An objective way to evaluate the cost of an algorithm or code section.

  • The cost is usually computed in terms of time or space.

  • The goal is to have a meaningful way to compare algorithms based on a common measure.

Algorithm Analysis

  • Algorithm analysis requires a set of rules to determine how operations are to be counted.

  • There is no generally accepted set of rules for algorithm analysis.

  • In some cases, an exact count of operations is desired; in other cases, a general approximation is sufficient.

  • The following rules are typical of those intended to exactly count operations.

Rules for Estimation

  1. We assume an arbitrary time unit.

  2. Execution of one of the following operations takes time 1:

    • assignment operation

    • single I/O operations

    • single Boolean operations, numeric comparisons

    • single arithmetic operations

    • function return

    • array index operations, pointer dereferences

Example 1

count = count + 1;  // Cost: c₁ = 2 (one addition + one assignment)
sum = sum + count;  // Cost: c₂ = 2 (one addition + one assignment)

Total Cost = c₁ + c₂ = 4.
Because we assume each addition costs 1 and each assignment costs 1, the total cost is 4 units.

Conditional Cost

if (n < 0) {     // Cost: c₁ = 1
    absVal = -n; // Cost: c₂ = 2
} else {
    absVal = n;  // Cost: c₃ = 1
}
  • c₁ is the cost of the boolean evaluation (1 instruction).

  • c₂ is the cost of negating a number (1) + the cost of the assignment (1).

  • c₃ is the cost of an assignment (1).

  • Worst case: c₁ + c₂ = 3. Best case: c₁ + c₃ = 2. Average: 2.5.

Example 3

i = 1;             // Cost: c₁ = 1
sum = 0;           // Cost: c₂ = 1
while (i <= n) {   // Cost: c₃ = 1
    i = i + 1;     // Cost: c₄ = 1 + 1 = 2
    sum = sum + i; // Cost: c₅ = 2
}
  • Remember assignment (=) and addition (+) each cost 1.

  • How many times does the loop execute?
    n times (and the loop check i <= n runs n + 1 times, since the final check fails).

  • Total Cost = c₁ + c₂ + (n + 1)·c₃ + n·(c₄ + c₅) = 1 + 1 + (n + 1) + 4n = 5n + 3.
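
The cost function can be checked by mirroring the loop and charging each operation its unit cost as it executes (a sketch under the estimation rules; example3Cost is an illustrative name):

```cpp
// Tally the unit costs of the loop: setup (1 + 1), each loop check (1,
// performed n + 1 times because the final check fails), and the two
// body statements (2 + 2 per iteration).
int example3Cost(int n) {
    int cost = 0;
    int i = 1;   cost += 1;        // i = 1
    int sum = 0; cost += 1;        // sum = 0
    while (cost += 1, i <= n) {    // charge every loop check
        i = i + 1;     cost += 2;  // addition + assignment
        sum = sum + i; cost += 2;  // addition + assignment
    }
    return cost;                   // closed form: 5n + 3
}
```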


More Rules for Estimation

  1. Selection statement (if, switch) time is the time for the condition evaluation, plus the maximum of the running times for the individual clauses in the selection.

  2. Loop execution time is the number of iterations multiplied by its body’s execution time, plus the time for the loop check and update operations, plus the loop setup time.
    Always assume that the loop iterates the maximum number of times.

  3. The runtime of a function call is 1 for setup plus the time for any parameter calculations plus the time required for the execution of the function body.

Nested Example

i = 1;                 // c₁ = 1
sum = 0;               // c₂ = 1
while (i <= n) {       // c₃ = 1
    j = 1;             // c₄ = 1
    while (j <= n) {   // c₅ = 1
        sum += i;      // c₆ = 2
        ++j;           // c₇ = 2
    }
    ++i;               // c₈ = 2
}
  • The outer loop iterates n times.

  • For each iteration of the outer loop, the inner loop iterates n times.

  • The total cost is:
    c₁ + c₂ + (n + 1)·c₃ + n·(c₄ + (n + 1)·c₅ + n·(c₆ + c₇) + c₈) = 5n² + 5n + 3.

Important Note: 5n² is the highest (largest) term!
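
The same bookkeeping extends to the nested loops (a sketch; nestedCost is an illustrative name): charge both loop checks and both bodies, and the tally matches a quadratic closed form.

```cpp
// Tally the unit costs of the nested loops from the example above.
int nestedCost(int n) {
    int cost = 0;
    int i = 1;   cost += 1;          // c1
    int sum = 0; cost += 1;          // c2
    while (cost += 1, i <= n) {      // c3, checked n + 1 times
        int j = 1; cost += 1;        // c4
        while (cost += 1, j <= n) {  // c5, checked n + 1 times per outer pass
            sum += i; cost += 2;     // c6
            ++j;      cost += 2;     // c7
        }
        ++i; cost += 2;              // c8
    }
    return cost;                     // closed form: 5n^2 + 5n + 3
}
```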

Big O Notation

Comparing Algorithms

  • An algorithm’s time requirement is a function of the problem size.

  • The problem size depends on the application (e.g., the number of list elements for a sorting algorithm).

  • For instance, we say that (if the problem size is n)

    • Algorithm A requires n² time units to solve a problem of size n.

    • Algorithm B requires n time units to solve a problem of size n.

    Comparing Algorithms by Growth Rate
    Comparing Algorithm A (n²) with Algorithm B (n)

Comparing Algorithms

  • An algorithm’s proportional time requirement is known as the growth rate.

  • We can compare the efficiency of algorithms by comparing their growth rates.

Comparing Algorithms

Example: Which is faster?

It depends on n.

n     Cost of algorithm 1    Cost of algorithm 2
1     120                    37
2     511                    71
3     1,374                  159
4     2,895                  397
5     5,260                  1,073
6     8,655                  3,051
7     13,266                 8,923
8     19,279                 26,465
9     26,880                 79,005
10    36,255                 236,527

One term dominates the others.

We only care about the highest-order (dominating) term.

n     Total cost    % from the dominating term
1     37            32.4%
2     71            50.7%
3     159           67.9%
4     397           81.6%
5     1,073         90.6%
9     79,005        99.7%
10    236,527       99.9%
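
Dominance percentages like these can be reproduced for any cost function. As an assumed illustration (not the functions from the table), here is the share contributed by the leading term of a hypothetical cost 5n² + 5n + 3:

```cpp
// Fraction of the total cost 5n^2 + 5n + 3 contributed by the
// dominating term 5n^2. The fraction approaches 1 as n grows.
double dominantShare(int n) {
    double total = 5.0 * n * n + 5.0 * n + 3.0;
    return (5.0 * n * n) / total;
}
```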

As n Grows, Some Terms Dominate

(Table: values of common growth-rate functions at n = 10, 100, 1,000, 10,000, and 100,000, showing how the fastest-growing term comes to dominate the total.)

General Growth Rates
Asymptotic growth rates of common complexity functions, from logarithmic to factorial, illustrating their relative scaling for increasing n.

Big O

  • If Algorithm A requires time proportional to f(n),
    Algorithm A is said to be order f(n), and it is denoted as O(f(n)).

  • The function f(n) is called the algorithm’s growth-rate function.

  • Since the capital O is used in the notation, this notation is called the Big-O notation.

  • If Algorithm A requires time proportional to n², it is O(n²).

  • If Algorithm A requires time proportional to n, it is O(n).

Example 1

  • If an algorithm requires n² − 3n + 10 seconds to solve a problem of size n, it is O(n²).

  • If constants k and n₀ exist such that
    k·n² ≥ n² − 3n + 10 for all n ≥ n₀,
    then the algorithm is order n². (In fact, k is 3 and n₀ is 2.)

  • Thus, the algorithm requires no more than k·n² time units for n ≥ n₀.

  • So it is O(n²).
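
The witness constants can be verified mechanically (a sketch; witnessHolds is an illustrative name, and f(n) = n² − 3n + 10 with k = 3, n₀ = 2 are the textbook values assumed here):

```cpp
// Check that k * n^2 >= n^2 - 3n + 10 for every n from n0 up to limit.
bool witnessHolds(int k, int n0, int limit) {
    for (int n = n0; n <= limit; ++n) {
        long long fn = 1LL * n * n - 3LL * n + 10;
        if (1LL * k * n * n < fn) return false;
    }
    return true;
}
```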

More Examples

This game of “spot the highest term” is actually not difficult!

  • 3n² + 17n + 5 = O(n²)

  • 10n + 42 = O(n)

It can get somewhat tricky:

  • n + n·log n = O(n·log n)

  • 2ⁿ⁺¹ + n² = O(2ⁿ)

Growth Rate Functions Generalized

Big O       Time requirement as problem size increases
O(1)        Constant: is independent of the problem’s size.
O(log n)    Logarithmic: increases slowly.
O(n)        Linear: increases directly.
O(n·log n)  Increases more rapidly than a linear algorithm.
O(n²)       Quadratic: increases rapidly.
O(n³)       Cubic: increases more rapidly than a quadratic algorithm.
O(2ⁿ)       Exponential: increases so rapidly that it is impractical.
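
To compare these rates side by side, one can evaluate each function at the same n (a minimal sketch; GrowthRow and growthAt are illustrative names):

```cpp
#include <cmath>

// Values of the common growth-rate functions at a given n.
struct GrowthRow {
    double logN, linear, nLogN, quadratic, cubic, exponential;
};

GrowthRow growthAt(double n) {
    return { std::log2(n), n, n * std::log2(n),
             n * n, n * n * n, std::pow(2.0, n) };
}
```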

Practice

Example 1:

i = 1;           // Cost: c₁
sum = 0;         // Cost: c₂
while (i <= n) { // Cost: c₃
    ++i;         // Cost: c₄
    sum += i;    // Cost: c₅
}

Example 2:

i = 1;               // Cost: c₁
sum = 0;             // Cost: c₂
while (i <= n) {     // Cost: c₃
    j = 1;           // Cost: c₄
    while (j <= n) { // Cost: c₅
        sum += i;    // Cost: c₆
        ++j;         // Cost: c₇
    }
    ++i;             // Cost: c₈
}

Example 3:

for (i = 1; i <= n; i++) {         // Cost: c₁
    for (j = 1; j <= i; j++) {     // Cost: c₂
        for (k = 1; k <= j; k++) { // Cost: c₃
            ++x;                   // Cost: c₄ = 2
        }
    }
}

Notice: We do NOT need to know the exact number of iterations to find the Big-O.

Example 4: Recursion can be hard.

The Tower of Hanoi is a puzzle consisting of three rods and some disks of different diameters, which can slide onto any rod. The goal is to move all of the disks to a different rod. However, a larger disk cannot be placed on top of a smaller one.

Tower of Hanoi
void hanoi(int n, char source, char dest, char spare) { // Function-call cost
  if (n > 0) {                                     // Cost: c₁
    hanoi(n-1, source, spare, dest);               // Cost: c₂
    cout << "Move top disk from pole " << source
      << " to pole " << dest << endl;              // Cost: c₃
    hanoi(n-1, spare, dest, source);               // Cost: c₄
  }
}
  • By now, I hope you see that constant and coefficient costs are virtually superfluous when working with Big O.

  • To find the growth-rate function for a recursive algorithm, we have to solve its recurrence relation.

  • You will learn how to do this in Discrete Math.

What is the cost of hanoi(n, 'A', 'B', 'C')?

  • When n = 0, the cost is constant: T(0) = 1.

  • When n > 0,
    T(n) = T(n − 1) + 1 + T(n − 1)
    = 2T(n − 1) + 1, the recurrence equation for the growth-rate function of the hanoi-towers algorithm.

  • Now, we have to solve this recurrence equation to find the growth-rate function of the hanoi-towers algorithm.

  • This turns out to be O(2ⁿ) because for every disk we make 2 recursive calls.
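
The recurrence can also be checked by counting moves directly (a sketch; hanoiMoves is an illustrative name). Unrolling T(n) = 2T(n − 1) + 1 gives 2ⁿ − 1 moves:

```cpp
// Count the moves hanoi(n, ...) would print: one move per call with
// n > 0, plus the moves made by the two recursive calls.
long long hanoiMoves(int n) {
    if (n <= 0) return 0;
    return hanoiMoves(n - 1) + 1 + hanoiMoves(n - 1);
}
```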

Example 5:

int bigOExample5(const int N)
{
  const int M = 4 * N;
  int res = 0;
  
  int *pArray = new int[M] {};
  for(int i = 0; i < M; i++)
  {
    pArray[i] = i;
  }
  for(int i = 0; i < M; i++)
  {
    res += pArray[i];
  }
  delete[] pArray;
  return res;
}

Example 6:

int bigOExample6(int n)
{
  int total = 0;

  for(int i = 0; i < 4 * n; i++)
  {
    total += i;
  
    for(int j = 0; j < 4; j++)
    {
      total += j;
    }
  }

  return total;
}

Example 7:

int bigOExample7(int n) {
  int total = 0;
  for(int i = 0; i < n; i++) {
    for(int j = 1; j <= n; j *= 2) {
      total += j;
    }
  }

  return total;
}