## 6.1. Very Simple Programs To Analyze

Handout simple programs to analyze. Do a couple of examples as time permits and ask the students to try the rest. The next lecture should begin with a discussion of the few examples people need help with.

Let me conclude this brief introduction to complexity analysis by illustrating how to analyze the complexity of some simple programs. Although these programs are, in themselves, of no interest, they represent patterns, such as nested loops, that occur very often. Our efficiency measure, in these examples, is the number of `printf` statements executed as a function of N.

### Example 1

```
scanf("%d", &n);
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        printf("(%d,%d)\n", i, j);
```
There is one `printf` statement. Our complexity function must tell us how many times that `printf` statement will be executed, as a function of N. Probably you can all immediately see the answer, but let me go through the analysis.

The complexity of the whole program = complexity of the i-loop, as that is the only top-level statement that includes a `printf`.

The complexity of a loop = (# times the loop repeats) * (complexity of each iteration)

Assuming that the complexity of each iteration is the same. This assumption holds for the i-loop, so its complexity is:

O(N) * O(the inner loop)

The above assumption also holds for the inner loop, so its complexity is:

O(N) * O(`printf`)

The `printf` statement itself has a complexity O(1). Putting all this together,

complexity(Example 1) = O(N)*O(N)*O(1) = O(N*N).

### Example 2

Assume `n >= 1`.
```
scanf("%d", &n);
while (n > 1) {
    n = n / 2;
    printf("%d\n", n);
}
```
There is not much new to say about this example; again the complexity of the program = the complexity of the loop, which is (as above):

(# times the loop is executed) * (complexity of each iteration)

The complexity of every iteration is the same, O(1). How many times is the loop executed?

In other words, how many times can you divide a number in half before it will reach 1?

Look at it this way: for every number N there is an integer K such that:

2**K <= N < 2**(K+1)

It is clear that when we divide 2**K by 2 K-1 times it will equal 2, and when we divide it once more, it will become 1 and the loop will stop iterating. Therefore this loop repeats K times when N = 2**K.

Because N >= 2**K, it will repeat at least K times for N. But it cannot repeat K+1 times: 2**(K+1) is the smallest number for which this loop repeats K+1 times, and N is smaller than that. Therefore, if N lies between the Kth and (K+1)st powers of 2, this loop repeats exactly K times.

What is the relation between K and N?

Answer: K is the integer part of the log-base-2 of N.

So the complexity of this example is O(logN)*O(1) = O(logN).

How would the complexity change if we were dividing by 10 instead of 2?

### Example 3

```
scanf("%d", &n);
for (i = 1,    m = n + 66; i <= m; i++) printf("%d\n", i);
for (j = n/21, m = n/5;    j <= m; j++) printf("%d\n", j);
```
Here there are two `printf`s occurring in two independent statements, so the program's overall complexity is the sum of the complexities of the statements containing a `printf`:

efficiency(example 3) = O(i-loop) + O(j-loop)

What is the efficiency of the i-loop? ... O(N).

What is the efficiency of the j-loop? ... O(N).

What is O(N) + O(N)? ... O(N)

### Example 4

```
scanf("%d", &n);
for (i = 1; i <= n; i++)
    for (j = i; j <= n; j++)
        printf("(%d,%d)\n", i, j);
```
This is similar to Example 1, except that the complexity of each iteration is different, so we can't use our simple formula

(#iterations) * (complexity per iteration)

What we can do instead is actually sum up the complexity of each iteration:

```
efficiency(i-loop) = SUM(i,1,N) of (efficiency of ith iteration)
                   = SUM(i,1,N) of (N-i+1)
                   = N*(N+1)/2
                   = O(N*N)
```

### Example 5

```
void p(int n)
{
    if (n < 6) printf("Done!\n");
    else {
        printf("n = %d\n", n);
        p(n / 2);
    }
}
```
Many of our algorithms are recursive, like this one. In general, recursive algorithms can be very difficult to analyze and reduce to a simple closed-form formula, such as NlogN. However, they often succumb to the same analysis methods used for loops: that is the case with this example.

Ignoring the statement that makes the recursive call, each execution of `p` executes one `printf` statement (either the first or the second, depending on the value of N). So the complexity of `p` = (# executions of `p`) * O(1).

If I call `p` with a value of N, how many times will `p` actually be executed? It is just like the loop in Example 2: each call divides N in half, so the total number of calls is just the number of times you can cut N in half. We know the answer: log-base-2 of N. The stopping condition of 6, rather than 1, introduces a small fudge factor which we (thankfully) do not have to worry about, since we are only doing asymptotic complexity analysis.