# Complexity of exponentiation algorithm

Given `double x` and positive `int y` I need to find `x^y` assuming the input will not cause an overflow.

I came up with an algorithm that uses the following facts about `x^y`:

1. `x^y = (x^floor(y/2))^2` if `y` is even.
2. `x^y = x*(x^floor(y/2))^2` if `y` is odd.
3. `x^y = 1` if `y` is 0.

The implementation:

```java
public static double power(double x, int y) {
    if (y == 0)
        return 1;

    double z = power(x, y >>> 1);
    z *= z;
    if ((y & 1) == 1)
        z *= x;

    return z;
}
```
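For a quick sanity check, the method can be exercised against `Math.pow` (a minimal sketch; the wrapper class name `PowerDemo` is mine):

```java
public class PowerDemo {
    // Binary exponentiation, as defined in the question.
    public static double power(double x, int y) {
        if (y == 0)
            return 1;
        double z = power(x, y >>> 1);
        z *= z;
        if ((y & 1) == 1)
            z *= x;
        return z;
    }

    public static void main(String[] args) {
        System.out.println(power(2.0, 10));  // 1024.0
        System.out.println(power(10.0, 0));  // 1.0
        // Should agree with the library routine for small exact cases.
        System.out.println(power(1.5, 7) == Math.pow(1.5, 7));
    }
}
```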

I'm somewhat struggling with its complexity analysis. There are `log_2(y)` recursion levels, each making a single recursive call. At each level the algorithm squares `z`, and schoolbook multiplication of two `n`-bit numbers costs `O(n^2)`. Since we assume no overflow can happen, `n` is at most half the width of the `double` type. Can I count this multiplication work as constant, which gives the algorithm a complexity of `O(log_2(y))`?

---

Yes, this algorithm is in fact called binary exponentiation (also known as exponentiation by squaring), and its complexity is `O(log_2(y))`, as you suspected. Multiplication of two numbers is usually considered `O(1)` unless the numbers can be arbitrarily large (as with a BigInt type). This is because a fixed-width type such as `double` always has a constant number of bits.

Also, although not entirely related to the question, a note on the multiplication complexity you mentioned for `n`-bit numbers: the schoolbook method is indeed `O(n^2)`, and fast multiplication algorithms (such as Karatsuba's) can do better than that, but that's beyond the scope of this question.
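To make the `O(log_2(y))` bound concrete, one can instrument the method with a multiplication counter (an illustrative sketch; the `mults` field and class name are mine). Each recursion level performs at most two `double` multiplications, so the count stays near `2 * log_2(y)` rather than growing with `y`:

```java
public class MultCount {
    static long mults = 0; // number of double multiplications performed

    static double power(double x, int y) {
        if (y == 0)
            return 1;
        double z = power(x, y >>> 1);
        z *= z;
        mults++; // one squaring per level
        if ((y & 1) == 1) {
            z *= x;
            mults++; // one extra multiply per set bit of y
        }
        return z;
    }

    public static void main(String[] args) {
        power(2.0, 1_000_000);
        // Roughly 2 * log2(1_000_000) ~ 40 multiplications,
        // far fewer than the ~1,000,000 of the naive loop.
        System.out.println(mults);
    }
}
```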