# Recurrence Relations with Multiparameter Functions

I have been trying to wrap my head around the concept of a recurrence relation, and I see how to divide, conquer, and combine. What I am failing to understand is how to derive a proper recurrence relation from a multi-argument function that deals with an array of values, a lowest index, and a highest index.

More context: my base case is when the lowest index is equal to the highest index. When that condition is met, I return the element at the lowest index (which is also the highest); it is the only element.

My recursive case is when p and q are not equal. Here is the code:

```java
int maximum(int[] A, int p, int q) {
    if (p == q) {
        return A[p];
    }
    if (q - p == 1) {
        // Two elements cannot be split three ways; without this guard
        // the third sub-range becomes empty and the recursion never ends.
        return Math.max(A[p], A[q]);
    }
    // Integer division already truncates, so Math.floor is unnecessary
    // (and would not compile here, since it returns a double).
    int k = p + (q - p + 2) / 3;
    int l = k + (q - p + 2) / 3;
    int max1 = maximum(A, p, k - 1);
    int max2 = maximum(A, k, l - 1);
    int max3 = maximum(A, l, q);
    if (max1 >= max2 && max1 >= max3) {
        return max1;
    } else if (max2 >= max1 && max2 >= max3) {
        return max2;
    } else {
        return max3;
    }
}
```

I am not sure how I would go about this. In every example I have seen, n is the input size, and the input size is the only parameter the recurrence is concerned with.

Would someone be able to explain the best approach for handling just about any algorithm? This particular type is tripping me up because the explanations of recurrence relations I have seen use simpler recursive functions.

## Answer

In situations like these, the "input size" of the function is usually taken to be not the literal size of the arguments (which may stay more or less the same at each recursive call) but the effective size of the data being considered. In your algorithm, much like in merge sort, the effective size shrinks at each recursive call: your high and low indices bound the part of the array you are looking at, so the quantity n = q − p + 1 is what shrinks. Rather than treat cases such as these as multiple-variable recurrences, I would expect a single-variable recurrence such as T(n) = 3T(n/3) + O(1), which by the Master theorem solves to T(n) = Θ(n).
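To see this concretely, you can treat n = q − p + 1 as the input size and count recursive calls. Below is a sketch of my own (the class name `RangeMax` and the instrumentation are illustrative, not your exact code): it splits a range into thirds, guards the two-element case, and tallies invocations. Under T(n) = 3T(n/3) + O(1), the call count should grow roughly linearly in n rather than like n log n.

```java
// Sketch: count how many recursive calls a three-way split makes
// on a range of size n = q - p + 1. Names here are illustrative.
class RangeMax {
    static int calls = 0;

    static int maximum(int[] A, int p, int q) {
        calls++;
        if (p == q) return A[p];
        if (q - p == 1) return Math.max(A[p], A[q]); // can't split 2 elements three ways
        int third = (q - p + 2) / 3;                 // integer division truncates
        int k = p + third;
        int l = k + third;
        int max1 = maximum(A, p, k - 1);
        int max2 = maximum(A, k, l - 1);
        int max3 = maximum(A, l, q);
        return Math.max(max1, Math.max(max2, max3));
    }

    public static void main(String[] args) {
        int[] a = new int[243];                      // 3^5 elements, so splits are exact thirds
        for (int i = 0; i < a.length; i++) a[i] = i % 100;
        System.out.println(maximum(a, 0, a.length - 1)); // prints 99
        System.out.println(calls);                   // call count; grows linearly with n
    }
}
```

For n = 243 the recursion tree is an exact ternary tree with 243 leaves and 121 internal nodes, i.e. a few hundred calls total, consistent with Θ(n).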

Now, there are functions where it makes sense to have multiple independent variables: for instance, a function that takes in two arrays and narrows them down at different rates. As a concrete case, graph algorithms often (though not always) treat the numbers of vertices and edges as independent variables in complexity bounds. In those cases, the variables vary truly independently and cannot be meaningfully combined into a single measure of effective size. That is not the case in your function.