Smart recursions: tail-call and Miller recursion

If a self-calling function can be written as a tail-call recursion, it can be optimized in a way other forms of recursion cannot. A tail call is the last code executed in a function, or the last code in one of its conditional branches; a recursive call is tail recursive if there is nothing to do after the call returns except return its value. Space is allocated on the stack when a procedure is called and removed on return from the procedure. The block of information stored on the stack to effect a procedure call and return is called the stack frame; in general, the stack frame contains all the information needed to save and restore the state of the procedure. In a tail-call recursion the compiler replaces the caller in place with the callee, so that instead of nesting the stack deeper, the current stack frame is reused. This saves significantly on space and on allocation work, and makes tail-call recursion as efficient as iteration. Here is an example in C of a tail-call recursion for the factorial

long faux(long x, long y);

long f(long x){
    return faux(x, 1);
}

long faux(long x, long y){
    /* tail call: nothing is pending after the recursive call,
       so the compiler reuses the frame -- effectively a loop */
    if (x == 0) return y;
    return faux(x - 1, x * y);
}

A useful strategy for writing tail-call recursive functions is to use an accumulator variable. As an example we compute {n \choose k}. We do this first in Lisp

(defun binomial (n k)
"Binomial coefficient"
  (if (or (< n k) (< k 0)) nil (binomaux n k 1)))


(defun binomaux (n k ac)
; ac is our accumulator
  (if (or (= k 0) (= n k))  ac  (binomaux (- n 1) (- k 1) (* ac (/ n k)))))

This works splendidly in Lisp and REDUCE, because both do the division
exactly (the accumulator is kept as a rational), but a direct C transcription
would truncate (/ n k) to an integer in the recursive call. How can you make
this work in C? Trace through all the function calls for {8 \choose 3} to see why things can go wrong in C but not in Lisp. One fix is to carry separate accumulators for the numerator and the denominator and divide only once, at the end:

int binomaux(int n, int k, int acn, int acd);

int binomial(int n, int k){
    if ((n < k) || (k < 0)) return 0;
    return binomaux(n, k, 1, 1);
}

int binomaux(int n, int k, int acn, int acd){
    /* acn accumulates the numerator, acd the denominator;
       the single division at the end is exact */
    if ((k == 0) || (n == k)) return acn / acd;
    return binomaux(n - 1, k - 1, acn * n, acd * k);
}

Many three-term recursions are numerically unstable with respect to the round-off errors made by the machine. In such cases one uses the Miller recursion to work down the chain to lower n values rather than up to higher ones.
For example, Bessel functions obey

    \[J_{n+1}(x)={2n\over x} J_n(x)-J_{n-1}(x)\]

which can be frightfully unstable when run upward. The Miller algorithm works in the following way: to compute J_0(x), we begin high up and {\bf pick}

    \[J_6(x)=0, \, J_5(x)=1\]

and go downwards:

    \[J_4(x)={10\over x} J_5(x)-J_6(x)={10\over x}\]

    \[J_3(x)={8\over x} J_4(x)-J_5(x)={8\over x}{10\over x}-1\]

    \[J_2(x)={6\over x} J_3(x)-J_4(x)={6\over x}\big({8\over x}{10\over x}-1\big)-{10\over x}\]

    \[J_1(x)={4\over x} J_2(x)-J_3(x)={4\over x}\big({480\over x^3}-{16\over x}\big)-{80\over x^2}+1\]

and finally

    \[J_0(x)={2\over x} J_1(x)-J_2(x)={2\over x}\big({1920\over x^4}-{144\over x^2}+1\big)-{480\over x^3}+{16\over x}\]

This result is not correct, since in reality J_5(x)\ne 1. Now we make use of the growth of the recursion: whatever x was, our value for J_0(x) is very large compared to the value for J_5(x), and so it will clearly dominate in the well-known sum

    \[1=J_0(x)+2 J_2(x)+2 J_4(x)+2 J_6(x)+\cdots\]

Each of our values for the various intervening Bessel functions is therefore off by the same constant multiplicative factor

    \[``1''=J_0(x)+2 J_2(x)+2 J_4(x)+2 J_6(x)+\cdots\]

    \[=\big( {3840\over x^5}-{768\over x^3}+{18\over x}\big)+2 \big({480\over x^3}-{16\over x}\big)+2\big( {10\over x}\big) +\big(0\big)\]

by which we now divide to restore the correct values

    \[J_0(x)\approx{  {3840\over x^5}-{768\over x^3}+{18\over x}\over \big( {3840\over x^5}-{768\over x^3}+{18\over x}\big)+2 \big({480\over x^3}-{16\over x}\big)+2\big( {10\over x}\big) +\big(0\big)}\]

For x=1 this results in

    \[J_0(1)\approx {3090\over 4038}\approx 0.76523\]

How close is this? The correct value is J_0(1)=0.7651976866\cdots.
Note that this algorithm not only computes J_0(x) but simultaneously gives you all the Bessel functions below your starting point, if you start the recursion high enough. This lets you embed a very efficient Bessel-function computation in your code without relying on third-party libraries, which makes your code more portable.
Let's have it in Julia

function JBessel(x, N)
 # Miller downward recursion: returns J_0(x), J_1(x), ..., J_{6N}(x)
 J=[0.0 for j in 1:6*N]
 retv=[0.0 for j in 1:(6*N+1)]
 mask=[0.0 for j in 1:6*N]
 for j in 1:3N
  mask[2*j]=2.0              # selects J_2, J_4, ... in the normalizing sum
 end
 retv[1]=1.0                 # J_0(0)=1 and J_n(0)=0 for n>0
 if x==0.0
  return retv
 else
  J[6*N]=0.0; J[6*N-1]=1.0/x            # arbitrary starting values
  for k in 2:(6*N-1)
   J[6*N-k]=(Float64(2*(6*N-k+1))/x)*J[6*N-k+1]-J[6*N-k+2]
  end
  J0=(2.0/x)*J[1]-J[2]
  norm=J0+sum(mask .* J)                # 1 = J_0 + 2J_2 + 2J_4 + ...
  return vcat(J0/norm, J/norm)
 end
end

How good is it? Let N=2 and one single function call gives J_0,J_1,\cdots, J_{12}, with the low-order values good to about twelve decimal places (the last entry is just the seed value J_{12}=0)

JBessel(1.0,2)
13-element Vector{Float64}:
 0.7651976865587345
 0.44005058574537514
 0.1149034849320158
 0.01956335398268804
 0.0024766389641124407
 0.0002497577302114851
 2.093833800241026e-5
 1.5023258174380286e-6
 9.422344172214218e-8
 5.2492501162462465e-9
 2.6306037029024466e-10
 1.1957289558647486e-11
 0.0
