If a self-calling function can be written as a tail-call recursion, the compiler can optimize it in a way that other expressions of the recursion do not allow. A tail call is the last code executed in a function, either at the end of the function body or at the end of a conditional branch. A function call is tail recursive if there is nothing to do after the called function returns except return its value. Space is allocated on the stack when a procedure is called and is removed upon return from the procedure. The block of information stored on the stack to effect a procedure call and return is called the stack frame. In general, the stack frame for a procedure contains all the information necessary to save and restore the state of the procedure. In a tail-call recursion the compiler replaces the caller in place with the callee, so that instead of nesting the stack deeper, the current stack frame is reused. This saves significantly on space and on allocation calls, and makes tail-call recursion as efficient as iteration. Here is an example in C of tail-call recursion for the factorial:
long faux(long x, long y)
{
    if (x == 0)
        return y;
    return faux(x - 1, x * y);   /* tail call: nothing left to do after it returns */
}

long f(long x)
{
    return faux(x, 1);
}

An optimizing compiler reuses the stack frame at each tail call, so faux compiles to simple iteration:

while (x != 0) { y = x * y; x = x - 1; }
return y;
A useful strategy for writing tail-call recursive functions is to use an accumulator variable. For example we compute the binomial coefficient $\binom{n}{k}$. We do this in lisp:
(defun binomial (n k)
  "Binomial coefficient"
  (if (or (< n k) (< k 0))
      nil
      (binomaux n k 1)))

(defun binomaux (n k ac)   ; ac is our accumulator
  (if (or (= k 0) (= n k))
      ac
      (binomaux (- n 1) (- k 1) (* ac (/ n k)))))
This works splendidly in lisp and REDUCE, because both do the division (/ n k) exactly, but a literal translation to C will do integer division in that return. How can you make this work in C? Trace through all the function calls for, say, $\binom{5}{2}$ to see why things can go wrong in C but not in lisp. Here is one way, accumulating numerator and denominator separately:
int binomial(int n, int k)
{
    if ((n < k) || (k < 0))
        return 0;
    else
        return binomaux(n, k, 1, 1);
}

int binomaux(int n, int k, int acn, int acd)
{
    if ((k == 0) || (n == k))
        return acn / acd;
    else
        return binomaux(n - 1, k - 1, acn * n, acd * k);
}
Many three-term recursions are numerically unstable: the round-off errors made by the machine are amplified at each step when the recursion is run towards higher values. In this case one uses the Miller algorithm, working down the chain from higher to lower values instead.
For example, Bessel functions obey the three-term recursion
\[ J_{n-1}(x) = \frac{2n}{x} J_n(x) - J_{n+1}(x), \]
which, run towards higher $n$, can be frightfully unstable. The Miller algorithm would work in the following way; to compute $J_n(x)$, we begin high up, at some order $M$ well above $n$, and {\bf pick} the arbitrary seed values
\[ \tilde J_{M+1}(x) = 0, \qquad \tilde J_M(x) = 1, \]
and go downwards:
\[ \tilde J_{n-1}(x) = \frac{2n}{x} \tilde J_n(x) - \tilde J_{n+1}(x), \qquad n = M, M-1, \ldots, 1, \]
and finally arrive at a value $\tilde J_0(x)$.
This result is not correct, since in reality $J_M(x) \neq 1$: our seed was arbitrary. Now we make use of the growth of the recursion; whatever $J_M(x)$ truly was, our value for $\tilde J_0(x)$ is very large compared to the value for $\tilde J_M(x)$, and so it will clearly dominate in the well known sum
\[ J_0(x) + 2 \sum_{k=1}^{\infty} J_{2k}(x) = 1. \]
Each of our values for the various intervening Bessel functions is therefore off by a constant multiplicative factor
\[ \lambda = \tilde J_0(x) + 2 \sum_{2k \leq M} \tilde J_{2k}(x), \]
which we now divide by to restore our values to the correct values:
\[ J_n(x) \approx \tilde J_n(x) / \lambda . \]
For $x = 1$ this results in
\[ J_0(1) \approx 0.7651976865587345, \]
the value the Julia code at the end of this section produces. How close is this? The correct value is $J_0(1) = 0.7651976865579666\ldots$, so we agree to about eleven decimal places.
Note that this algorithm not only computes $J_0(x)$ but simultaneously gives you all the Bessel functions $J_n(x)$ up to the starting order, if you start the recursion high enough. This lets you embed a very efficient Bessel function computation in your code without relying on third-party libraries, and makes your code more portable.
Let’s have it in Julia
function JBessel(x, N)
    J = [0.0 for j in 1:6*N]
    mask = [0.0 for j in 1:6*N]
    for j in 1:3*N
        mask[2*j] = 2.0                  # weight 2 on the even orders
    end
    if x == 0.0
        return vcat(1.0, J)              # J_0(0) = 1 and J_n(0) = 0 for n >= 1
    end
    J[6*N] = 0.0                         # arbitrary seed values, high up
    J[6*N-1] = 1.0/x
    for k in 2:(6*N-1)
        J[6*N-k] = (Float64(2*(6*N-k+1))/x)*J[6*N-k+1] - J[6*N-k+2]
    end
    J0 = (2.0/x)*J[1] - J[2]
    norm = J0 + sum(mask .* J)           # J_0 + 2*sum of even orders; equals 1 for the true values
    return vcat(J0/norm, J/norm)
end
How good is it? Let $x = 1.0$ and $N = 2$: we get about eleven correct decimal places in one single function call.
JBessel(1.0, 2)

13-element Vector{Float64}:
 0.7651976865587345
 0.44005058574537514
 0.1149034849320158
 0.01956335398268804
 0.0024766389641124407
 0.0002497577302114851
 2.093833800241026e-5
 1.5023258174380286e-6
 9.422344172214218e-8
 5.2492501162462465e-9
 2.6306037029024466e-10
 1.1957289558647486e-11
 0.0