Algorithms: The “lost art” of interpolation

Interpolation has fallen by the wayside since the computer replaced precise mathematical tables, but the technique is still important in the analysis of laboratory data: experiments measure the values of some quantity as a function of some parameter, and interpolation lets one make predictions between the bracketing known values.
Suppose that you have a table or array of the values of a function

    \[\{x_0,x_1,\cdots,x_N\}, \qquad \{y_0,y_1,\cdots,y_N\}=\{f(x_0),f(x_1),\cdots, f(x_N)\}\]

One can perform a linear interpolation to obtain y(x_i+\alpha) for
0\le\alpha<x_{i+1}-x_i by assuming that the curve is well approximated by a straight line between adjacent points listed in the table

    \[y(x_i+\alpha)\approx y(x_i)+{y(x_{i+1}) -y(x_i)\over x_{i+1}-x_i}\, \alpha\]

If the points x_0, x_1, x_2,\cdots are evenly spaced with x_{i+1}-x_i=\delta, Lagrange interpolation can provide better accuracy. Notice that

    \[y(x+\delta)=y(x)+\delta y'(x)+{1\over 2!} \delta^2 y''(x)+\cdots=e^{\delta {d\over dx}} y(x)=E \, y(x)\]

in which E is the shift operator

    \[E \, y(x)=y(x+\delta)\]

Then

    \[y(x+\alpha\delta)=y(x)+\alpha\delta y'(x)+{1\over 2!} (\alpha\delta)^2 y''(x)+\cdots\]

    \[=e^{\alpha\delta {d\over dx}} y(x)=E^\alpha \, y(x)\]

which we can expand as

    \[y(x+\alpha\delta)=E^\alpha y(x)=(1+(E-1))^\alpha y(x)\]

    \[=y(x)+\alpha(E-1) y(x)+{\alpha(\alpha-1)\over 2!} (E-1)^2 y(x)\]

    \[+{\alpha(\alpha-1)(\alpha-2)\over 3!} (E-1)^3 y(x)+\cdots\]

Collect terms together and use the fact that

    \[E^n y(x_0)=y(x_0+n\delta)=y(x_n)=y_n\]

then

    \[y(x_0+\alpha\delta)=-{(\alpha-1)(\alpha-2)(\alpha-3)\over 6} y_0 +{\alpha(\alpha-2)(\alpha-3)\over 2} y_1\]

    \[-  {\alpha(\alpha-1)(\alpha-3)\over 2} y_2+{\alpha(\alpha-1)(\alpha-2)\over 6} y_3\]

and so on; the result here is correct through third-order differences. Notice that if \alpha=n for 0\le n\le 3, we simply recover y(x_0+n\delta)=y(x_n).

This can provide a significant improvement over linear interpolation, and can be used to develop predictor-corrector numerical integration algorithms. For example, consider a function y(x) and its derivative y'(x). Applying the formula with \alpha=-1 (extrapolating backward one step, from the base point x_1) to y'(x), we obtain

    \[y'(x_0)=4 y'(x_1)-6 y'(x_2) +4 y'(x_3) -y'(x_4)\]

Now consider numerically integrating y'(x) from x_0 to x_4=x_0+4\delta by Simpson’s rule

    \[\int_{x_0}^{x_4} y'(x) dx=y(x_4)-y(x_0)={\delta\over 3}\big( y'(x_0)\]

    \[+4 y'(x_1)+2 y'(x_2)+4 y'(x_3) + y'(x_4)\big)\]

and into this insert the previous extrapolation formula for y'(x_0); the y'(x_4) terms then cancel, and we obtain

    \[y_4=y_0 +{8\delta \over 3}\big( y'_1 -{1\over 2} y'_2 +y'_3\big)\]

Bashforth-Adams-Milne Integration
This method is a predictor-corrector method based on the interpolation formula that we just derived. Consider a differential equation

    \[y'=F(y,x)\]

and make a guess solution y_g(x) that satisfies the boundary condition (say y(0)=0), and build up a table from it. From the derivatives of the guess solution, make the predictions

    \[y_{n+4}=y_n +{8\delta \over 3}\big( y'_{n+1} -{1\over 2} y'_{n+2} +y'_{n+3}\big)\]

then compute new values of the derivatives from

    \[y'_n =F(y_n, x_n)\]

and then correct the prediction using Simpson’s rule

    \[y_{n+2}=y_{n}+{\delta\over 3}\big( y'_{n}+4 y'_{n+1}+y'_{n+2}\big)\]

We simply iterate the procedure until the desired accuracy is achieved. This method does not suffer from the accumulated errors of the Euler or Euler-Cromer methods.
Here is a simple implementation in C to solve y'=1-y^2, y(0)=0.

/* BAM algorithm, for y'=F(y,x), in this case F(y,x)=1-y^2 */
#include <stdio.h>
#define N 1000

double x[N], y[N], yp[N];
double func(double Y, double X);

int main(void)
{
    double dx = 0.001;
    double *yptr = &y[0], *ypptr = &yp[0], *xptr = &x[0];
    int n, m;

    /* initialise the table with a guess satisfying y(0)=0 */
    for (n = 0; n < N; n++) {
        x[n] = (double)n * dx;
        y[n] = x[n];
        yp[n] = 1.0;
    }

    for (n = 0; n < 1; n++) {  /* iterate; begin with one cycle */
        for (m = 4; m < N; m++) {
            /* predict y[m] from the interpolation formula */
            *(yptr+m) = *(yptr+m-4)
                + (8.0*dx/3.0)*( *(ypptr+m-3) - 0.5*(*(ypptr+m-2))
                                 + *(ypptr+m-1) );
            /* recompute the derivative at the predicted point */
            *(ypptr+m) = func(*(yptr+m), x[m]);
            /* correct with Simpson's rule */
            *(yptr+m) = *(yptr+m-2)
                + (dx/3.0)*( *(ypptr+m-2) + 4.0*(*(ypptr+m-1))
                             + *(ypptr+m) );
        }
    }  /* end of iterative loop */

    for (n = 0; n < N-4; n++)
        printf("%f\t%f\n", *(xptr+n), *(yptr+n));  /* x[n] \t y[n] to STDOUT */

    return 0;
}

double func(double Y, double X)  /* user definable */
{
    return 1.0 - Y*Y;
}

I have used pointers to represent the arrays; I hope pointers don't scare you too much.
