4.2 Numerical differentiation

Numerical differentiation comprises algorithms for numerically estimating the derivative of a function. These methods tend to be computationally less demanding than numerical integration methods, but they are more sensitive to cancellation error.

The simplest method for approximating the derivative of a function is a finite difference approximation. The finite difference approximation of the derivative \(f'(x)\) of a continuous function \(f(x)\) at \(x\) is calculated as \[ f'(x) \approx \frac{f(x + h) - f(x)}{h}, \] for a small \(h\). This formula is affected by both truncation error (since it derives from a truncated Taylor series expansion of \(f(x)\)) and cancellation error (since a machine works with finite-precision arithmetic). It is necessary to choose a value of \(h\) that balances the two errors: it can be shown that a good choice in most cases is \(h = \sqrt{\epsilon}\), where \(\epsilon\) is the machine precision.
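As a minimal sketch of this idea, the following Python function (the name `forward_diff` and the default step are illustrative choices, not prescribed by the text) computes the finite difference approximation with \(h = \sqrt{\epsilon}\) by default:

```python
import math
import sys

def forward_diff(f, x, h=None):
    """Approximate f'(x) by the forward finite difference [f(x+h) - f(x)] / h.

    If h is not supplied, use sqrt(machine epsilon), the step size that
    balances truncation error against cancellation error.
    """
    if h is None:
        h = math.sqrt(sys.float_info.epsilon)  # about 1.5e-8 for doubles
    return (f(x + h) - f(x)) / h
```

For example, `forward_diff(math.sin, 0.0)` returns a value close to \(\cos(0) = 1\), with an error on the order of \(\sqrt{\epsilon}\).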

This finite difference formula is also known as forward differencing; other options are central differencing (\([f(x + h) - f(x - h)] / 2h\), more accurate but more computationally expensive), backward differencing (\([f(x) - f(x - h)] / h\)), the complex-step method (extremely powerful but with limited applicability, since the function must be evaluable at complex arguments), and Richardson extrapolation, which is more accurate but slower than plain finite differencing.