The conjugate gradient (CG) algorithm was originally designed as a mathematical technique for solving large systems of linear equations (Hestenes and Stiefel, 1952), but was later adapted to solving nonlinear optimisation problems (Fletcher and Reeves, 1964).
The technique loops over a set of directions *p*_{0}, *p*_{1}, ..., *p*_{n} which are all conjugate with respect to the Hessian (Nocedal and Wright, 1999), a property defined as

p_{i}^{T} ∇^{2}f_{k} p_{j} = 0,   for all i ≠ j.   (14.10)

By performing line searches over all directions *p*_{j}, the minimum of the quadratic model (14.5) at the position *θ*_{k} will be found in *n* or fewer iterations of the CG algorithm, where *n* is the total number of parameters in the model.
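To illustrate these two properties, the following is a minimal NumPy sketch of linear CG applied to a small symmetric positive-definite quadratic model (it is not relax's implementation; the function and variable names are illustrative). It stores each search direction so that the conjugacy condition (14.10) can be checked, and terminates within *n* iterations:

```python
import numpy as np

def conjugate_gradient(H, b, x0, tol=1e-12):
    """Minimise the quadratic 0.5*x^T H x - b^T x (equivalently solve H x = b)
    for symmetric positive-definite H, recording the search directions."""
    x = x0.astype(float)
    r = H @ x - b            # gradient of the quadratic model
    p = -r                   # first direction: steepest descent
    directions = []
    for _ in range(len(b)):  # at most n iterations for a quadratic
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (p @ H @ p)     # exact line search step
        x = x + alpha * p
        r_new = r + alpha * (H @ p)
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves formula
        directions.append(p)
        p = -r_new + beta * p
        r = r_new
    return x, directions

# A small SPD system: CG terminates in at most n = 3 iterations.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
H = A @ A.T + 3 * np.eye(3)
b = rng.standard_normal(3)
x, dirs = conjugate_gradient(H, b, np.zeros(3))

# The directions are mutually conjugate: p_i^T H p_j ~ 0 for i != j.
for i in range(len(dirs)):
    for j in range(i + 1, len(dirs)):
        assert abs(dirs[i] @ H @ dirs[j]) < 1e-8
assert np.allclose(H @ x, b)
```

Note that the sketch never forms or inverts a matrix beyond the Hessian-vector products, which is the property the next paragraph relies on.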
The technique performs well on large problems with many parameters as no matrices are calculated or stored.
The CG algorithms perform better than the steepest descent method, and preconditioning of the system can be used to further improve optimisation.
The available CG techniques include the Fletcher-Reeves algorithm, which was the original nonlinear conjugate gradient optimisation technique (Fletcher and Reeves, 1964); the Polak-Ribière method (Polak and Ribière, 1969); a modified Polak-Ribière method called the Polak-Ribière + method (Nocedal and Wright, 1999); and the Hestenes-Stiefel algorithm, which originates from a formula in Hestenes and Stiefel (1952).
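These four variants differ in the scalar β used to combine the new gradient with the previous direction when forming the next conjugate direction, p_{k+1} = -g_{k+1} + β p_{k}. The following sketch gives the standard formulas as presented in Nocedal and Wright (1999); the function name and variant labels are illustrative, not relax's API:

```python
import numpy as np

def beta_update(g_new, g_old, p_old, variant="FR"):
    """Scalar beta for the nonlinear CG update p_new = -g_new + beta * p_old.
    g_new, g_old are the gradients at the new and old iterates."""
    y = g_new - g_old
    if variant == "FR":    # Fletcher-Reeves
        return (g_new @ g_new) / (g_old @ g_old)
    if variant == "PR":    # Polak-Ribiere
        return (g_new @ y) / (g_old @ g_old)
    if variant == "PR+":   # Polak-Ribiere +: truncate negative values to zero
        return max(0.0, (g_new @ y) / (g_old @ g_old))
    if variant == "HS":    # Hestenes-Stiefel
        return (g_new @ y) / (p_old @ y)
    raise ValueError(f"unknown variant: {variant}")
```

On a strictly convex quadratic with exact line searches all four formulas coincide; they behave differently on general nonlinear functions, where the PR+ truncation guards against non-descent directions.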
As a line search is performed to find the minimum along each conjugate direction, both the backtracking and the Moré and Thuente auxiliary step-length selection algorithms will be tested with the CG algorithms.
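Of the two step-length selection methods, backtracking is the simpler: starting from a trial step, it repeatedly shrinks the step until the Armijo sufficient decrease condition holds. A minimal sketch (the parameter names alpha0, rho, and c are illustrative defaults, not relax settings), demonstrated on the Rosenbrock function along the steepest descent direction:

```python
import numpy as np

def backtracking(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo condition
    f(x + alpha*p) <= f(x) + c*alpha*grad_f(x).p is satisfied."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ p   # directional derivative; negative for a descent direction
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Example: one backtracking step on the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
x = np.array([-1.2, 1.0])
p = -grad(x)
alpha = backtracking(f, grad, x, p)
assert f(x + alpha * p) < f(x)
```

The Moré and Thuente method is more elaborate, additionally enforcing a curvature condition via interpolation, but it fills the same role of returning a step length along each conjugate direction.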

The relax user manual (PDF), created 2020-08-26.