Retrieving "Superlinear Convergence" from the archives
Cross-reference notes under review
While the archivists retrieve your requested volume, browse these clippings from nearby entries.
-
Convergence
Linked via "Superlinear Convergence"
Linear Convergence ($\rho = 1$): The error is reduced by a constant factor at each step. This is characteristic of the Bisection Method, which requires only a continuous function with a sign change and halves the bracketing interval each iteration ($C = 1/2$).
Superlinear Convergence ($1 < \rho < 2$): The error is cut by a factor that itself improves at every iteration; the ratio of successive errors tends to zero.
Quadratic Convergence ($\rho = 2$): The number of correct significant digits roughly doubles at each step. [Newton's Method](/entries…
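A minimal sketch of the linear and quadratic regimes, assuming the illustrative function $f(x) = x^2 - 2$ with root $\sqrt{2}$ (the function and starting values are this note's own choices, not drawn from the entry):

```python
import math

# Illustrative sketch (assumed function, not from the entry):
# compare error decay of bisection and Newton's method on
# f(x) = x**2 - 2, whose positive root is sqrt(2).
f = lambda x: x**2 - 2
df = lambda x: 2 * x
root = math.sqrt(2)

# Bisection: the bracketing interval halves each step, so the
# error *bound* shrinks by the constant factor C = 1/2 (linear).
lo, hi = 1.0, 2.0
for _ in range(5):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) < 0:
        hi = mid
    else:
        lo = mid
    print("bisection error:", abs(mid - root))

# Newton: the error is roughly squared each step, so the number
# of correct digits about doubles (quadratic).
x = 2.0
for _ in range(5):
    x = x - f(x) / df(x)
    print("newton error:", abs(x - root))
```
-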
Iterative Algorithm
Linked via "Superlinear"
| Order $\rho$ | Type | Example Method | Behavior |
| :---: | :--- | :--- | :--- |
| 1 | Linear | Fixed-Point Iteration | Error is reduced by a constant factor each step. |
| $\approx 1.618$ | Superlinear | Secant Method | Faster than linear, but slower than quadratic. |
| 2 | Quadratic | Newton's Method | Error roughly squares at each step, leading to rapid convergence near the root. |
| $\phi$ | The [Golden Ratio Iteration](/entries/golden-ratio-itera…
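As a hedged illustration of the secant method's order $\phi \approx 1.618$ from the table above, the sketch below runs the method on the illustrative $f(x) = x^2 - 2$ and estimates the empirical order from successive errors (function and starting points are this note's assumptions):

```python
import math

# Illustrative sketch (assumed function and starting points):
# run the secant method on f(x) = x**2 - 2 and estimate its
# empirical order, which should approach phi ~ 1.618.
f = lambda x: x**2 - 2
root = math.sqrt(2)

x0, x1 = 1.0, 2.0
errors = []
for _ in range(6):
    # Secant update: Newton's step with the derivative replaced by
    # a finite-difference slope through the last two iterates.
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    errors.append(abs(x2 - root))
    x0, x1 = x1, x2

# If e_{k+1} ~ C * e_k**rho, then
# rho ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).
for k in range(1, len(errors) - 1):
    if errors[k + 1] > 0 and errors[k] > 0:
        rho = math.log(errors[k + 1] / errors[k]) / math.log(errors[k] / errors[k - 1])
        print(f"step {k}: estimated order ~ {rho:.3f}")
```
-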
Linear Convergence
Linked via "superlinear convergence"
Linear convergence, often denoted by an order of convergence $\rho = 1$, describes the asymptotic behavior of a sequence of approximations whose error decreases by a constant multiplicative factor at each successive iteration. Formally, if $a_k$ is the sequence of approximations to a limit $L$, linear convergence implies the existence of a constant $C$ with $0 < C < 1$ such that
$$\lim_{k \to \infty} \frac{|a_{k+1} - L|}{|a_k - L|} = C$$
This constant $C$ is known as the asymptotic convergence factor or rate constant. While mathematically precis…
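A small numerical check of this definition, using the fixed-point iteration $a_{k+1} = \cos(a_k)$ as an illustrative linearly convergent sequence (the choice of iteration is this note's assumption, not from the entry):

```python
import math

# Illustrative sketch (assumed iteration, not from the entry):
# the fixed-point iteration a_{k+1} = cos(a_k) converges linearly
# to the limit L with L = cos(L) (the Dottie number, ~0.739085).
a = 1.0
iterates = [a]
for _ in range(40):
    a = math.cos(a)
    iterates.append(a)

L = iterates[-1]  # treat the final iterate as the limit L

# The ratio |a_{k+1} - L| / |a_k - L| should settle near the
# asymptotic convergence factor C = |cos'(L)| = |sin(L)| ~ 0.674.
for k in range(5, 10):
    ratio = abs(iterates[k + 1] - L) / abs(iterates[k] - L)
    print(f"k={k}: ratio ~ {ratio:.4f}")
print("predicted C:", abs(math.sin(L)))
```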