Guest Editors’ Introduction: The Top 10 Algorithms
An algorithm is a finite sequence of computational steps that transforms an input into an output. (Cormen et al., 2009)
Algos is the Greek word for pain. Algor is Latin, to be cold. Neither is the root for algorithm, which stems instead from al-Khwarizmi, the name of the ninth-century Arab scholar whose book al-jabr wa’l muqabalah evolved into today’s high school algebra textbooks. Al-Khwarizmi stressed the importance of methodical procedures for solving problems. Were he around today, he’d no doubt be impressed by the advances in his eponymous approach.
Some of the very best algorithms of the computer age are highlighted in the January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society. Guest editors Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory and Francis Sullivan of the Center for Computing Sciences at the Institute for Defense Analyses put together a list they call the “Top Ten Algorithms of the Century.”
“We tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century,” Dongarra and Sullivan write. As with any top-10 list, their selections (and non-selections) are bound to be controversial, they acknowledge. When it comes to picking the algorithmic best, there seems to be no best algorithm.
1946: Metropolis Algorithm for Monte Carlo
John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.
Monte Carlo methods are the only practical choice for evaluating problems in high dimensions.
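To make the idea concrete, here is a minimal sketch of the Metropolis algorithm as a random-walk sampler. The target density (a standard Gaussian), the step size, and the function names are illustrative choices, not taken from the original articles.

```python
# A minimal sketch of the Metropolis algorithm: a symmetric random-walk
# proposal accepted with probability min(1, p(proposal)/p(current)).
# Target, step size, and sample count are illustrative choices.
import math
import random

def metropolis(log_density, x0, n_samples, step=1.0):
    """Draw n_samples from an unnormalized density via random-walk Metropolis."""
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        log_ratio = log_density(proposal) - log_density(x)
        # Accept with probability min(1, exp(log_ratio)).
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            x = proposal
        samples.append(x)
    return samples

if __name__ == "__main__":
    # Target: standard normal, up to a normalizing constant.
    draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
    mean = sum(draws) / len(draws)
    var = sum((d - mean) ** 2 for d in draws) / len(draws)
    print(f"sample mean ~ {mean:.3f}, sample variance ~ {var:.3f}")  # ~0 and ~1
```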
1947: Simplex Method for Linear Programming
George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.
The simplex method relies on the observation that the objective function’s maximum must occur at a corner (vertex) of the feasible region, the polytope bounded by the constraints.
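As a quick illustration, the small linear program below is solved with SciPy’s `linprog` (assuming SciPy is available; its default solver may use a simplex or interior-point variant). The objective and constraints are made up for the example; the reported optimum lands on a corner of the feasible region, as the argument above predicts.

```python
# A small linear program: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# linprog minimizes, so the objective is negated.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal corner:", res.x)     # expected (4, 0), a vertex of the polytope
print("maximum value:", -res.fun)   # expected 12
```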
1950: Krylov Subspace Iteration Methods
Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.
The importance of iterative algorithms in linear algebra stems from the simple fact that a direct approach requires \( O(N^3) \) work. The Krylov subspace iteration methods have led to a major change in how users deal with large, sparse, nonsymmetric matrix problems.
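Below is a minimal sketch of the conjugate gradient method, the Hestenes–Stiefel Krylov subspace iteration for symmetric positive definite systems; the test matrix (a 1D discrete Laplacian) and the tolerance are illustrative choices.

```python
# A minimal conjugate gradient iteration: each step enlarges the Krylov
# subspace span{b, Ab, A^2 b, ...} in which the approximate solution lives.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

if __name__ == "__main__":
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))
```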
1951: The Decompositional Approach to Matrix Computations
Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.
The decompositional approach, in which a matrix is expressed as a product of simpler factors that can be computed once and reused across computations, revolutionized the field.
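The sketch below illustrates the factor-once, solve-many payoff of the decompositional approach using an LU factorization from SciPy (assumed installed); the matrix and right-hand sides are random stand-ins.

```python
# Factor once, reuse the factors: the essence of the decompositional approach.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))

# The O(N^3) work happens once, in the factorization...
lu, piv = lu_factor(A)

# ...then each new right-hand side costs only O(N^2) triangular solves.
for _ in range(3):
    b = rng.standard_normal(500)
    x = lu_solve((lu, piv), b)
    print("residual:", np.linalg.norm(A @ x - b))
```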
1957: The Fortran Optimizing Compiler
John Backus leads a team at IBM in developing the Fortran optimizing compiler.
The creation of Fortran may rank as the single most important event in the history of computer programming: Finally, scientists (and others) could tell the computer what they wanted it to do, without having to descend into the netherworld of machine code. Although modest by modern compiler standards—Fortran I consisted of a mere 23,500 assembly-language instructions—the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalls in a recent history of Fortran I, II, and III, published in 1998 in the IEEE Annals of the History of Computing, the compiler “produced code of such efficiency that its output would startle the programmers who studied it.”
1959: QR Algorithm for Computing Eigenvalues
J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.
The QR algorithm has transformed the way the spectrum, the set of eigenvalues of a matrix, is computed.
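The stripped-down loop below sketches the unshifted QR iteration: factor, multiply the factors in reverse order, repeat. Practical implementations first reduce to Hessenberg form and add shifts and deflation; this version is only meant to show the core idea and assumes a symmetric test matrix with well-separated eigenvalues.

```python
# Unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k, so the
# eigenvalues are preserved while the iterates approach (block) triangular form.
import numpy as np

def qr_eigenvalues(A, iterations=500):
    Ak = A.copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.sort(np.diag(Ak))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    S = rng.standard_normal((6, 6))
    A = S + S.T                            # symmetric test matrix
    print(qr_eigenvalues(A))
    print(np.sort(np.linalg.eigvalsh(A)))  # reference eigenvalues
```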
1962: Quicksort Algorithm for Sorting
Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.
Hoare’s algorithm uses the age-old recursive strategy of divide and conquer to solve the problem.
Sorting is a central problem in many areas of computing, so it is no surprise to see an approach to solving it among the top 10. Quicksort is one of the best practical sorting algorithms for general inputs. In addition, its complexity analysis and its structure have been a rich source of inspiration for developing general algorithmic techniques for various applications.
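A short, copy-based quicksort in the divide-and-conquer style described above; practical implementations partition in place, and this version favors clarity over memory efficiency.

```python
# Quicksort: pick a pivot, partition into smaller/equal/larger, recurse.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    left  = [x for x in xs if x < pivot]
    mid   = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + mid + quicksort(right)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
```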
1965: Fast Fourier Transform
James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.
Easily the most far-reaching algorithm in applied mathematics, the FFT revolutionized signal processing. The underlying idea goes back to Gauss (who needed to calculate orbits of asteroids), but it was the Cooley–Tukey paper that made it clear how easily Fourier transforms can be computed. Like Quicksort, the FFT relies on a divide-and-conquer strategy to reduce an ostensibly \( O(N^2) \) chore to an \( O(N \log N) \) frolic. But unlike Quicksort, the implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms.
Mozart could listen to music just once and then write it down from memory without any mistakes. (Vernon, 1996)
FFT is an algorithm “the whole family can use.” The FFT is perhaps the most ubiquitous algorithm in use today to analyze and manipulate digital or discrete data. The FFT takes the operation count for the discrete Fourier transform from \( O(N^2) \) to \( O(N \log N) \).
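The recursive radix-2 Cooley–Tukey transform below, checked against NumPy’s FFT, shows the even/odd divide-and-conquer split that brings the cost down to \( O(N \log N) \); it assumes the input length is a power of two and is a teaching sketch rather than a production FFT.

```python
# Radix-2 decimation-in-time FFT: split into even- and odd-indexed samples,
# transform each half, then combine with twiddle factors.
import cmath
import numpy as np

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd  = fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddle[k] for k in range(n // 2)] + \
           [even[k] - twiddle[k] for k in range(n // 2)]

if __name__ == "__main__":
    x = np.random.default_rng(2).standard_normal(8)
    print(np.allclose(fft(list(x)), np.fft.fft(x)))   # True
```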
1977: Integer Relation Detection
Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.
Some recently discovered integer relation detection algorithms have become a centerpiece of the emerging discipline of “experimental mathematics” — the use of modern computer technology as an exploratory tool in mathematical research.
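As a small experiment in this spirit, the snippet below uses the PSLQ implementation in mpmath (PSLQ is a later algorithm by Ferguson and Bailey in the same family as the Ferguson–Forcade method named above; mpmath is assumed to be installed) to recover the integer relation satisfied by the golden ratio.

```python
# Integer relation detection with PSLQ: find integers (a, b, c), not all zero,
# with a*1 + b*phi + c*phi^2 = 0 for the golden ratio phi.
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 50                      # work with 50 decimal digits of precision

phi = (1 + sqrt(5)) / 2
relation = pslq([mpf(1), phi, phi**2])
print(relation)                  # expected [1, 1, -1] up to sign: phi^2 = phi + 1
```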
1987: Fast Multipole Method
Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.
The fast multipole algorithm was developed originally to calculate gravitational and electrostatic potentials. The method approximates far-field interactions with multipole expansions, allowing all pairwise interactions among \( N \) particles to be computed and combined in \( O(N) \) operations rather than \( O(N^2) \).
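The toy calculation below illustrates only the far-field principle behind the method: a distant cluster of sources can be summarized by a single low-order expansion (here just the monopole term) instead of being visited pairwise. It is a sketch of the idea, not an implementation of the fast multipole method.

```python
# Far-field idea: replace a well-separated cluster of sources by its total
# charge placed at the charge-weighted center, instead of summing pairwise.
import numpy as np

rng = np.random.default_rng(3)

sources = rng.uniform(-0.5, 0.5, size=(1000, 3))   # cluster near the origin
charges = rng.uniform(0.1, 1.0, size=1000)
target = np.array([20.0, 0.0, 0.0])                # far-away evaluation point

# Direct evaluation of the potential at the target.
direct = np.sum(charges / np.linalg.norm(target - sources, axis=1))

# Monopole approximation: total charge at the charge-weighted center.
total = charges.sum()
center = (charges[:, None] * sources).sum(axis=0) / total
approx = total / np.linalg.norm(target - center)

print(f"direct = {direct:.6f}, monopole approx = {approx:.6f}")
print(f"relative error ~ {abs(direct - approx) / direct:.2e}")
```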
- Barry A. Cipra, “The Best of the 20th Century: Editors Name Top 10 Algorithms,” SIAM News, 2000.
- Alex Townsend, “The Top 10 Algorithms from the 20th Century,” Cornell University.