Computing Eigenvalues in Extended Precision

by Pavel Holoborodko on October 12, 2011

Eigenvalues and eigenvectors play an important role in many real-world applications, including control systems modeling, partial differential equations, data mining and clustering, chemistry, and vibration analysis, to name a few.

The main practical difficulty is that an eigenproblem might be ill-conditioned and hard to compute even when the matrix itself is well-conditioned.

This stems from the fact that computing eigenvalues is equivalent to finding the roots of a polynomial, which is proven to be an unavoidably ill-conditioned problem (even when the roots are well separated, as shown by J. H. Wilkinson [1-3]).
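Wilkinson's classic demonstration is easy to reproduce. The following Python/NumPy sketch (illustrative only; the toolbox itself is a MATLAB product) perturbs the x^19 coefficient of the degree-20 Wilkinson polynomial (x-1)(x-2)...(x-20) by just 2^-23 and watches several roots leave the real axis entirely:

```python
import numpy as np

# Coefficients of the Wilkinson polynomial (x-1)(x-2)...(x-20).
p = np.poly(np.arange(1, 21))

# Perturb the coefficient of x^19 by 2^-23 (about 1.2e-7, tiny
# compared to the coefficient's exact value of -210).
p[1] -= 2.0 ** -23

# Several well-separated real roots become complex conjugate pairs;
# Wilkinson reported roots such as 16.73 +/- 2.81i for this perturbation.
r = np.roots(p)
print(np.max(np.abs(r.imag)))
```

A relative change of about 6e-10 in a single coefficient moves roots by order one, which is exactly why double precision is not enough for such problems.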

More precisely, the conditioning of an eigenproblem depends on the clustering of the eigenvalues, their magnitudes, and the angles between the eigenvectors. All these properties are inherent to the problem, and ill-conditioning, if present, cannot be easily alleviated or even detected beforehand. The only case generally considered stable is computing the eigenvalues of a symmetric matrix.
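The effect of nearly parallel eigenvectors can be seen on a hypothetical 2x2 example (a Python/NumPy sketch, not taken from the post): a perturbation of size 1e-8 in one entry of a non-normal matrix moves its eigenvalues by about 0.1, an amplification of roughly 10^7:

```python
import numpy as np

# Non-normal matrix with nearly parallel eigenvectors
# (eigenvalues 1 and 1 + 1e-6).
A = np.array([[1.0, 1e6],
              [0.0, 1.0 + 1e-6]])

E = np.zeros((2, 2))
E[1, 0] = 1e-8                       # tiny perturbation in the (2,1) entry

lam = np.sort(np.linalg.eigvals(A + E).real)
print(lam)                            # eigenvalues split to about 1 +/- 0.1
```

For a symmetric matrix, by contrast, Weyl's inequality guarantees the eigenvalues move by no more than the norm of the perturbation, which is why the symmetric case is stable.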

At the moment, the only reliable way of dealing with ill-conditioned eigenproblems is to use extended-precision computations. In fact, there is no alternative, in contrast to solving systems of linear equations, where various preconditioners can be used to improve the conditioning.

Here we want to show two examples of such problems and how the toolbox solves them in comparison to MATLAB.


Arbitrary Precision Iterative Solver – GMRES

by Pavel Holoborodko on October 11, 2011

The multiprecision toolbox provides all the common routines for matrix computations with arbitrary precision – solvers, decompositions, eigensolvers, matrix functions, etc. Recently we have made an important addition – iterative methods for large/sparse matrices.

Direct solvers are designed to find the solution in a finite number of steps (e.g. Gaussian elimination). Such methods rely heavily on BLAS3 operations (matrix-matrix, xGEMM) and hence have O(n^3) complexity. Naturally this limits their applicability to large matrices, as the cost might be prohibitive.

Iterative methods avoid the cubic complexity growth by using only matrix-vector multiplications (BLAS2 – xGEMV or SpMV). Thus they make solving large dense and sparse systems possible.

However, this comes at the price of slow convergence, loss of orthogonality of the subspace vectors, etc. Some of these issues can be resolved by preconditioners and re-orthogonalization, which in turn increase the complexity of the algorithm.

Extended precision allows studying these effects, and can actually speed up the solver by letting it converge in fewer steps or avoid re-orthogonalization (as there is no loss of orthogonality). Our user manual has a good example of how convergence speed changes with precision.

The toolbox includes the following iterative solvers: BiCG, BiCGSTAB, BiCGSTABL, CGS, GMRES, MINRES and PCG. All methods are capable of working with arbitrary-precision sparse and dense matrices. The routines support the same calling variants as MATLAB's equivalent built-in functions, enabling drop-in replacement.

For example, GMRES supports various input types (matrix and function handle) and preconditioners:

x = gmres(A,b)
gmres(A,b,restart)
gmres(A,b,restart,tol)
gmres(A,b,restart,tol,maxit)
gmres(A,b,restart,tol,maxit,M)
gmres(A,b,restart,tol,maxit,M1,M2)
gmres(A,b,restart,tol,maxit,M1,M2,x0)
[x,flag] = gmres(A,b,...)
[x,flag,relres] = gmres(A,b,...)
[x,flag,relres,iter] = gmres(A,b,...)
[x,flag,relres,iter,resvec] = gmres(A,b,...)
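To sketch what GMRES does under the hood, here is a minimal unrestarted Python/NumPy version (illustrative only, not the toolbox's MATLAB implementation; no restarts or preconditioning, and x0 = 0 is assumed). Each iteration adds one matrix-vector product to a Krylov basis, orthogonalizes it, and solves a small least-squares problem for the minimum-residual solution:

```python
import numpy as np

def gmres_sketch(A, b, tol=1e-10, maxit=None):
    """Unrestarted GMRES with modified Gram-Schmidt (x0 = 0 assumed)."""
    n = len(b)
    maxit = maxit if maxit is not None else n
    Q = np.zeros((n, maxit + 1))          # orthonormal Krylov basis
    H = np.zeros((maxit + 1, maxit))      # Hessenberg projection of A
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(maxit):
        v = A @ Q[:, k]                   # the only BLAS2 operation per step
        for j in range(k + 1):            # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14 * beta:
            Q[:, k + 1] = v / H[k + 1, k]
        # Minimize ||beta*e1 - H*y|| over the current Krylov subspace.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) <= tol * beta:
            break
    return x
```

The restart argument in the signatures above simply caps k and starts the loop over from the current x, trading convergence speed for memory; the preconditioners M, M1, M2 replace A @ v with a preconditioned product.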


Importance of Multiprecision Computing

by Pavel Holoborodko on October 5, 2011

In this post we would like to show one of the numerous examples of why arbitrary precision computing is important. This particular example is taken from the excellent paper “Why and how to use arbitrary precision”, written by the main developers of MPFR (Vincent Lefèvre, Philippe Théveny, and Paul Zimmermann) in collaboration with Kaveh R. Ghazi.


Arbitrary Precision Gauss-Legendre Quadrature

by Pavel Holoborodko on October 4, 2011

Let us explore the derivation of the Gauss-Legendre quadrature’s abscissae and weights, accurate to any desired number of decimal places, and how this is implemented in the Multiprecision Computing Toolbox for MATLAB.

The User’s Guide section of our website shows an example of how to convert MATLAB programs to compute the Gauss-Legendre quadrature’s abscissae and weights with arbitrary precision. In the latest version of the toolbox, 3.1.4.1526, we decided to take this one step further.

We have implemented the calculation of the Gauss-Legendre quadrature’s coefficients in multiple precision as a native module of the toolbox. Now, Gaussian abscissae and weights can be computed with any desired degree of accuracy in MATLAB by a direct call to the mp.GaussLegendre(N) function from the Multiprecision Computing Toolbox.
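The standard way to compute these nodes and weights is Newton iteration on the roots of the Legendre polynomial P_n, evaluated by its three-term recurrence; a multiprecision implementation refines the same iteration to any accuracy. A double-precision Python sketch (illustrative only, not the toolbox's code):

```python
import numpy as np

def gauss_legendre(n):
    """Gauss-Legendre nodes and weights on [-1, 1] via Newton iteration."""
    k = np.arange(1, n + 1)
    # Chebyshev-like initial guesses for the roots of P_n.
    x = np.cos(np.pi * (k - 0.25) / (n + 0.5))
    for _ in range(100):
        # Three-term recurrence: j*P_j = (2j-1)*x*P_{j-1} - (j-1)*P_{j-2}.
        p0, p1 = np.ones_like(x), x.copy()
        for j in range(2, n + 1):
            p0, p1 = p1, ((2 * j - 1) * x * p1 - (j - 1) * p0) / j
        dp = n * (x * p1 - p0) / (x * x - 1)   # P_n'(x)
        dx = p1 / dp                           # Newton correction
        x -= dx
        if np.max(np.abs(dx)) < 1e-15:
            break
    w = 2.0 / ((1 - x * x) * dp * dp)          # standard weight formula
    return x, w
```

The weights sum to 2 (the length of the interval), and an n-point rule integrates polynomials up to degree 2n-1 exactly; in arbitrary precision the same iteration simply runs with a tighter stopping tolerance.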


Bug in MATLAB – atanh gives incorrect result

by Pavel Holoborodko on September 21, 2011

The inverse hyperbolic tangent function atanh() in MATLAB gives incorrect results for arguments outside (-1, 1):

>> atanh(-2)
 
ans =
 
         -0.549306144334055 - 1.5707963267949i

The imaginary part of the result has an incorrect sign; the correct principal value is -0.549306144334055 + 1.5707963267949i. The function acoth() is also affected by this bug, since it is computed through atanh(). Tested up to MATLAB R2011a.
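The correct sign is easy to check against the C99/Python convention, where the branch cut along (-inf, -1] is continuous from above, so atanh(-2) has real part -0.5*ln(3) and imaginary part +pi/2. A quick Python check:

```python
import cmath

# Principal value of atanh(-2): -0.5*ln(3) + i*pi/2.
z = cmath.atanh(-2)
print(z.real, z.imag)    # the imaginary part comes out positive
```

This agrees with 0.5*log((1 + z)/(1 - z)) evaluated on the principal branch of the logarithm, which is the value MCT, Maple, and Mathematica return.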


Multiprecision Computing Toolbox 3.1.3

by Pavel Holoborodko on September 20, 2011

A new version of the Multiprecision Computing Toolbox (MCT), 3.1.3.1472, is available for download. Improvements have been made to expression evaluation in high precision:

  • New functions: acot(), asec(), acsc(), acoth(), asech(), acsch(). All of them support complex arguments too.
  • Both symbols i and j can be used for the imaginary unit. Previously, only i was allowed.
