This chapter documents the linear algebra functions of Octave. Reference material for many of these functions may be found in Golub and Van Loan, Matrix Computations, 2nd Ed., Johns Hopkins, 1989, and in LAPACK Users' Guide, SIAM, 1992.
[dd, aa] = balance (a) returns aa = dd \ a * dd. aa is a matrix whose row and column norms are roughly equal in magnitude, and dd = p * d, where p is a permutation matrix and d is a diagonal matrix of powers of two. This allows the equilibration to be computed without roundoff. Results of eigenvalue calculation are typically improved by balancing first.
[cc, dd, aa, bb] = balance (a, b) returns aa = cc * a * dd and bb = cc * b * dd, where aa and bb have non-zero elements of approximately the same magnitude, and cc and dd are permuted diagonal matrices as in dd for the algebraic eigenvalue problem.
The eigenvalue balancing option opt is selected as follows:

"N", "n"
    No balancing; arguments are copied and the transformation(s) set to identity.

"P", "p"
    Permute argument(s) to isolate eigenvalues where possible.

"S", "s"
    Scale to improve accuracy of the computed eigenvalues.

"B", "b"
    Permute and scale, in that order. This is the default.
Algebraic eigenvalue balancing uses standard LAPACK routines.
Generalized eigenvalue problem balancing uses Ward's algorithm (SIAM Journal on Scientific and Statistical Computing, 1981).
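For example (an illustrative sketch; the badly scaled matrix is arbitrary):

  a = [1, 2e6; 3e-6, 4];
  [dd, aa] = balance (a);
  # aa = dd \ a * dd has rows and columns of comparable norm,
  # and eig (aa) agrees with eig (a) up to roundoff.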
cond (a) computes the (two-norm) condition number of a matrix. It is defined as norm (a) * norm (inv (a)), and is computed via a singular value decomposition.
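For example, the definition can be checked directly (an illustrative sketch; the test matrix is arbitrary):

  a = [1, 2; 3, 4];
  cond (a)                    # two-norm condition number
  norm (a) * norm (inv (a))   # agrees with cond (a) up to roundoff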
givens (x, y) returns a 2 x 2 orthogonal matrix G = [c s; -s' c] such that G [x; y] = [*; 0], with x and y scalars. For example,

  givens (1, 1)
    =>   0.70711   0.70711
        -0.70711   0.70711
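The rotation can be applied to zero out the second component of a vector. A small sketch (the 3-4-5 example makes the result exact up to roundoff):

  g = givens (3, 4);
  g * [3; 4]       # expected: [5; 0], up to roundoff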
norm (a, p) computes the p-norm of the matrix a. If the second argument is omitted, p = 2 is assumed.

If a is a matrix:

p = 1
    1-norm, the largest column sum of the absolute values of a.

p = 2
    Largest singular value of a.

p = Inf
    Infinity norm, the largest row sum of the absolute values of a.

p = "fro"
    Frobenius norm of a, sqrt (sum (diag (a' * a))).

If a is a vector or a scalar:

p = Inf
    max (abs (a)).

p = -Inf
    min (abs (a)).

other
    p-norm of a, (sum (abs (a) .^ p)) ^ (1/p).
null (a, tol) returns an orthonormal basis of the null space of a. The dimension of the null space is taken as the number of singular values of a not greater than tol. If the argument tol is missing, it is computed as

max (size (a)) * max (svd (a)) * eps
orth (a, tol) returns an orthonormal basis of the range space of a. The dimension of the range space is taken as the number of singular values of a greater than tol. If the argument tol is missing, it is computed as

max (size (a)) * max (svd (a)) * eps
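A small sketch of both functions on a rank-deficient matrix (the returned columns are orthonormal; signs may vary):

  a = [1, 2; 2, 4];    # rank 1
  null (a)             # one orthonormal basis vector for the null space
  orth (a)             # one orthonormal basis vector for the range space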
pinv (x, tol) returns the pseudoinverse of x; singular values less than tol are ignored. If the second argument is omitted, it is assumed that

tol = max (size (x)) * sigma_max (x) * eps,

where sigma_max (x) is the maximal singular value of x.
rank (a, tol) computes the rank of a, taken to be the number of singular values of a that are greater than the specified tolerance tol. If the second argument is omitted, it is assumed that

tol = max (size (a)) * sigma (1) * eps;

where eps is machine precision and sigma (1) is the largest singular value of a.
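A sketch illustrating both functions on a full-row-rank rectangular matrix (the Moore-Penrose condition a * pinv (a) * a = a holds up to roundoff):

  a = [1, 2, 3; 4, 5, 6];
  x = pinv (a);
  norm (a * x * a - a)   # effectively zero
  rank (a)               # 2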
trace (a) computes the trace of a, sum (diag (a)).
chol (a) computes the Cholesky factor r of the symmetric positive definite matrix a, where r' * r = a.
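For example (a sketch with an arbitrary positive definite matrix):

  a = [2, 1; 1, 2];
  r = chol (a);          # upper triangular
  norm (r' * r - a)      # effectively zero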
[p, h] = hess (a) computes the Hessenberg decomposition of the matrix a. The Hessenberg decomposition is usually used as the first step in an eigenvalue computation, but has other applications as well (see Golub, Nash, and Van Loan, IEEE Transactions on Automatic Control, 1979). The Hessenberg decomposition is

p * h * p' = a

where p is a square unitary matrix (p' * p = I, using complex-conjugate transposition) and h is upper Hessenberg (i >= j+2 => h (i, j) = 0).
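A small sketch checking the decomposition (the test matrix is arbitrary):

  a = magic (4);
  [p, h] = hess (a);
  norm (p * h * p' - a)   # effectively zero
  h                       # entries below the first subdiagonal are zero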
[l, u, p] = lu (a) computes the LU decomposition of a. For example, given the matrix a = [1, 2; 3, 4],

[l, u, p] = lu (a)

returns

l =

  1.00000  0.00000
  0.33333  1.00000

u =

  3.00000  4.00000
  0.00000  0.66667

p =

  0  1
  1  0
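As a check (a sketch; the factors above are rounded), the factors satisfy p * a = l * u up to roundoff:

  a = [1, 2; 3, 4];
  [l, u, p] = lu (a);
  norm (p * a - l * u)   # effectively zero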
[q, r] = qr (a) computes the QR factorization of a. For example, given the matrix a = [1, 2; 3, 4],

[q, r] = qr (a)

returns

q =

  -0.31623  -0.94868
  -0.94868   0.31623

r =

  -3.16228  -4.42719
   0.00000  -0.63246
The qr factorization has applications in the solution of least squares problems,

min norm (a * x - b)

for overdetermined systems of equations (i.e., a is a tall, thin matrix). The QR factorization is

q * r = a

where q is an orthogonal matrix and r is upper triangular.
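For example, the factorization can be used to solve a small least squares problem (an illustrative sketch; the data are arbitrary):

  a = [1, 1; 1, 2; 1, 3];            # tall, thin coefficient matrix
  b = [1; 2; 2];
  [q, r] = qr (a);
  x = r(1:2,:) \ (q(:,1:2)' * b);    # least squares solution
  # x agrees with a \ b up to roundoff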
The permuted QR factorization [q, r, p] = qr (a) forms the QR factorization such that the diagonal entries of r are decreasing in magnitude. For example, given the matrix a = [1, 2; 3, 4],

[q, r, p] = qr (a)

returns

q =

  -0.44721  -0.89443
  -0.89443   0.44721

r =

  -4.47214  -3.13050
   0.00000   0.44721

p =

  0  1
  1  0
The permuted qr factorization [q, r, p] = qr (a) allows the construction of an orthogonal basis of span (a).
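As a check (a sketch; the column-pivoting relation a * p = q * r assumed here is consistent with the example above):

  a = [1, 2; 3, 4];
  [q, r, p] = qr (a);
  norm (a * p - q * r)   # effectively zero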
The Schur decomposition is used to compute eigenvalues of a square matrix, and has applications in the solution of algebraic Riccati equations in control (see are and dare).
schur always returns s = u' * a * u, where u is a unitary matrix (u' * u is the identity) and s is upper triangular. The eigenvalues of a (and of s) are the diagonal elements of s. If the matrix a is real, then the real Schur decomposition is computed, in which the matrix u is orthogonal and s is block upper triangular with blocks of size at most 2 x 2 along the diagonal. The diagonal elements of s (or the eigenvalues of the 2 x 2 blocks, when appropriate) are the eigenvalues of a and s.
The eigenvalues are optionally ordered along the diagonal according to the value of opt. opt = "a" indicates that all eigenvalues with negative real parts should be moved to the leading block of s (used in are), opt = "d" indicates that all eigenvalues with magnitude less than one should be moved to the leading block of s (used in dare), and opt = "u", the default, indicates that no ordering of eigenvalues should occur. The leading k columns of u always span the a-invariant subspace corresponding to the k leading eigenvalues of s.
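A small sketch checking the decomposition (the test matrix is arbitrary and has real eigenvalues, so s is triangular):

  a = [0, 1; -2, -3];
  [u, s] = schur (a);
  norm (u' * a * u - s)   # effectively zero
  diag (s)                # the eigenvalues of a (here -1 and -2), possibly reordered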
svd (a) computes the singular value decomposition of a,

a = u * sigma * v'

The function svd normally returns the vector of singular values. If asked for three return values, it computes u, s, and v.
For example,
svd (hilb (3))
returns
ans =

  1.4083189
  0.1223271
  0.0026873
and
[u, s, v] = svd (hilb (3))
returns
u =

  -0.82704   0.54745   0.12766
  -0.45986  -0.52829  -0.71375
  -0.32330  -0.64901   0.68867

s =

  1.40832  0.00000  0.00000
  0.00000  0.12233  0.00000
  0.00000  0.00000  0.00269

v =

  -0.82704   0.54745   0.12766
  -0.45986  -0.52829  -0.71375
  -0.32330  -0.64901   0.68867
If given a second argument, svd
returns an economy-sized
decomposition, eliminating the unnecessary rows or columns of u or
v.
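As a check of the full decomposition above (a sketch):

  [u, s, v] = svd (hilb (3));
  norm (u * s * v' - hilb (3))   # effectively zero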
expm (a) returns the matrix exponential of a, defined by the Taylor series

expm (a) = I + a + a^2/2! + a^3/3! + ...

The Taylor series is not the way to compute the matrix exponential; see Moler and Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, SIAM Review, 1978. This routine uses Ward's diagonal Padé approximation method with three step preconditioning (SIAM Journal on Numerical Analysis, 1977). Diagonal Padé approximations are rational polynomials of matrices

Dq (a)^(-1) Nq (a)

whose Taylor series matches the first 2q+1 terms of the Taylor series above; direct evaluation of the Taylor series (with the same preconditioning steps) may be desirable in lieu of the Padé approximation when Dq (a) is ill-conditioned.
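For example, a nilpotent matrix makes the series terminate, so the result is exact (a small sketch):

  a = [0, 1; 0, 0];      # a^2 = 0, so the series terminates
  expm (a)               # expected: [1, 1; 0, 1], i.e. I + a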
kron (a, b) forms the Kronecker product of two matrices, defined block by block as

x = [a(i, j) b]

For example,

  kron (1:4, ones (3, 1))
    =>  1  2  3  4
        1  2  3  4
        1  2  3  4
[aa, bb, q, z] = qzhess (a, b) computes the Hessenberg-triangular decomposition of the matrix pencil (a, b), returning aa = q * a * z, bb = q * b * z, with q and z orthogonal. For example,

  [aa, bb, q, z] = qzhess ([1, 2; 3, 4], [5, 6; 7, 8])
    => aa = [ -3.02244, -4.41741; 0.92998, 0.69749 ]
    => bb = [ -8.60233, -9.99730; 0.00000, -0.23250 ]
    => q = [ -0.58124, -0.81373; -0.81373, 0.58124 ]
    => z = [ 1, 0; 0, 1 ]
The Hessenberg-triangular decomposition is the first step in Moler and Stewart's QZ decomposition algorithm.
qzval (a, b) computes the generalized eigenvalues of the matrix pencil a - lambda b. The arguments a and b must be real matrices. The algorithm is taken from Golub and Van Loan, Matrix Computations, 2nd edition.
syl (a, b, c) solves the Sylvester equation

A X + X B + C = 0

using standard LAPACK subroutines. For example,

  syl ([1, 2; 3, 4], [5, 6; 7, 8], [9, 10; 11, 12])
    => [ -0.50000, -0.66667; -0.66667, -0.50000 ]
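As a check of the equation above (a sketch):

  a = [1, 2; 3, 4];  b = [5, 6; 7, 8];  c = [9, 10; 11, 12];
  x = syl (a, b, c);
  norm (a * x + x * b + c)   # effectively zero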