Largest absolute value of an operator's eigenvalues
In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues.[1] More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by $\rho(\cdot)$.
Definition
Matrices
Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues of a matrix $A \in \mathbb{C}^{n \times n}$. The spectral radius of $A$ is defined as

$$\rho(A) = \max\left\{|\lambda_1|, \dotsc, |\lambda_n|\right\}.$$
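As a quick numerical check of this definition, here is a minimal sketch (assuming NumPy is available; the helper `spectral_radius` is our own name, not a library function):

```python
import numpy as np

def spectral_radius(A: np.ndarray) -> float:
    # The largest absolute value among the eigenvalues of A.
    return max(abs(np.linalg.eigvals(A)))

# Matrix borrowed from the numerical example below; its eigenvalues are 5, 10, 10.
A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])
print(spectral_radius(A))  # ~10.0
```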
The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand, $\rho(A) \leqslant \|A\|$ for every natural matrix norm $\|\cdot\|$; and on the other hand, Gelfand's formula states that $\rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}$. Both of these results are shown below.
However, the spectral radius does not necessarily satisfy $\|A\mathbf{v}\| \leqslant \rho(A)\|\mathbf{v}\|$ for arbitrary vectors $\mathbf{v} \in \mathbb{C}^n$. To see why, let $r > 1$ be arbitrary and consider the matrix

$$C_r = \begin{pmatrix} 0 & r^{-1} \\ r & 0 \end{pmatrix}.$$
The characteristic polynomial of $C_r$ is $\lambda^2 - 1$, so its eigenvalues are $\{-1, 1\}$ and thus $\rho(C_r) = 1$. However, $C_r \mathbf{e}_1 = r \mathbf{e}_2$. As a result,

$$\|C_r \mathbf{e}_1\| = r > 1 = \rho(C_r)\|\mathbf{e}_1\|.$$
As an illustration of Gelfand's formula, note that $\|C_r^k\|^{1/k} \to 1$ as $k \to \infty$, since $C_r^k = I$ if $k$ is even and $C_r^k = C_r$ if $k$ is odd.
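The counterexample is easy to verify numerically; a minimal sketch assuming NumPy, with the arbitrary choice $r = 3$:

```python
import numpy as np

r = 3.0  # any r > 1 works
C = np.array([[0.0, 1.0 / r],
              [r, 0.0]])

rho = max(abs(np.linalg.eigvals(C)))  # eigenvalues are -1 and 1, so rho = 1
e1 = np.array([1.0, 0.0])
print(np.linalg.norm(C @ e1))         # r = 3.0 > rho * ||e1|| = 1.0

# Gelfand's formula in action: ||C^k||^(1/k) -> 1 even though ||C||_2 = r
for k in (1, 2, 9, 10, 99, 100):
    Ck = np.linalg.matrix_power(C, k)
    print(k, np.linalg.norm(Ck, 2) ** (1.0 / k))
```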
A special case in which $\|A\mathbf{v}\| \leqslant \rho(A)\|\mathbf{v}\|$ for all $\mathbf{v} \in \mathbb{C}^n$ is when $A$ is a Hermitian matrix and $\|\cdot\|$ is the Euclidean norm. This is because any Hermitian matrix is diagonalizable by a unitary matrix, and unitary matrices preserve vector length. As a result,

$$\|A\mathbf{v}\| = \|U^* D U \mathbf{v}\| = \|D U \mathbf{v}\| \leqslant \rho(A)\|U\mathbf{v}\| = \rho(A)\|\mathbf{v}\|.$$
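The Hermitian special case can likewise be checked numerically; a small sketch assuming NumPy, using a randomly generated Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                     # Hermitian by construction

rho = max(abs(np.linalg.eigvalsh(A)))  # eigvalsh is specialized to Hermitian matrices
for _ in range(100):
    v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    # Euclidean norm: ||A v|| <= rho(A) * ||v|| holds for every v
    assert np.linalg.norm(A @ v) <= rho * np.linalg.norm(v) + 1e-9
print("inequality verified")
```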
Bounded linear operators
In the context of a bounded linear operator $A$ on a Banach space, the eigenvalues need to be replaced with the elements of the spectrum of the operator, i.e., the values $\lambda$ for which $A - \lambda I$ is not bijective. We denote the spectrum by

$$\sigma(A) = \left\{\lambda \in \mathbb{C} : A - \lambda I \ \text{is not bijective}\right\}.$$
The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum:

$$\rho(A) = \sup_{\lambda \in \sigma(A)} |\lambda|.$$
Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting $\|\cdot\|$ denote the operator norm, we have

$$\rho(A) = \lim_{k \to \infty} \|A^k\|^{\frac{1}{k}} = \inf_{k \in \mathbb{N}^*} \|A^k\|^{\frac{1}{k}}.$$
A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.
Graphs
The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.
This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e., there exists some real number $C$ such that the degree of every vertex of the graph is smaller than $C$). In this case, for the graph $G$ define:
$$\ell^2(G) = \left\{ f : V(G) \to \mathbf{R} \ : \ \sum\nolimits_{v \in V(G)} \left|f(v)\right|^2 < \infty \right\}.$$
Let $\gamma$ be the adjacency operator of $G$:

$$\begin{cases} \gamma : \ell^2(G) \to \ell^2(G) \\ (\gamma f)(v) = \sum_{(u,v) \in E(G)} f(u) \end{cases}$$
The spectral radius of $G$ is defined to be the spectral radius of the bounded linear operator $\gamma$.
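For a finite graph this reduces to an ordinary eigenvalue computation on the adjacency matrix. A brief sketch assuming NumPy; the 4-cycle is used because a $d$-regular graph has spectral radius exactly $d$:

```python
import numpy as np

# Adjacency matrix of the 4-cycle (vertices 0-1-2-3-0), a 2-regular graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
# The adjacency matrix is symmetric, so eigvalsh applies
print(max(abs(np.linalg.eigvalsh(A))))  # 2.0
```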
Upper bounds
Upper bounds on the spectral radius of a matrix
The following proposition gives simple yet useful upper bounds on the spectral radius of a matrix.
Proposition. Let $A \in \mathbb{C}^{n \times n}$ with spectral radius $\rho(A)$ and let $\|\cdot\|$ be a consistent matrix norm. Then for each integer $k \geqslant 1$:

$$\rho(A) \leq \|A^k\|^{\frac{1}{k}}.$$
Proof
Let $(\mathbf{v}, \lambda)$ be an eigenvector-eigenvalue pair for a matrix $A$. By the sub-multiplicativity of the matrix norm, we get:

$$|\lambda|^k \|\mathbf{v}\| = \|\lambda^k \mathbf{v}\| = \|A^k \mathbf{v}\| \leq \|A^k\| \cdot \|\mathbf{v}\|.$$
Since $\mathbf{v} \neq 0$, we have $|\lambda|^k \leq \|A^k\|$ and therefore

$$\rho(A) \leq \|A^k\|^{\frac{1}{k}},$$

concluding the proof.
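The proposition is straightforward to test numerically; a sketch assuming NumPy, reusing the matrix from the numerical example below (the 1-, 2-, $\infty$- and Frobenius norms are all consistent):

```python
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])
rho = max(abs(np.linalg.eigvals(A)))           # ~10.0

for k in (1, 2, 5, 10, 50):
    Ak = np.linalg.matrix_power(A, k)
    for ord_ in (1, 2, np.inf, 'fro'):         # all four norms are consistent
        assert rho <= np.linalg.norm(Ak, ord_) ** (1.0 / k) + 1e-9
print("rho(A) <= ||A^k||^(1/k) verified")
```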
Upper bounds for the spectral radius of a graph
There are many upper bounds for the spectral radius of a graph in terms of its number $n$ of vertices and its number $m$ of edges. For instance, if

$$\frac{(k-2)(k-3)}{2} \leq m - n \leq \frac{k(k-3)}{2}$$

where $3 \leq k \leq n$ is an integer, then[2]

$$\rho(G) \leq \sqrt{2m - n - k + \frac{5}{2} + \sqrt{2m - 2n + \frac{9}{4}}}.$$
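As a sanity check of this bound (a sketch assuming NumPy): for the complete graph $K_5$ we have $n = 5$ and $m = 10$, the choice $k = 5$ satisfies the hypothesis, and the bound turns out to be attained:

```python
import numpy as np

# Complete graph K5: n = 5, m = 10, and k = 5 satisfies
# (k-2)(k-3)/2 = 3 <= m - n = 5 <= k(k-3)/2 = 5
n, m, k = 5, 10, 5
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K5
rho = max(abs(np.linalg.eigvalsh(A)))    # 4.0

bound = np.sqrt(2*m - n - k + 5/2 + np.sqrt(2*m - 2*n + 9/4))
print(rho, bound)                        # 4.0 4.0 -- the bound is attained here
```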
Symmetric matrices
In particular, for real-valued matrices $A$ the inequality $\rho(A) \leq \|A\|_2$ holds, where $\|\cdot\|_2$ denotes the spectral norm. In the case where $A$ is symmetric, this inequality is tight:
Theorem. Let $A \in \mathbb{R}^{n \times n}$ be symmetric, i.e., $A = A^T$. Then it holds that

$$\rho(A) = \|A\|_2.$$
Proof
Let $(v_i, \lambda_i)_{i=1}^n$ be the eigenpairs of $A$. Due to the symmetry of $A$, all $v_i$ and $\lambda_i$ are real-valued, and the eigenvectors $v_i$ can be chosen to be orthonormal.
By the definition of the spectral norm, there exists an $x \in \mathbb{R}^n$ with $\|x\|_2 = 1$ such that

$$\|A\|_2 = \|Ax\|_2.$$
Since the eigenvectors $v_i$ form a basis of $\mathbb{R}^n$, there exist coefficients $\delta_1, \ldots, \delta_n \in \mathbb{R}$ such that $x = \sum_{i=1}^n \delta_i v_i$, which implies that

$$Ax = \sum_{i=1}^n \delta_i A v_i = \sum_{i=1}^n \delta_i \lambda_i v_i.$$
From the orthonormality of the eigenvectors $v_i$ it follows, by the Pythagorean theorem, that

$$\|Ax\|_2^2 = \left\|\sum_{i=1}^n \delta_i \lambda_i v_i\right\|_2^2 = \sum_{i=1}^n \delta_i^2 \lambda_i^2$$

and

$$\|x\|_2^2 = \left\|\sum_{i=1}^n \delta_i v_i\right\|_2^2 = \sum_{i=1}^n \delta_i^2.$$

Since $x$ is chosen such that it maximizes $\|Ax\|_2$ while satisfying $\|x\|_2 = 1$, the coefficients $\delta_i$ must be such that they maximize $\sum_{i=1}^n \delta_i^2 \lambda_i^2$ subject to $\sum_{i=1}^n \delta_i^2 = 1$. This is achieved by setting $\delta_k = 1$ for $k = \operatorname{arg\,max}_{i=1,\ldots,n} |\lambda_i|$ and $\delta_i = 0$ otherwise, yielding a value of

$$\|Ax\|_2 = |\lambda_k| = \rho(A).$$
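A quick numerical confirmation of the theorem, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                        # symmetrize a random matrix

rho = max(abs(np.linalg.eigvalsh(A)))    # spectral radius
spec_norm = np.linalg.norm(A, 2)         # spectral norm = largest singular value
print(abs(rho - spec_norm) < 1e-12)      # True: rho(A) = ||A||_2 for symmetric A
```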
Power sequence
The spectral radius is closely related to the convergence behavior of the power sequence of a matrix, as shown by the following theorem.
Theorem. Let $A \in \mathbb{C}^{n \times n}$ with spectral radius $\rho(A)$. Then $\rho(A) < 1$ if and only if

$$\lim_{k \to \infty} A^k = 0.$$

On the other hand, if $\rho(A) > 1$, then $\lim_{k \to \infty} \|A^k\| = \infty$. The statement holds for any choice of matrix norm on $\mathbb{C}^{n \times n}$.
Proof
Assume that $A^k$ goes to zero as $k$ goes to infinity. We will show that $\rho(A) < 1$. Let $(\mathbf{v}, \lambda)$ be an eigenvector-eigenvalue pair for $A$. Since $A^k \mathbf{v} = \lambda^k \mathbf{v}$, we have

$$0 = \left(\lim_{k \to \infty} A^k\right)\mathbf{v} = \lim_{k \to \infty} \left(A^k \mathbf{v}\right) = \lim_{k \to \infty} \lambda^k \mathbf{v} = \mathbf{v} \lim_{k \to \infty} \lambda^k.$$
Since $\mathbf{v} \neq 0$ by hypothesis, we must have $\lim_{k \to \infty} \lambda^k = 0$, which implies $|\lambda| < 1$. Since this must be true for any eigenvalue $\lambda$, we can conclude that $\rho(A) < 1$.
Now, assume the spectral radius of $A$ is less than $1$. From the Jordan normal form theorem, we know that for all $A \in \mathbb{C}^{n \times n}$, there exist $V, J \in \mathbb{C}^{n \times n}$ with $V$ non-singular and $J$ block diagonal such that

$$A = V J V^{-1}$$
with

$$J = \begin{bmatrix} J_{m_1}(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}(\lambda_s) \end{bmatrix}$$
where

$$J_{m_i}(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i & 1 \\ 0 & 0 & \cdots & 0 & \lambda_i \end{bmatrix} \in \mathbb{C}^{m_i \times m_i}, \quad 1 \leq i \leq s.$$
It is easy to see that

$$A^k = V J^k V^{-1}$$

and, since $J$ is block diagonal,

$$J^k = \begin{bmatrix} J_{m_1}^k(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}^k(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}^k(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}^k(\lambda_s) \end{bmatrix}$$
Now, a standard result on the $k$-th power of an $m_i \times m_i$ Jordan block states that, for $k \geq m_i - 1$:

$$J_{m_i}^k(\lambda_i) = \begin{bmatrix} \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \binom{k}{2}\lambda_i^{k-2} & \cdots & \binom{k}{m_i-1}\lambda_i^{k-m_i+1} \\ 0 & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \cdots & \binom{k}{m_i-2}\lambda_i^{k-m_i+2} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} \\ 0 & 0 & \cdots & 0 & \lambda_i^k \end{bmatrix}$$
Thus, if $\rho(A) < 1$, then $|\lambda_i| < 1$ for all $i$. Hence for all $i$ we have

$$\lim_{k \to \infty} J_{m_i}^k = 0,$$

which implies

$$\lim_{k \to \infty} J^k = 0.$$
Therefore,

$$\lim_{k \to \infty} A^k = \lim_{k \to \infty} V J^k V^{-1} = V \left(\lim_{k \to \infty} J^k\right) V^{-1} = 0.$$
On the other hand, if $\rho(A) > 1$, there is at least one entry of $J^k$ that does not remain bounded as $k$ increases, thereby proving the second part of the statement.
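A small sketch assuming NumPy illustrates both directions of the theorem; note that the first matrix has $\|B\|_2 > 1$, so its powers grow transiently before decaying, which is exactly the behavior the Jordan-block argument accounts for:

```python
import numpy as np

B = np.array([[0.5, 1.0],
              [0.0, 0.5]])   # a Jordan block with rho(B) = 0.5 < 1
C = np.array([[1.1, 0.0],
              [0.0, 0.2]])   # rho(C) = 1.1 > 1

for k in (1, 5, 20, 100):
    print(k,
          np.linalg.norm(np.linalg.matrix_power(B, k), 2),   # -> 0
          np.linalg.norm(np.linalg.matrix_power(C, k), 2))   # -> infinity
```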
Gelfand's formula
Gelfand's formula, named after Israel Gelfand, gives the spectral radius as a limit of matrix norms.
Theorem
For any matrix norm $\|\cdot\|$, we have[3]

$$\rho(A) = \lim_{k \to \infty} \left\|A^k\right\|^{\frac{1}{k}}.$$
Moreover, in the case of a consistent matrix norm, $\left\|A^k\right\|^{\frac{1}{k}}$ approaches $\rho(A)$ from above (indeed, in that case $\rho(A) \leq \left\|A^k\right\|^{\frac{1}{k}}$ for all $k$).
Proof
For any $\varepsilon > 0$, let us define the two following matrices:

$$A_{\pm} = \frac{1}{\rho(A) \pm \varepsilon} A.$$
Thus,

$$\rho\left(A_{\pm}\right) = \frac{\rho(A)}{\rho(A) \pm \varepsilon}, \qquad \rho(A_+) < 1 < \rho(A_-).$$
We start by applying the previous theorem on limits of power sequences to $A_+$:

$$\lim_{k \to \infty} A_+^k = 0.$$
This shows the existence of $N_+ \in \mathbb{N}$ such that, for all $k \geq N_+$,

$$\left\|A_+^k\right\| < 1.$$
Since $\left\|A_+^k\right\| = \left\|A^k\right\| / (\rho(A) + \varepsilon)^k$, this means that, for all $k \geq N_+$,

$$\left\|A^k\right\|^{\frac{1}{k}} < \rho(A) + \varepsilon.$$
Similarly, the theorem on power sequences implies that $\|A_-^k\|$ is not bounded and that there exists $N_- \in \mathbb{N}$ such that, for all $k \geq N_-$,

$$\left\|A_-^k\right\| > 1.$$
Therefore, for all $k \geq N_-$,

$$\left\|A^k\right\|^{\frac{1}{k}} > \rho(A) - \varepsilon.$$
Let $N = \max\{N_+, N_-\}$. Then,

$$\forall \varepsilon > 0 \quad \exists N \in \mathbb{N} \quad \forall k \geq N \quad \rho(A) - \varepsilon < \left\|A^k\right\|^{\frac{1}{k}} < \rho(A) + \varepsilon,$$
that is,

$$\lim_{k \to \infty} \left\|A^k\right\|^{\frac{1}{k}} = \rho(A).$$
This concludes the proof.
Corollary
Gelfand's formula yields a bound on the spectral radius of a product of commuting matrices: if $A_1, \ldots, A_n$ are matrices that all commute, then

$$\rho(A_1 \cdots A_n) \leq \rho(A_1) \cdots \rho(A_n).$$
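A numerical sketch of the corollary, assuming NumPy; the example matrices are hypothetical and commute because both are polynomials in the same matrix $M$:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [-2.0, 3.0]])   # eigenvalues 1 and 2
A1 = M @ M                    # eigenvalues 1 and 4
A2 = M + 2.0 * np.eye(2)      # eigenvalues 3 and 4; a polynomial in M, so it commutes with A1

rho = lambda X: max(abs(np.linalg.eigvals(X)))
assert np.allclose(A1 @ A2, A2 @ A1)       # the matrices commute
print(rho(A1 @ A2), rho(A1) * rho(A2))     # 16.0 16.0 (the bound holds, with equality here)
```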
Numerical example
Consider the matrix

$$A = \begin{bmatrix} 9 & -1 & 2 \\ -2 & 8 & 4 \\ 1 & 1 & 8 \end{bmatrix}$$

whose eigenvalues are $5, 10, 10$; by definition, $\rho(A) = 10$. In the following table, the values of $\|A^k\|^{\frac{1}{k}}$ for the four most used norms are listed versus several increasing values of $k$ (note that, due to the particular form of this matrix, $\|\cdot\|_1 = \|\cdot\|_\infty$):
| $k$ | $\|\cdot\|_1 = \|\cdot\|_\infty$ | $\|\cdot\|_F$ | $\|\cdot\|_2$ |
|---|---|---|---|
| 1 | 14 | 15.362291496 | 10.681145748 |
| 2 | 12.649110641 | 12.328294348 | 10.595665162 |
| 3 | 11.934831919 | 11.532450664 | 10.500980846 |
| 4 | 11.501633169 | 11.151002986 | 10.418165779 |
| 5 | 11.216043151 | 10.921242235 | 10.351918183 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 10 | 10.604944422 | 10.455910430 | 10.183690042 |
| 11 | 10.548677680 | 10.413702213 | 10.166990229 |
| 12 | 10.501921835 | 10.378620930 | 10.153031596 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 20 | 10.298254399 | 10.225504447 | 10.091577411 |
| 30 | 10.197860892 | 10.149776921 | 10.060958900 |
| 40 | 10.148031640 | 10.112123681 | 10.045684426 |
| 50 | 10.118251035 | 10.089598820 | 10.036530875 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 100 | 10.058951752 | 10.044699508 | 10.018248786 |
| 200 | 10.029432562 | 10.022324834 | 10.009120234 |
| 300 | 10.019612095 | 10.014877690 | 10.006079232 |
| 400 | 10.014705469 | 10.011156194 | 10.004559078 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 1000 | 10.005879594 | 10.004460985 | 10.001823382 |
| 2000 | 10.002939365 | 10.002230244 | 10.000911649 |
| 3000 | 10.001959481 | 10.001486774 | 10.000607757 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 10000 | 10.000587804 | 10.000446009 | 10.000182323 |
| 20000 | 10.000293898 | 10.000223002 | 10.000091161 |
| 30000 | 10.000195931 | 10.000148667 | 10.000060774 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 100000 | 10.000058779 | 10.000044600 | 10.000018232 |
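The smaller table entries can be reproduced with a few lines of NumPy (a sketch, not part of the original computation):

```python
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])

# Entries of A^k grow like 10^k, so k is kept moderate to avoid float64 overflow
# (the Frobenius norm squares the entries, so it overflows first, near k ~ 150).
for k in (1, 2, 5, 10, 50, 100):
    Ak = np.linalg.matrix_power(A, k)
    print(k,
          np.linalg.norm(Ak, 1) ** (1.0 / k),      # equals the infinity-norm column here
          np.linalg.norm(Ak, 'fro') ** (1.0 / k),
          np.linalg.norm(Ak, 2) ** (1.0 / k))
```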
Notes and references
Bibliography
Dunford, Nelson; Schwartz, Jacob (1963), Linear Operators II. Spectral Theory: Self Adjoint Operators in Hilbert Space, Interscience Publishers, Inc.
Lax, Peter D. (2002), Functional Analysis, Wiley-Interscience, ISBN 0-471-55604-1