We saw in Section [sec:2_6] that rotations about the origin and reflections in a line through the origin are linear operators on \(\mathbb{R}^2\). More generally, a transformation \(S : V \to V\) of an inner product space \(V\) is said to be distance preserving if \(\left\| S(\mathbf{v}) - S(\mathbf{w}) \right\| = \left\| \mathbf{v} - \mathbf{w} \right\|\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\); a distance-preserving linear operator is called an isometry.
Let \(V\) be an inner product space of dimension \(n\), and consider a distance-preserving transformation \(S : V \to V\). If \(S(\mathbf{0}) = \mathbf{0}\), then \(S\) is linear.
It is routine to verify that the composite of two distance-preserving transformations is again distance preserving. In particular the composite of a translation and an isometry is distance preserving. Surprisingly, the converse is true.
If \(V\) is a finite dimensional inner product space, then every distance-preserving transformation \(S : V \to V\) is the composite of a translation and an isometry.
Proof. If \(S : V \to V\) is distance preserving, write \(S(\mathbf{0}) = \mathbf{w}\) and define \(T : V \to V\) by \(T(\mathbf{v}) = S(\mathbf{v}) - \mathbf{w}\) for all \(\mathbf{v}\) in \(V\). Then \(\left\| T(\mathbf{v}) - T(\mathbf{v}_1)\right\| = \left\| \mathbf{v} - \mathbf{v}_1\right\|\) for all vectors \(\mathbf{v}\) and \(\mathbf{v}_1\) in \(V\) as the reader can verify; that is, \(T\) is distance preserving. Clearly, \(T(\mathbf{0}) = \mathbf{0}\), so it is an isometry by Lemma [lem:032019]. If \(S_{\mathbf{w}} : V \to V\) denotes translation by \(\mathbf{w}\), that is \(S_{\mathbf{w}}(\mathbf{v}) = \mathbf{v} + \mathbf{w}\), then since \[S(\mathbf{v}) = \mathbf{w} + T(\mathbf{v}) = (S_{\mathbf{w}} \circ T)(\mathbf{v}) \quad \mbox{for all } \mathbf{v} \mbox{ in } V \nonumber \] we have \(S = S_{\mathbf{w}} \circ T\), and the theorem is proved.

In Theorem [thm:032040], \(S = S_{\mathbf{w}} \circ T\) factors as the composite of an isometry \(T\) followed by a translation \(S_{\mathbf{w}}\). More is true: this factorization is unique in that \(\mathbf{w}\) and \(T\) are uniquely determined by \(S\); and \(\mathbf{w}_1 \in V\) exists such that \(S = T \circ S_{\mathbf{w}_1}\) is uniquely the composite of translation by \(\mathbf{w}_1\) followed by the same isometry \(T\) (Exercise [ex:10_4_12]). Theorem [thm:032040] focuses our attention on the isometries, and the next theorem shows that, while they preserve distance, they are characterized as those operators that preserve other properties.
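The factorization \(S = S_{\mathbf{w}} \circ T\) can be illustrated numerically. The following Python sketch (our own illustration; the map \(S\) and all helper names are hypothetical) builds a distance-preserving map of \(\mathbb{R}^2\) as a rotation followed by a translation, recovers \(\mathbf{w} = S(\mathbf{0})\), and checks that \(T(\mathbf{v}) = S(\mathbf{v}) - \mathbf{w}\) preserves distance and is additive on a sample:

```python
import math

# Hypothetical distance-preserving map S on R^2: a rotation through
# theta followed by translation by w = (3, -1).
w = (3.0, -1.0)
theta = 0.7

def S(v):
    c, s = math.cos(theta), math.sin(theta)
    return (c*v[0] - s*v[1] + w[0], s*v[0] + c*v[1] + w[1])

# Recover the translation part: w = S(0).
w_rec = S((0.0, 0.0))
assert all(abs(a - b) < 1e-12 for a, b in zip(w_rec, w))

# The isometry part T(v) = S(v) - w fixes 0 and preserves distance ...
def T(v):
    sv = S(v)
    return (sv[0] - w_rec[0], sv[1] - w_rec[1])

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

u, v = (1.0, 2.0), (-4.0, 0.5)
assert abs(dist(T(u), T(v)) - dist(u, v)) < 1e-12

# ... and is linear, as the lemma predicts: T(u + v) = T(u) + T(v).
uv = (u[0] + v[0], u[1] + v[1])
Tu, Tv = T(u), T(v)
assert all(abs(a - (b + c)) < 1e-12 for a, b, c in zip(T(uv), Tu, Tv))
```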
Let \(T : V \to V\) be a linear operator on a finite dimensional inner product space \(V\). The following conditions are equivalent:

1. \(T\) is an isometry. (\(T\) preserves distance)
2. \(\left\| T(\mathbf{v})\right\| = \left\| \mathbf{v}\right\|\) for all \(\mathbf{v}\) in \(V\). (\(T\) preserves norms)
3. \(\langle T(\mathbf{v}), T(\mathbf{w}) \rangle = \langle\mathbf{v}, \mathbf{w} \rangle\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\). (\(T\) preserves inner products)
4. If \(\{\mathbf{f}_1, \mathbf{f}_2, \dots, \mathbf{f}_n\}\) is an orthonormal basis of \(V\), then \(\{T(\mathbf{f}_1), T(\mathbf{f}_2), \dots, T(\mathbf{f}_n)\}\) is also an orthonormal basis. (\(T\) preserves orthonormal bases)
5. \(T\) carries some orthonormal basis to an orthonormal basis.
For (5) \(\Rightarrow\) (1), let \(\{\mathbf{f}_1, \dots, \mathbf{f}_n\}\) be an orthonormal basis of \(V\) such that \(\{T(\mathbf{f}_1), \dots, T(\mathbf{f}_n)\}\) is also orthonormal. Given \(\mathbf{v} = v_1\mathbf{f}_1 + \dots + v_n\mathbf{f}_n\) in \(V\), Pythagoras' theorem applied to both orthonormal sets gives
\[\left\| T(\mathbf{v}) \right\| ^2 = v_1^2 + \dots + v_n^2 = \left\| \mathbf{v} \right\| ^2 \nonumber \]
Hence \(\left\| T(\mathbf{v})\right\| = \left\|\mathbf{v}\right\|\) for all \(\mathbf{v}\), and (1) follows by replacing \(\mathbf{v}\) by \(\mathbf{v} - \mathbf{w}\).
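Conditions (1)-(3) of the theorem can be checked numerically for a rotation of \(\mathbb{R}^2\), a known isometry. A Python sketch (helper names are ours):

```python
import math

theta = 1.1
c, s = math.cos(theta), math.sin(theta)

def T(v):  # rotation of R^2 through theta: a known isometry
    return (c*v[0] - s*v[1], s*v[0] + c*v[1])

def norm(v): return math.hypot(v[0], v[1])
def dot(u, v): return u[0]*v[0] + u[1]*v[1]

v, w = (2.0, -1.0), (0.5, 3.0)

# (2) T preserves norms
assert abs(norm(T(v)) - norm(v)) < 1e-12
# (3) T preserves inner products
assert abs(dot(T(v), T(w)) - dot(v, w)) < 1e-12
# (1) hence T preserves distance: ||T(v) - T(w)|| = ||v - w||
d = norm((v[0] - w[0], v[1] - w[1]))
Tv, Tw = T(v), T(w)
assert abs(norm((Tv[0] - Tw[0], Tv[1] - Tw[1])) - d) < 1e-12
```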
Before giving examples, we note some consequences of Theorem [thm:032053].
Let \(V\) be a finite dimensional inner product space.

1. Every isometry \(T : V \to V\) is an isomorphism.
2. (a) \(1_V : V \to V\) is an isometry.
   (b) The composite of two isometries of \(V\) is an isometry.
   (c) The inverse of an isometry of \(V\) is an isometry.

Proof. (1) is by (4) of Theorem [thm:032053] and Theorem 10.3.1. (2a) is clear, and (2b) is left to the reader. If \(T : V \to V\) is an isometry and \(\{\mathbf{f}_1, \dots, \mathbf{f}_n\}\) is an orthonormal basis of \(V\), then (2c) follows because \(T^{-1}\) carries the orthonormal basis \(\{T(\mathbf{f}_1), \dots, T(\mathbf{f}_n)\}\) back to \(\{\mathbf{f}_1, \dots, \mathbf{f}_n\}\).
The conditions in part (2) of the corollary assert that the set of isometries of a finite dimensional inner product space forms an algebraic system called a group. The theory of groups is well developed, and groups of operators are important in geometry. In fact, geometry itself can be fruitfully viewed as the study of those properties of a vector space that are preserved by a group of invertible linear operators.
Rotations of \(\mathbb{R}^2\) about the origin are isometries, as are reflections in lines through the origin: they clearly preserve distance and so are linear by Lemma [lem:032019]. Similarly, rotations about lines through the origin and reflections in planes through the origin are isometries of \(\mathbb{R}^3\).
Let \(T : \mathbf{M}_{nn} \to \mathbf{M}_{nn}\) be the transposition operator: \(T(A) = A^T\). Then \(T\) is an isometry if the inner product is \(\langle A, B \rangle = \text{tr}(AB^T) = \displaystyle \sum_{i,j} a_{ij}b_{ij}\). In fact, \(T\) permutes the basis consisting of all matrices with one entry \(1\) and the other entries \(0\).
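This example can be checked directly: the trace inner product is the entrywise dot product of the matrices, and transposing both matrices merely permutes the terms of that sum. A small Python sketch for the \(2 \times 2\) case (helper names are ours):

```python
# <A, B> = tr(A B^T) = sum of entrywise products, here on 2x2 matrices
def inner(A, B):
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, -1], [0, 2]]

# T(A) = A^T preserves the trace inner product
assert inner(transpose(A), transpose(B)) == inner(A, B)
```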
The proof of the next result requires the fact (see Theorem [thm:032053]) that, if \(B\) is an orthonormal basis, then \(\langle\mathbf{v}, \mathbf{w} \rangle = C_B(\mathbf{v}) \bullet C_B(\mathbf{w})\) for all vectors \(\mathbf{v}\) and \(\mathbf{w}\).
Let \(T : V \to V\) be an operator where \(V\) is a finite dimensional inner product space. The following conditions are equivalent.

1. \(T\) is an isometry.
2. \(M_B(T)\) is an orthogonal matrix for every orthonormal basis \(B\).
3. \(M_B(T)\) is an orthogonal matrix for some orthonormal basis \(B\).
Proof.
(1) \(\Rightarrow\) (2). Let \(B = \{\mathbf{f}_1, \dots, \mathbf{f}_n\}\) be an orthonormal basis. Then the \(j\)th column of \(M_B(T)\) is \(C_B[T(\mathbf{f}_j)]\), and we have
\[C_B[T(\mathbf{f}_j)] \bullet C_B[T(\mathbf{f}_k)] = \langle T(\mathbf{f}_j), T(\mathbf{f}_k) \rangle = \langle \mathbf{f}_j, \mathbf{f}_k \rangle \nonumber \]
using (1). Hence the columns of \(M_B(T)\) are orthonormal in \(\mathbb{R}^n\), which proves (2).
(2) \(\Rightarrow\) (3). This is clear.
(3) \(\Rightarrow\) (1). Let \(B = \{\mathbf{f}_1, \dots, \mathbf{f}_n\}\) be as in (3). Then, as before,
\[\langle T(\mathbf{f}_j), T(\mathbf{f}_k) \rangle = C_B[T(\mathbf{f}_j)] \bullet C_B[T(\mathbf{f}_k)] \nonumber \]
so \(\{T(\mathbf{f}_1), \dots, T(\mathbf{f}_n)\}\) is orthonormal because the columns of \(M_B(T)\) are orthonormal. Hence (1) follows by (5) of Theorem [thm:032053].
It is important that \(B\) is orthonormal in Theorem [thm:032147]. For example, \(T : V \to V\) given by \(T(\mathbf{v}) = 2\mathbf{v}\) preserves orthogonal sets but is not an isometry, as is easily checked.
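Condition (3) of the theorem gives a practical test: check that the columns of \(M_B(T)\) are orthonormal. A Python sketch (the helper `cols_orthonormal` is ours), including the operator \(T(\mathbf{v}) = 2\mathbf{v}\) just mentioned, which fails the test:

```python
import math

def cols_orthonormal(M, tol=1e-12):
    # check that the columns of a square matrix are orthonormal in R^n
    n = len(M)
    for j in range(n):
        for k in range(n):
            dot = sum(M[i][j] * M[i][k] for i in range(n))
            if abs(dot - (1.0 if j == k else 0.0)) > tol:
                return False
    return True

theta = 0.4
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
assert cols_orthonormal(R)          # rotation: an isometry

D = [[2.0, 0.0], [0.0, 2.0]]        # T(v) = 2v preserves orthogonality ...
assert not cols_orthonormal(D)      # ... but is not an isometry
```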
If \(P\) is an orthogonal square matrix, then \(P^{-1} = P^T\). Taking determinants yields \((\det P)^2 = 1\), so \(\det P = \pm 1\). Hence:
If \(T : V \to V\) is an isometry where \(V\) is a finite dimensional inner product space, then \(\det T = \pm 1\).
Rotations and reflections that fix the origin are isometries in \(\mathbb{R}^2\) and \(\mathbb{R}^3\) (Example [exa:032132]); we are going to show that these isometries (and compositions of them in \(\mathbb{R}^3\)) are the only possibilities. In fact, this will follow from a general structure theorem for isometries. Surprisingly enough, much of the work involves the two-dimensional case.
Let \(T : V \to V\) be an isometry on the two-dimensional inner product space \(V\). Then there are two possibilities.
Either

1. There is an orthonormal basis \(B\) of \(V\) such that
\[M_B(T) = \left[ \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right], \quad 0 \leq \theta < 2\pi \nonumber \]

or

2. There is an orthonormal basis \(B\) of \(V\) such that
\[M_B(T) = \left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right] \nonumber \]
Furthermore, type (1) occurs if and only if \(\det T = 1\), and type (2) occurs if and only if \(\det T = -1\).
Proof. The final statement follows from the rest because \(\det T = \det [M_B(T)]\) for any basis \(B\). Let \(B_0 = \{\mathbf{e}_1, \mathbf{e}_2\}\) be any ordered orthonormal basis of \(V\) and write
\[A = M_{B_0}(T) = \left[ \begin{array}{rr} a & b \\ c & d \end{array} \right]; \quad \mbox{that is,} \quad \begin{array}{l} T(\mathbf{e}_1) = a \mathbf{e}_1 + c \mathbf{e}_2 \\ T(\mathbf{e}_2) = b \mathbf{e}_1 + d \mathbf{e}_2 \end{array} \nonumber \]
Then \(A\) is orthogonal by Theorem [thm:032147], so its columns (and rows) are orthonormal. Hence
\[a^2 + c^2 = 1 = b^2 + d^2 \nonumber \]
so \((a, c)\) and \((d, b)\) lie on the unit circle. Thus angles \(\theta\) and \(\varphi\) exist such that
\[a = \cos \theta, \quad c = \sin \theta \quad \mbox{and} \quad d = \cos \varphi, \quad b = \sin \varphi \nonumber \]
Then \(\sin(\theta + \varphi) = cd + ab = 0\) because the columns of \(A\) are orthogonal, so \(\theta + \varphi = k\pi\) for some integer \(k\). This gives \(d = \cos(k\pi - \theta) = (-1)^k \cos \theta\) and \(b = \sin(k\pi - \theta) = (-1)^{k+1} \sin \theta\). Finally
\[A = \left[ \begin{array}{cc} \cos \theta & (-1)^{k+1} \sin \theta \\ \sin \theta & (-1)^k \cos \theta \end{array} \right] \nonumber \]
If \(k\) is even we are in type (1) with \(B = B_0\), so assume \(k\) is odd. Then \(A = \left[ \begin{array}{rr} a & c \\ c & -a \end{array} \right]\). If \(a = -1\) and \(c = 0\), we are in type (2) with \(B = \{\mathbf{e}_2, \mathbf{e}_1\}\). Otherwise \(A\) has eigenvalues \(\lambda_1 = 1\) and \(\lambda_2 = -1\) with corresponding eigenvectors \(\mathbf{x}_1 = \left[ \begin{array}{c} 1 + a \\ c \end{array} \right]\) and \(\mathbf{x}_2 = \left[ \begin{array}{c} -c \\ 1 + a \end{array} \right]\) as the reader can verify. Write
\[\mathbf{f}_1 = (1 + a)\mathbf{e}_1 + c\mathbf{e}_2 \quad \mbox{and} \quad \mathbf{f}_2 = -c\mathbf{e}_1 + (1 + a)\mathbf{e}_2 \nonumber \]
Then \(\mathbf{f}_1\) and \(\mathbf{f}_2\) are orthogonal (verify) and \(C_{B_0}(\mathbf{f}_i) = \mathbf{x}_i\) for each \(i\). Moreover
\[C_{B_0} [T(\mathbf{f}_i)] = AC_{B_0}(\mathbf{f}_i) = A \mathbf{x}_i = \lambda_i \mathbf{x}_i = \lambda_i C_{B_0}(\mathbf{f}_i) = C_{B_0}(\lambda_i \mathbf{f}_i) \nonumber \]
so \(T(\mathbf{f}_i) = \lambda_i\mathbf{f}_i\) for each \(i\). Hence \(M_B(T) = \left[ \begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \end{array} \right] = \left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right]\) and we are in type (2) with \(B = \left\{ \frac{1}{\left\| \mathbf{f}_1 \right\|} \mathbf{f}_1, \frac{1}{\left\| \mathbf{f}_2 \right\|} \mathbf{f}_2 \right\}\).
An operator \(T : \mathbb{R}^2 \to \mathbb{R}^2\) is an isometry if and only if \(T\) is a rotation or a reflection.
In fact, if \(E\) is the standard basis of \(\mathbb{R}^2\), then the counterclockwise rotation \(R_{\theta}\) about the origin through an angle \(\theta\) has matrix
\[M_E(R_\theta) = \left[ \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right] \nonumber \]
(see Theorem [thm:006021]). On the other hand, if \(S : \mathbb{R}^2 \to \mathbb{R}^2\) is the reflection in a line through the origin (called the fixed line of the reflection), let \(\mathbf{f}_1\) be a unit vector pointing along the fixed line and let \(\mathbf{f}_2\) be a unit vector perpendicular to the fixed line. Then \(B = \{\mathbf{f}_1, \mathbf{f}_2\}\) is an orthonormal basis, \(S(\mathbf{f}_1) = \mathbf{f}_1\) and \(S(\mathbf{f}_2) = -\mathbf{f}_2\), so
\[M_B(S) = \left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right] \nonumber \]
Thus \(S\) is of type (2). Note that, in this case, \(1\) is an eigenvalue of \(S\), and any eigenvector corresponding to \(1\) is a direction vector for the fixed line.
For example, consider the matrices
\[\mbox{(a) } A = \frac{1}{2} \left[ \begin{array}{rr} 1 & \sqrt{3} \\ -\sqrt{3} & 1 \end{array} \right] \quad \mbox{and} \quad \mbox{(b) } A = \frac{1}{5} \left[ \begin{array}{rr} -3 & 4 \\ 4 & 3 \end{array} \right] \nonumber \]
Each is orthogonal, so each induces an isometry of \(\mathbb{R}^2\); by the theorem, (a) is a rotation because \(\det A = 1\), while (b) is a reflection because \(\det A = -1\).
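The rotation-or-reflection test is a determinant computation, and for a rotation the angle can be read off the first column. A Python sketch checking the two matrices above (helper names are ours):

```python
import math

def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

s3 = math.sqrt(3)
A1 = [[0.5, 0.5*s3], [-0.5*s3, 0.5]]   # (1/2)[[1, √3], [-√3, 1]]
A2 = [[-0.6, 0.8], [0.8, 0.6]]         # (1/5)[[-3, 4], [4, 3]]

assert abs(det2(A1) - 1) < 1e-12   # det = 1: a rotation
assert abs(det2(A2) + 1) < 1e-12   # det = -1: a reflection

# For the rotation, cos(theta) = A1[0][0] and sin(theta) = A1[1][0]
theta = math.atan2(A1[1][0], A1[0][0])
assert abs(theta + math.pi / 3) < 1e-12   # rotation through -π/3
```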
We now give a structure theorem for isometries. The proof requires three preliminary results, each of interest in its own right.
Let \(T : V \to V\) be an isometry of a finite dimensional inner product space \(V\). If \(U\) is a \(T\)-invariant subspace of \(V\), then \(U^\perp\) is also \(T\)-invariant.
Proof. Let \(\mathbf{v}\) lie in \(U^\perp\). We are to prove that \(T(\mathbf{v})\) is also in \(U^\perp\); that is, \(\langle T(\mathbf{v}), \mathbf{u} \rangle = 0\) for all \(\mathbf{u}\) in \(U\). At this point, observe that the restriction of \(T\) to \(U\) is an isometry \(U \to U\) and so is an isomorphism by the corollary to Theorem [thm:032053]. In particular, each \(\mathbf{u}\) in \(U\) can be written in the form \(\mathbf{u} = T(\mathbf{u}_1)\) for some \(\mathbf{u}_1\) in \(U\), so
\[\langle T(\mathbf{v}), \mathbf{u} \rangle = \langle T(\mathbf{v}), T(\mathbf{u}_1) \rangle = \langle \mathbf{v}, \mathbf{u}_1 \rangle = 0 \nonumber \]
because \(\mathbf{v}\) is in \(U^\perp\). This is what we wanted.
To employ Lemma [lem:032292] above to analyze an isometry \(T : V \to V\) when \(\dim V = n\), it is necessary to show that a \(T\)-invariant subspace \(U\) exists such that \(U \neq 0\) and \(U \neq V\). We will show, in fact, that such a subspace \(U\) can always be found of dimension \(1\) or \(2\). If \(T\) has a real eigenvalue \(\lambda\) then \(\mathbb{R}\mathbf{v}\) is \(T\)-invariant where \(\mathbf{v}\) is any \(\lambda\)-eigenvector. But, in case (1) of Theorem [thm:032199], the eigenvalues of \(T\) are \(e^{i\theta}\) and \(e^{-i\theta}\) (the reader should check this), and these are nonreal if \(\theta \neq 0\) and \(\theta \neq \pi\). It turns out that every complex eigenvalue \(\lambda\) of \(T\) has absolute value \(1\) (Lemma [lem:032309] below); and that \(V\) has a \(T\)-invariant subspace of dimension \(2\) if \(\lambda\) is not real (Lemma [lem:032323]).
Let \(T : V \to V\) be an isometry of the finite dimensional inner product space \(V\). If \(\lambda\) is a complex eigenvalue of \(T\), then \(|\lambda| = 1\).
Proof. Choose an orthonormal basis \(B\) of \(V\), and let \(A = M_B(T)\). Then \(A\) is a real orthogonal matrix so, using the standard inner product \(\langle \mathbf{z}, \mathbf{w} \rangle = \mathbf{z}^T \overline{\mathbf{w}}\) in \(\mathbb{C}^n\), we get
\[\left\| A\mathbf{v} \right\| ^2 = (A\mathbf{v})^T(\overline{A\mathbf{v}}) = \mathbf{v}^T A^T A \overline{\mathbf{v}} = \mathbf{v}^T I \overline{\mathbf{v}} = \left\| \mathbf{v} \right\| ^2 \nonumber \]
for all \(\mathbf{v}\) in \(\mathbb{C}^n\) (because \(A\) is real and orthogonal, \(\overline{A} = A\) and \(A^TA = I\)). But \(A\mathbf{v} = \lambda\mathbf{v}\) for some \(\mathbf{v} \neq \mathbf{0}\), whence \(\left\| \mathbf{v}\right\|^2 = \left\| A\mathbf{v}\right\|^2 = \left\| \lambda\mathbf{v}\right\|^2 = |\lambda|^2\left\|\mathbf{v}\right\|^2\). This gives \(|\lambda| = 1\), as required.
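For a concrete instance of the lemma, the eigenvalues of the \(2 \times 2\) rotation matrix \(R(\theta)\) are \(e^{\pm i\theta}\), both of absolute value \(1\). A Python sketch computing them from the characteristic polynomial via the quadratic formula (names are ours):

```python
import cmath
import math

theta = 2.0
a, b = math.cos(theta), -math.sin(theta)
c, d = math.sin(theta),  math.cos(theta)   # A = R(theta), orthogonal

# characteristic polynomial: x^2 - (tr A) x + det A
tr, det = a + d, a*d - b*c
disc = cmath.sqrt(tr*tr - 4*det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# both eigenvalues are e^{±iθ}, hence have absolute value 1
assert abs(abs(lam1) - 1) < 1e-12
assert abs(abs(lam2) - 1) < 1e-12
assert (abs(lam1 - cmath.exp(1j*theta)) < 1e-12
        or abs(lam1 - cmath.exp(-1j*theta)) < 1e-12)
```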
Let \(T : V \to V\) be an isometry of the \(n\)-dimensional inner product space \(V\). If \(T\) has a nonreal eigenvalue, then \(V\) has a two-dimensional \(T\)-invariant subspace.
Proof. Let \(B\) be an orthonormal basis of \(V\), let \(A = M_B(T)\), and (using Lemma [lem:032309]) let \(\lambda = e^{i\alpha}\) be a nonreal eigenvalue of \(A\), say \(A\mathbf{w} = \lambda\mathbf{w}\) where \(\mathbf{w} \neq \mathbf{0}\) in \(\mathbb{C}^n\). Because \(A\) is real, complex conjugation gives \(A\overline{\mathbf{w}} = \overline{\lambda} \overline{\mathbf{w}}\), so \(\overline{\lambda}\) is also an eigenvalue. Moreover \(\lambda \neq \overline{\lambda}\) (\(\lambda\) is nonreal), so \(\{\mathbf{w}, \overline{\mathbf{w}}\}\) is linearly independent in \(\mathbb{C}^n\) (the argument in the proof of Theorem [thm:016090] works). Now define
\[\mathbf{x}_1 = \mathbf{w} + \overline{\mathbf{w}} \quad \mbox{and} \quad \mathbf{x}_2 = i(\mathbf{w} - \overline{\mathbf{w}}) \nonumber \]
Then \(\mathbf{x}_1\) and \(\mathbf{x}_2\) lie in \(\mathbb{R}^n\), and \(\{\mathbf{x}_1, \mathbf{x}_2\}\) is linearly independent over \(\mathbb{R}\) because \(\{\mathbf{w}, \overline{\mathbf{w}}\}\) is linearly independent over \(\mathbb{C}\). Moreover
\[\mathbf{w} = \frac{1}{2} (\mathbf{x}_1 - i \mathbf{x}_2) \quad \mbox{and} \quad \overline{\mathbf{w}} = \frac{1}{2} (\mathbf{x}_1 + i\mathbf{x}_2) \nonumber \]
Now \(\lambda + \overline{\lambda} = 2 \cos \alpha\) and \(\lambda - \overline{\lambda} = 2i \sin \alpha\), and a routine computation gives
\[\begin{aligned} A \mathbf{x}_1 &= \mathbf{x}_1 \cos \alpha + \mathbf{x}_2 \sin \alpha \\ A \mathbf{x}_2 &= -\mathbf{x}_1 \sin \alpha + \mathbf{x}_2 \cos \alpha\end{aligned} \nonumber \]
Finally, let \(\mathbf{f}_1\) and \(\mathbf{f}_2\) in \(V\) be such that \(\mathbf{x}_1 = C_B(\mathbf{f}_1)\) and \(\mathbf{x}_2 = C_B(\mathbf{f}_2)\). Then
\[C_B[T(\mathbf{f}_1)] = AC_B(\mathbf{f}_1) = A\mathbf{x}_1 = C_B(\mathbf{f}_1 \cos \alpha + \mathbf{f}_2 \sin \alpha) \nonumber \]
using Theorem [thm:027955]. Because \(C_B\) is one-to-one, this gives the first of the following equations (the other is similar):
\[\begin{aligned} T(\mathbf{f}_1) &= \mathbf{f}_1 \cos \alpha + \mathbf{f}_2 \sin \alpha \\ T(\mathbf{f}_2) &= -\mathbf{f}_1 \sin \alpha + \mathbf{f}_2 \cos \alpha\end{aligned} \nonumber \]
Thus \(U = span\{\mathbf{f}_1, \mathbf{f}_2\}\) is \(T\)-invariant and two-dimensional.
We can now prove the structure theorem for isometries.
Let \(T : V \to V\) be an isometry of the \(n\)-dimensional inner product space \(V\). Given an angle \(\theta\), write \(R(\theta) = \left[ \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right]\). Then there exists an orthonormal basis \(B\) of \(V\) such that \(M_B(T)\) has one of the following block diagonal forms, classified for convenience by whether \(n\) is odd or even:
\[n = 2k+1: \quad \left[ \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & R(\theta_1) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R(\theta_k) \end{array} \right] \quad \mbox{or} \quad \left[ \begin{array}{cccc} -1 & 0 & \cdots & 0 \\ 0 & R(\theta_1) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R(\theta_k) \end{array} \right] \nonumber \]
\[n = 2k: \quad \left[ \begin{array}{cccc} R(\theta_1) & 0 & \cdots & 0 \\ 0 & R(\theta_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R(\theta_k) \end{array} \right] \quad \mbox{or} \quad \left[ \begin{array}{ccccc} -1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & R(\theta_1) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & R(\theta_{k-1}) \end{array} \right] \nonumber \]
Proof. We show first, by induction on \(n\), that an orthonormal basis \(B\) of \(V\) can be found such that \(M_B(T)\) is a block diagonal matrix of the following form:
\[M_B(T) = \left[ \begin{array}{ccccc} I_r & 0 & 0 & \cdots & 0 \\ 0 & -I_s & 0 & \cdots & 0 \\ 0 & 0 & R(\theta_1) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & R(\theta_t) \end{array} \right] \nonumber \]
where the identity matrix \(I_r\), the matrix \(-I_s\), or the matrices \(R(\theta_i)\) may be missing. If \(n = 1\) and \(V = \mathbb{R}\mathbf{v}\), this holds because \(T(\mathbf{v}) = \lambda\mathbf{v}\) and \(\lambda = \pm 1\) by Lemma [lem:032309]. If \(n = 2\), this follows from Theorem [thm:032199]. If \(n \geq 3\), either \(T\) has a real eigenvalue and therefore has a one-dimensional \(T\)-invariant subspace \(U = \mathbb{R}\mathbf{v}\) for any eigenvector \(\mathbf{v}\), or \(T\) has no real eigenvalue and therefore has a two-dimensional \(T\)-invariant subspace \(U\) by Lemma [lem:032323]. In either case \(U^\perp\) is \(T\)-invariant (Lemma [lem:032292]) and \(\dim U^\perp = n - \dim U < n\). Hence, by induction, let \(B_1\) and \(B_2\) be orthonormal bases of \(U\) and \(U^\perp\) such that \(M_{B_1}(T)\) and \(M_{B_2}(T)\) have the form given. Then \(B = B_1 \cup B_2\) is an orthonormal basis of \(V\), and \(M_B(T)\) has the desired form with a suitable ordering of the vectors in \(B\).
Now observe that \(R(0) = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]\) and \(R(\pi) = \left[ \begin{array}{rr} -1 & 0 \\ 0 & -1 \end{array} \right]\). It follows that an even number of \(1\)s or \(-1\)s can be written as \(R(\theta_i)\)-blocks. Hence, with a suitable reordering of the basis \(B\), the theorem follows.
As in the dimension \(2\) situation, these possibilities can be given a geometric interpretation when \(V = \mathbb{R}^3\) is taken as euclidean space. As before, this entails looking carefully at reflections and rotations in \(\mathbb{R}^3\). If \(Q : \mathbb{R}^3 \to \mathbb{R}^3\) is any reflection in a plane through the origin (called the fixed plane of the reflection), take \(\{\mathbf{f}_2, \mathbf{f}_3\}\) to be any orthonormal basis of the fixed plane and take \(\mathbf{f}_1\) to be a unit vector perpendicular to the fixed plane. Then \(Q(\mathbf{f}_1) = -\mathbf{f}_1\), whereas \(Q(\mathbf{f}_2) = \mathbf{f}_2\) and \(Q(\mathbf{f}_3) = \mathbf{f}_3\). Hence \(B = \{\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3\}\) is an orthonormal basis such that
\[M_B(Q) = \left[ \begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \nonumber \]
Similarly, suppose that \(R : \mathbb{R}^3 \to \mathbb{R}^3\) is any rotation about a line through the origin (called the axis of the rotation), and let \(\mathbf{f}_1\) be a unit vector pointing along the axis, so \(R(\mathbf{f}_1) = \mathbf{f}_1\). Now the plane through the origin perpendicular to the axis is an \(R\)-invariant subspace of \(\mathbb{R}^3\) of dimension \(2\), and the restriction of \(R\) to this plane is a rotation. Hence, by Theorem [thm:032199], there is an orthonormal basis \(B_0 = \{\mathbf{f}_2, \mathbf{f}_3\}\) of this plane such that \(M_{B_0}(R) = \left[ \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right]\). But then \(B = \{\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3\}\) is an orthonormal basis of \(\mathbb{R}^3\) such that the matrix of \(R\) is
\[M_B(R) = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right] \nonumber \]
However, Theorem [thm:032367] shows that there are isometries \(T\) of \(\mathbb{R}^3\) of a third type: those with a matrix of the form
\[M_B(T) = \left[ \begin{array}{ccc} -1 & 0 & 0 \\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right] \nonumber \]
If \(B = \{\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3\}\), let \(Q\) be the reflection in the plane spanned by \(\mathbf{f}_2\) and \(\mathbf{f}_3\), and let \(R\) be the rotation corresponding to \(\theta\) about the line spanned by \(\mathbf{f}_1\). Then \(M_B(Q)\) and \(M_B(R)\) are as above, and \(M_B(Q) M_B(R) = M_B(T)\) as the reader can verify. This means that \(M_B(QR) = M_B(T)\) by Theorem [thm:028640], and this in turn implies that \(QR = T\) because \(M_B\) is one-to-one (see Exercise [ex:9_1_26]). A similar argument shows that \(RQ = T\), and we have Theorem [thm:032447].
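The claim \(M_B(Q)M_B(R) = M_B(T)\), left to the reader above, is a short block-matrix multiplication; the same computation shows \(M_B(R)M_B(Q) = M_B(T)\). A Python sketch (helper names are ours):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

theta = 0.9
c, s = math.cos(theta), math.sin(theta)
MQ = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]   # reflection in span{f2, f3}
MR = [[1, 0, 0], [0, c, -s], [0, s, c]]   # rotation about span{f1}
MT = [[-1, 0, 0], [0, c, -s], [0, s, c]]  # the third type of isometry

QR = matmul(MQ, MR)
RQ = matmul(MR, MQ)
# QR = T and RQ = T: the blocks act on complementary subspaces
assert all(abs(QR[i][j] - MT[i][j]) < 1e-12 for i in range(3) for j in range(3))
assert all(abs(RQ[i][j] - MT[i][j]) < 1e-12 for i in range(3) for j in range(3))
```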
If \(T : \mathbb{R}^3 \to \mathbb{R}^3\) is an isometry, there are three possibilities.

a. \(T\) is a rotation, and \(M_E(T)\) is similar to \(\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right]\).
b. \(T\) is a reflection, and \(M_E(T)\) is similar to \(\left[ \begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]\).
c. \(T\) is a composite of a rotation and a reflection, and \(M_E(T)\) is similar to \(\left[ \begin{array}{ccc} -1 & 0 & 0 \\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \end{array} \right]\).

Hence \(T\) is a rotation if and only if \(\det T = 1\).

Proof. It remains only to verify the final observation that \(T\) is a rotation if and only if \(\det T = 1\). But clearly \(\det T = 1\) in part (a), while \(\det T = -1\) in parts (b) and (c).
A useful way of analyzing a given isometry \(T : \mathbb{R}^3 \to \mathbb{R}^3\) comes from computing the eigenvalues of \(T\). Because the characteristic polynomial of \(T\) has degree \(3\), it must have a real root. Hence, there must be at least one real eigenvalue, and the only possible real eigenvalues are \(\pm 1\) by Lemma [lem:032309]. Thus Table [tab:10_4_1] includes all possibilities.
Table [tab:10_4_1]: eigenvalues of \(T\) and the corresponding action of \(T\).

1. Eigenvalues \(1\), no other real eigenvalues: Rotation about the line \(\mathbb{R}\mathbf{f}\) where \(\mathbf{f}\) is an eigenvector corresponding to \(1\). [Case (a) of Theorem [thm:032447].]
2. Eigenvalues \(-1\), no other real eigenvalues: Rotation about the line \(\mathbb{R}\mathbf{f}\) followed by reflection in the plane \((\mathbb{R}\mathbf{f})^\perp\) where \(\mathbf{f}\) is an eigenvector corresponding to \(-1\). [Case (c) of Theorem [thm:032447].]
3. Eigenvalues \(-1\), \(1\), \(1\): Reflection in the plane \((\mathbb{R}\mathbf{f})^\perp\) where \(\mathbf{f}\) is an eigenvector corresponding to \(-1\). [Case (b) of Theorem [thm:032447].]
4. Eigenvalues \(1\), \(-1\), \(-1\): This is as in (1) with a rotation of \(\pi\).
5. Eigenvalues \(-1\), \(-1\), \(-1\): Here \(T(\mathbf{x}) = -\mathbf{x}\) for all \(\mathbf{x}\). This is (2) with a rotation of \(\pi\).
6. Eigenvalues \(1\), \(1\), \(1\): Here \(T\) is the identity isometry.
Analyze the isometry \(T : \mathbb{R}^3 \to \mathbb{R}^3\) given by \(T \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = \left[ \begin{array}{c} y \\ z \\ -x \end{array} \right]\).
If \(B_0\) is the standard basis of \(\mathbb{R}^3\), then \(M_{B_0}(T) = \left[ \begin{array}{rrr} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & 0 \end{array} \right]\), so \(c_T(x) = x^3 + 1 = (x + 1)(x^2 - x + 1)\). This is (2) in Table [tab:10_4_1]. Write:
\[\mathbf{f}_1 = \frac{1}{\sqrt{3}} \left[ \begin{array}{r} 1 \\ -1 \\ 1 \end{array} \right] \quad \mathbf{f}_2 = \frac{1}{\sqrt{6}} \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \quad \mathbf{f}_3 = \frac{1}{\sqrt{2}} \left[ \begin{array}{r} 1 \\ 0 \\ -1 \end{array} \right] \nonumber \]
Here \(\mathbf{f}_1\) is a unit eigenvector corresponding to \(\lambda_1 = -1\), so \(T\) is a rotation (through an angle \(\theta\)) about the line \(L = \mathbb{R}\mathbf{f}_1\), followed by reflection in the plane \(U\) through the origin perpendicular to \(\mathbf{f}_1\) (with equation \(x - y + z = 0\)). Then, \(\{\mathbf{f}_2, \mathbf{f}_3\}\) is chosen as an orthonormal basis of \(U\), so \(B = \{\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3\}\) is an orthonormal basis of \(\mathbb{R}^3\) and
\[M_B(T) = \left[ \begin{array}{rrr} -1 & 0 & 0 \\ 0 & \frac{1}{2} & -\frac{\sqrt{3}}{2} \\ 0 & \frac{\sqrt{3}}{2} & \frac{1}{2} \end{array} \right] \nonumber \]
Hence \(\theta\) is given by \(\cos \theta = \frac{1}{2}\), \(\sin \theta = \frac{\sqrt{3}}{2}\), so \(\theta = \frac{\pi}{3}\).
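The change-of-basis computation in this example can be checked numerically: with \(P\) the orthogonal matrix whose columns are \(\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3\), the matrix of \(T\) in the basis \(B\) is \(P^T A P\). A Python sketch (helper names are ours):

```python
import math

A = [[0, 1, 0], [0, 0, 1], [-1, 0, 0]]   # T(x, y, z) = (y, z, -x)

r3, r6, r2 = math.sqrt(3), math.sqrt(6), math.sqrt(2)
f1 = [1/r3, -1/r3, 1/r3]                 # unit eigenvector for λ = -1
f2 = [1/r6, 2/r6, 1/r6]
f3 = [1/r2, 0, -1/r2]
P = [[f1[i], f2[i], f3[i]] for i in range(3)]   # columns f1, f2, f3

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pt = [[P[j][i] for j in range(3)] for i in range(3)]
M = matmul(Pt, matmul(A, P))   # M_B(T) = P^T A P since P is orthogonal

c, s = math.cos(math.pi/3), math.sin(math.pi/3)
expected = [[-1, 0, 0], [0, c, -s], [0, s, c]]   # diag(-1, R(π/3))
assert all(abs(M[i][j] - expected[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```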
Let \(V\) be an \(n\)-dimensional inner product space. A subspace of \(V\) of dimension \(n - 1\) is called a hyperplane in \(V\). Thus the hyperplanes in \(\mathbb{R}^3\) and \(\mathbb{R}^2\) are, respectively, the planes and lines through the origin. Let \(Q : V \to V\) be an isometry with matrix
\[M_B(Q) = \left[ \begin{array}{cc} -1 & 0 \\ 0 & I_{n-1} \end{array} \right] \nonumber \]
for some orthonormal basis \(B = \{\mathbf{f}_1, \mathbf{f}_2, \dots, \mathbf{f}_n\}\). Then \(Q(\mathbf{f}_1) = -\mathbf{f}_1\) whereas \(Q(\mathbf{u}) = \mathbf{u}\) for each \(\mathbf{u}\) in \(U = span\{\mathbf{f}_2, \dots, \mathbf{f}_n\}\). Hence \(U\) is called the fixed hyperplane of \(Q\), and \(Q\) is called reflection in \(U\). Note that each hyperplane in \(V\) is the fixed hyperplane of a (unique) reflection of \(V\). Clearly, reflections in \(\mathbb{R}^2\) and \(\mathbb{R}^3\) are reflections in this more general sense.
Continuing the analogy with \(\mathbb{R}^2\) and \(\mathbb{R}^3\), an isometry \(T : V \to V\) is called a rotation if there exists an orthonormal basis \(B = \{\mathbf{f}_1, \dots, \mathbf{f}_n\}\) such that
\[M_B(T) = \left[ \begin{array}{ccc} I_r & 0 & 0 \\ 0 & R(\theta) & 0 \\ 0 & 0 & I_s \end{array} \right] \nonumber \]
in block form, where \(R(\theta) = \left[ \begin{array}{rr} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right]\), and where either \(I_r\) or \(I_s\) (or both) may be missing. If \(R(\theta)\) occupies columns \(i\) and \(i + 1\) of \(M_B(T)\), and if \(W = span\{\mathbf{f}_i, \mathbf{f}_{i+1}\}\), then \(W\) is \(T\)-invariant and the matrix of \(T : W \to W\) with respect to \(\{\mathbf{f}_i, \mathbf{f}_{i+1}\}\) is \(R(\theta)\). Clearly, if \(W\) is viewed as a copy of \(\mathbb{R}^2\), then \(T\) is a rotation in \(W\). Moreover, \(T(\mathbf{u}) = \mathbf{u}\) holds for all vectors \(\mathbf{u}\) in the \((n - 2)\)-dimensional subspace \(U = span\{\mathbf{f}_1, \dots, \mathbf{f}_{i-1}, \mathbf{f}_{i+2}, \dots, \mathbf{f}_n\}\), and \(U\) is called the fixed axis of the rotation \(T\). In \(\mathbb{R}^3\), the axis of any rotation is a line (one-dimensional), whereas in \(\mathbb{R}^2\) the axis is \(U = \{\mathbf{0}\}\).
With these definitions, the following theorem is an immediate consequence of Theorem [thm:032367] (the details are left to the reader).
Let \(T : V \to V\) be an isometry of a finite dimensional inner product space \(V\). Then there exist isometries \(T_1, \dots, T_k\) such that
\[T = T_k T_{k-1} \cdots T_2 T_1 \nonumber \]
where each \(T_i\) is either a rotation or a reflection, at most one is a reflection, and \(T_iT_j = T_jT_i\) holds for all \(i\) and \(j\). Furthermore, \(T\) is a composite of rotations if and only if \(\det T = 1\).
This page titled 10.4: Isometries is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform.