
Finding the matrix of a projection operator. Projection operators. Some examples of linear operators

The matrix of a linear operator

Let $A\colon X \to Y$ be a linear operator, where the spaces $X$ and $Y$ are finite-dimensional, with $\dim X = n$ and $\dim Y = m$.

Let us fix arbitrary bases: $e = (e_1, \dots, e_n)$ in $X$ and $f = (f_1, \dots, f_m)$ in $Y$.

We pose the following problem: for an arbitrary vector $x \in X$, compute the coordinates of the vector $y = Ax$ in the basis $f$.

Introducing the row of vectors $(Ae_1, \dots, Ae_n)$ consisting of the images of the basis vectors, we get:

$y = Ax = A(e_1 x_1 + \dots + e_n x_n) = (Ae_1)x_1 + \dots + (Ae_n)x_n = (Ae_1, \dots, Ae_n)\,\xi,$

where $\xi$ is the coordinate column of the vector $x$ in the basis $e$.

Note that the second equality in this chain holds precisely due to the linearity of the operator.

Let us expand the system of vectors $Ae_1, \dots, Ae_n$ in the basis $f$:

$Ae_k = \sum_{i=1}^{m} a_{ik} f_i, \qquad k = 1, \dots, n,$

where the $k$-th column of the matrix $A_{ef} = (a_{ik})$ is the coordinate column of the vector $Ae_k$ in the basis $f$.

We finally obtain

$\eta = A_{ef}\,\xi,$

where $\eta$ is the coordinate column of $y = Ax$ in the basis $f$.

So, in order to compute the coordinate column of a vector's image in the chosen basis of the second space, it suffices to multiply the coordinate column of the vector in the chosen basis of the first space on the left by the matrix whose columns are the coordinate columns of the images of the basis vectors of the first space in the basis of the second space.

The matrix $A_{ef}$ is called the matrix of the linear operator in the given pair of bases.

We will denote the matrix of a linear operator by the same letter as the operator itself, but in upright (non-italic) type. Sometimes we will use the notation $A_{ef}$, often dropping the reference to the bases when this does no harm to precision.

For a linear transformation (i.e., when $Y = X$) one can speak of its matrix in a given basis.

As an example, consider the matrix of the projection operator from Example 1.7 (regarding it as a transformation of the space of geometric vectors). As the basis we choose the usual basis $i, j, k$.

Since $Pi = i$, $Pj = j$ and $Pk = 0$, the matrix of the operator of projection onto the plane in the basis $i, j, k$ has the form:

$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$

Note that if we regarded the projection operator as a mapping from the space of all geometric vectors into the space of all geometric vectors lying in the plane, then, taking $i, j$ as a basis of the latter space, we would obtain the matrix:

$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$
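
As a numerical check (a minimal numpy sketch of our own, with the basis $i, j, k$ identified with the canonical basis of $\mathbb{R}^3$), both matrices act exactly as described:

```python
import numpy as np

# Projection onto the plane of i, j, regarded as a transformation of R^3:
# the columns are the coordinate columns of the images of i, j, k.
P = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

x = np.array([3, -2, 5])
print(P @ x)        # [ 3 -2  0]: the k-component is annihilated

# The same operator regarded as a map INTO the plane with basis i, j:
P23 = np.array([[1, 0, 0],
                [0, 1, 0]])
print(P23 @ x)      # [ 3 -2]: coordinates in the basis i, j of the plane
```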

If we regard an arbitrary $m \times n$ matrix as a linear operator mapping the arithmetic space $\mathbb{R}^n$ into the arithmetic space $\mathbb{R}^m$, and choose the canonical basis in each of these spaces, we find that the matrix of this linear operator in such a pair of bases is exactly the matrix that defines the operator; that is, in this case the matrix and the linear operator may be identified (just as, when the canonical basis is chosen in an arithmetic vector space, a vector may be identified with the column of its coordinates in that basis).

But it would be a gross mistake to identify a vector as such, or a linear operator as such, with its representation in one or another basis (as a column or a matrix). Both a vector and a linear operator are in essence geometric, invariant objects, defined independently of any basis. Thus, when we draw a geometric vector as a directed segment, it is defined completely invariantly: while drawing it we make no reference to bases, coordinate systems and the like, and we can operate with it purely geometrically. It is another matter that for the convenience of such operations, for the convenience of computing with vectors, we build a certain algebraic apparatus, introducing coordinate systems, bases and the associated purely algebraic technique of computation with vectors.

Figuratively speaking, a vector as a "bare" geometric object is "dressed" in various coordinate garments depending on the choice of basis. A person can put on the most diverse clothes without any change of their essence as a person; yet it is also true that not every outfit suits a given situation (you would not go to the beach in a concert dress). Likewise, not every basis is suitable for solving a given problem, just as a purely geometric solution may turn out to be too complicated. We shall see in our course how, for the solution of such a seemingly purely geometric problem as the classification of second-order surfaces, a rather complicated and beautiful algebraic theory is built.

Understanding the difference between a geometric object and its representation in one or another basis is fundamental to the understanding of linear algebra. Moreover, the geometric object need not be a geometric vector at all. Thus, if we are given an arithmetic vector, it can be identified with the column of its coordinates in the canonical basis (see the first semester).

But let us introduce another basis, formed by some new vectors (check that they really form a basis!), and, using the transition matrix, recompute our coordinates.

We obtain a completely different column, yet it represents the same arithmetic vector, only in another basis.
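
The specific new basis from this example is not preserved in the text, so here is a small sketch with a basis of $\mathbb{R}^2$ chosen by us for illustration; the columns of the transition matrix $T$ are the new basis vectors, and the new coordinates are obtained as $T^{-1}x$:

```python
import numpy as np

x = np.array([4.0, 2.0])       # coordinates in the canonical basis

# A new basis g1 = (1, 1), g2 = (1, -1), written as the columns of T.
T = np.array([[1.0,  1.0],
              [1.0, -1.0]])
assert np.linalg.det(T) != 0   # check that it really is a basis!

x_new = np.linalg.solve(T, x)  # the coordinates in the basis g1, g2
print(x_new)                   # [3. 1.]: a completely different column...
print(T @ x_new)               # [4. 2.]: ...but the same arithmetic vector
```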

The same can be said about vectors and linear operators alike: what the coordinate column is for a vector, the matrix is for a linear operator.

So (to repeat once more), one must clearly distinguish the invariant, geometric objects themselves, namely the vector and the linear operator, from their representations in one or another basis (we are speaking, of course, of finite-dimensional linear spaces).

Let us now turn to the problem of transforming the matrix of a linear operator when passing from one pair of bases to another.

Let $e', f'$ be a new pair of bases in $X$ and $Y$ respectively, with transition matrices $S$ (from $e$ to $e'$) and $T$ (from $f$ to $f'$).

Then (denoting by $A_{e'f'}$ the operator's matrix in the pair of primed bases) we get:

$Ae' = A(eS) = (Ae)S = (f A_{ef})S = f(A_{ef} S).$

But on the other hand,

$Ae' = f' A_{e'f'} = (fT) A_{e'f'} = f(T A_{e'f'}),$

whence, by the uniqueness of the expansion of a vector in a basis,

$T A_{e'f'} = A_{ef} S, \qquad A_{e'f'} = T^{-1} A_{ef}\, S.$

For a linear transformation the formula takes a simpler form:

$A_{e'} = S^{-1} A_e\, S.$

Matrices related in this way are called similar.

It is easy to see that the determinants of similar matrices coincide: $\det(S^{-1} A S) = \det(S^{-1}) \det A \det S = \det A$.
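
A quick numerical confirmation (the matrices $A$ and $S$ here are our own arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 2.0],
              [1.0, 1.0]])              # any non-degenerate matrix will do

B = np.linalg.inv(S) @ A @ S            # a matrix similar to A
print(np.linalg.det(A))                 # 6.0
print(np.linalg.det(B))                 # 6.0 (up to rounding)
```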

Let us introduce the concept of the rank of a linear operator.

By definition, this is the number equal to the dimension of the image of the operator:

$\operatorname{rank} A = \dim \operatorname{im} A.$

Let us prove the following important statement:

Proposition 1.10. The rank of a linear operator coincides with the rank of its matrix, regardless of the choice of bases.

Proof. First of all, note that the image of a linear operator is the linear span of the system $Ae_1, \dots, Ae_n$, where $e_1, \dots, e_n$ is a basis of the space $X$.

Indeed,

$Ax = A\left(\sum_{k=1}^{n} x_k e_k\right) = \sum_{k=1}^{n} x_k\, Ae_k,$

whatever the numbers $x_k$ may be, and this means that $\operatorname{im} A$ is the indicated linear span.

The dimension of a linear span, as we know (see Section 1.2), coincides with the rank of the corresponding system of vectors.

We proved earlier (Section 1.3) that if a system of vectors is expanded in some basis in the form

$g_k = \sum_{i} c_{ik} f_i,$

then the system is linearly independent precisely when the columns of the matrix $C = (c_{ik})$ are linearly independent. A stronger assertion can be proved (we omit the proof): the rank of the system of vectors is equal to the rank of the matrix $C$; moreover, this result does not depend on the choice of basis, since multiplying a matrix by a non-degenerate transition matrix does not change its rank.

Since

$\operatorname{im} A = \operatorname{span}(Ae_1, \dots, Ae_n),$

and the coordinate columns of the vectors $Ae_k$ are precisely the columns of the matrix $A_{ef}$, we conclude that $\operatorname{rank} A = \operatorname{rank} A_{ef}$. Since, obviously, the ranks of similar matrices coincide, this result does not depend on the choice of a particular basis.

The proposition is proved.

For a linear transformation of some finite-dimensional linear space we can also introduce the concept of the determinant of this transformation, defined as the determinant of its matrix in an arbitrarily fixed basis, since the matrices of a linear transformation in different bases are similar and therefore have the same determinant.

Using the concept of the matrix of a linear operator, let us prove the following important relation: for any linear transformation $A$ of an $n$-dimensional linear space,

$\dim \ker A + \dim \operatorname{im} A = n.$

Choose an arbitrary basis in the space. Then the kernel consists of those and only those vectors whose coordinate columns are solutions of the homogeneous system

$A\xi = 0, \qquad (1)$

where $A$ is the matrix of the transformation in the chosen basis; namely, a vector belongs to $\ker A$ if and only if its coordinate column is a solution of system (1).

In other words, the kernel is isomorphic to the solution space of system (1). Consequently, the dimensions of these spaces coincide. But the dimension of the solution space of system (1) equals $n - r$, as we already know, where $r$ is the rank of the matrix. And we have just proved that $r = \operatorname{rank} A = \dim \operatorname{im} A$, which gives the required relation.
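
The relation can be observed numerically; in this sketch (our own example) the rank and an explicit basis of the kernel are extracted independently from the SVD:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])     # a deliberately rank-deficient matrix

n = A.shape[1]
U, s, Vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))       # dim im A

kernel_basis = Vh[rank:]            # rows spanning ker A
assert np.allclose(A @ kernel_basis.T, 0)

print(rank + len(kernel_basis) == n)   # True: dim im A + dim ker A = n
```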

1. Projection operators and idempotents of rings

Let the vector space $V$ be the direct sum of subspaces $W$ and $L$: $V = W \oplus L$. By the definition of a direct sum, this means that each vector $v \in V$ can be represented uniquely in the form $v = w + l$, $w \in W$, $l \in L$.

Definition 1. If $V = W \oplus L$, then the mapping $P$ that assigns to each vector $v \in V$ its component (projection) $w \in W$ is called the projector of the space $V$ onto the space $W$. It is also called the projection operator.

Obviously, if $w \in W$, then $P(w) = w$. From this it follows that $P$ has the following remarkable property: $P^2 = P$.

Definition 2. An element $e$ of a ring $K$ is called an idempotent (i.e., "like a unit") if $e^2 = e$.

In the ring of integers there are only two idempotents: 1 and 0. The situation is different in the ring of matrices: for example, any diagonal matrix with only zeros and ones on the diagonal is an idempotent. Matrices of projection operators are also idempotents. The operators corresponding to them are called idempotent operators.

Consider now a direct sum of $n$ subspaces of $V$:

$V = V_1 \oplus V_2 \oplus \dots \oplus V_n.$

Then, just as in the case of a direct sum of two subspaces, we obtain $n$ projection operators $P_1, P_2, \dots, P_n$. They have the property $P_i P_j = P_j P_i = 0$ for $i \neq j$.

Definition 3. Idempotents $e_i$ and $e_j$ ($i \neq j$) are called orthogonal if $e_i e_j = e_j e_i = 0$. Consequently, the projectors $P_1, \dots, P_n$ are pairwise orthogonal idempotents.

From the fact that $Iv = v$ for every vector $v$, and from the rule for adding linear operators, it follows that

$I = P_1 + P_2 + \dots + P_n.$

This decomposition is called a decomposition of the identity into a sum of idempotents.

Definition 4. An idempotent $e$ is called minimal if it cannot be represented as a sum of idempotents other than $e$ and $0$.
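
A minimal sketch (our own example in $\mathbb{R}^3$): for an orthogonal direct sum $V = W \oplus L$, the two projectors are idempotent, mutually orthogonal, and give a decomposition of the identity:

```python
import numpy as np

# W is spanned by w; L is taken as its orthogonal complement.
w = np.array([[1.0], [1.0], [0.0]])
P1 = w @ w.T / (w.T @ w)           # projector onto W
P2 = np.eye(3) - P1                # projector onto L

for P in (P1, P2):
    assert np.allclose(P @ P, P)               # idempotency: P^2 = P
assert np.allclose(P1 @ P2, np.zeros((3, 3)))  # orthogonal idempotents
assert np.allclose(P1 + P2, np.eye(3))         # I = P1 + P2
print("all three identities hold")
```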

2. Canonical decomposition of a representation

Definition 5. The canonical decomposition of a representation $T(G)$ is its decomposition of the form $T(G) = n_1 T_1(G) + n_2 T_2(G) + \dots + n_t T_t(G)$, in which equivalent irreducible representations $T_i(G)$ are grouped together, $n_i$ being the multiplicity with which the irreducible representation $T_i(G)$ enters the decomposition of $T(G)$.

Theorem 1. The canonical decomposition of a representation is determined by means of projection operators of the form

$P_i = \dfrac{m_i}{|G|} \sum_{g \in G} \overline{\chi_i(g)}\; T(g), \qquad i = 1, 2, \dots, t, \qquad (31)$

where $|G|$ is the order of the group $G$; $m_i$ ($i = 1, 2, \dots, t$) are the degrees of the representations $T_i(G)$; and $\chi_i(g)$ ($i = 1, 2, \dots, t$) are the characters of the irreducible representations $T_i(G)$. The multiplicity $n_i$ is determined by the formula

$n_i = \dfrac{1}{|G|} \sum_{g \in G} \chi(g)\, \overline{\chi_i(g)}, \qquad (32)$

where $\chi(g)$ is the character of the representation $T(G)$.

3. Projection operators associated with matrices of irreducible representations of groups

With the help of formulas (31) one can obtain only the canonical decomposition of a representation. In the general case it is necessary to use the matrices of the irreducible representations, which allow one to construct the appropriate projection operators.

Theorem 2. Let $t_{jk}^{(r)}(g)$ be the matrix elements of the irreducible representation $T_r(g)$ of the group $G$. An operator of the form

$P_{jk}^{(r)} = \dfrac{m_r}{|G|} \sum_{g \in G} \overline{t_{jk}^{(r)}(g)}\; T(g) \qquad (33)$

is a projection operator and is called the Wigner operator. In expression (33), $m_r$ is the dimension of the representation $T_r(G)$.

4. Decomposition of a representation into a direct sum of irreducible representations using the Wigner operator

Denote by $M$ the module associated with the representation $T$. To the irreducible representations $T_1, T_2, \dots, T_t$ there correspond, according to what was described earlier (see § 4), the irreducible submodules $M_1, M_2, \dots, M_t$. The decomposition of the module $M$

$M = n_1 M_1 \oplus n_2 M_2 \oplus \dots \oplus n_t M_t \qquad (34)$

is called the canonical decomposition of the module $M$. Denote $n_i M_i = L_i$, so that

$M = L_1 \oplus L_2 \oplus \dots \oplus L_t. \qquad (35)$

The irreducible submodules of the modules $L_i$ we denote by

$M_i^{(s)}, \qquad s = 1, 2, \dots, n_i; \quad i = 1, 2, \dots, t. \qquad (36)$

It is these modules that we need to find.

Suppose for a moment that the problem has been solved. Then in each of the modules $M_i^{(s)}$ ($s = 1, 2, \dots, n_i$) an orthonormal basis has been found in which the operator is represented by the matrix $T_i(g)$ of the irreducible representation $T_i$; the action of the operator on the basis (according to the rule from § 3) is given by the formula

$T(g)\, e_{ij}^{(s)} = \sum_{k=1}^{m_i} t_{kj}^{(i)}(g)\, e_{ik}^{(s)}, \qquad j = 1, 2, \dots, m_i. \qquad (37)$

In this expression $m_i$ is the dimension of the irreducible representation $T_i$ ($i = 1, 2, \dots, t$), and $e_{ij}^{(s)}$ are the basis elements with index $j$ from the irreducible submodule $M_i^{(s)}$. Now arrange the basis elements of $L_i$, for fixed $i$, as follows:

$\begin{matrix} e_{i1}^{(1)} & e_{i2}^{(1)} & \dots & e_{i m_i}^{(1)} \\ e_{i1}^{(2)} & e_{i2}^{(2)} & \dots & e_{i m_i}^{(2)} \\ \vdots & \vdots & & \vdots \\ e_{i1}^{(n_i)} & e_{i2}^{(n_i)} & \dots & e_{i m_i}^{(n_i)} \end{matrix} \qquad (38)$

The rows of the array (38) are the bases of the modules $M_i^{(1)}, M_i^{(2)}, \dots, M_i^{(n_i)}$. Letting $i$ range from 1 to $t$, we obtain the desired basis of the entire module $M$, consisting of $m_1 n_1 + m_2 n_2 + \dots + m_t n_t$ elements.

Consider now the operator

$P_{jj}^{(i)} = \dfrac{m_i}{|G|} \sum_{g \in G} \overline{t_{jj}^{(i)}(g)}\; T(g) \qquad (39)$

acting in the module $M$ ($j$ fixed). By Theorem 2 it is a projection operator. Therefore this operator leaves unchanged all the basis elements $e_{ij}^{(s)}$ ($s = 1, 2, \dots, n_i$) standing in the $j$-th column of the array (38), and turns all the other basis vectors into zero. Denote by $M_{ij}$ the vector space spanned by the orthogonal system of vectors in the $j$-th column of the array (38). Then we can say that $P_{jj}^{(i)}$ is the projection operator onto the space $M_{ij}$. This operator is known, since the diagonal elements $t_{jj}^{(i)}(g)$ of the matrices of the irreducible representations of the group are known, as is the operator $T(g)$.

Now we can solve our problem.

We choose $n_i$ arbitrary basis vectors in $M$ and apply the projection operator (39) to them. The resulting vectors lie in the space $M_{ij}$ and are linearly independent. They are not necessarily orthogonal and normalized, so we orthonormalize the resulting system of vectors according to the rule from § 2. The resulting system of vectors we denote by $e_{ij}^{(s)}$, in accordance with the notation adopted under the assumption that the problem is solved. As already noted, here $j$ is fixed and $s = 1, 2, \dots, n_i$. Denote by $e_{if}^{(s)}$ ($f = 1, 2, \dots, j-1, j+1, \dots, m_i$) the remaining elements of the basis of the module $L_i$ of dimension $n_i m_i$. Introduce the following operator:

$P_{fj}^{(i)} = \dfrac{m_i}{|G|} \sum_{g \in G} \overline{t_{fj}^{(i)}(g)}\; T(g). \qquad (40)$

From the orthogonality relations for the matrices of irreducible representations it follows that this operator makes it possible to obtain $e_{if}^{(s)}$ by the formula

$e_{if}^{(s)} = P_{fj}^{(i)}\, e_{ij}^{(s)}, \qquad i = 1, 2, \dots, t. \qquad (41)$

All this can be expressed in the form of the following algorithm.

To find a basis of the module $M$ consisting of elements that transform according to the irreducible representations $T_i$ contained in the representation $T$ associated with the module $M$, it is necessary to:

1. Using formula (32), find the dimensions of the subspaces $M_{ij}$ corresponding to the $j$-th component of the irreducible representation $T_i$.

2. Using the projection operator (39), find all the subspaces $M_{ij}$.

3. In each subspace $M_{ij}$, choose an arbitrary orthonormal basis.

4. Using formula (41), find all the basis elements that transform according to the remaining components of the irreducible representation $T_i$.

A numerical illustration of formulas (31) and (32) is given after this list.
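
As a concrete illustration of the character part of this machinery, here is a sketch (our own example, not from the text) for the permutation representation of the group $S_3$ on $\mathbb{R}^3$: the multiplicities are computed by formula (32), and projectors of the type (31) are built for each irreducible representation. The rank of each projector equals $n_i m_i$, the dimension of the component $L_i$:

```python
import numpy as np
from itertools import permutations

# Permutation representation of S3 on R^3: T(g) sends e_i to e_{g(i)}.
perms = list(permutations(range(3)))

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

def sign(p):                       # parity via inversion count
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

T = [perm_matrix(p) for p in perms]
chi = np.array([np.trace(M) for M in T])   # character of T: fixed points

# Characters of the three irreducibles of S3 (all real, so no conjugation
# is needed in (31) and (32)): trivial, sign, 2-dim standard (fix - 1).
irreducibles = [("trivial", 1, np.ones(6)),
                ("sign", 1, np.array([sign(p) for p in perms], float)),
                ("standard", 2, chi - 1.0)]

for name, m_i, chi_i in irreducibles:
    n_i = np.dot(chi, chi_i) / 6.0                       # formula (32)
    P_i = (m_i / 6.0) * sum(c * M for c, M in zip(chi_i, T))  # type (31)
    print(name, " multiplicity:", n_i,
          " rank of projector (= n_i * m_i):", np.linalg.matrix_rank(P_i))
```

Running this prints multiplicities 1, 0, 1, with projector ranks 1, 0, 2: the permutation representation contains the trivial and the standard representation once each, and the sign representation not at all.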

Dirac's bra and ket vectors are remarkable in that with their help one can write down products of various types.

The product of a bra vector and a ket vector is called the scalar product, or inner product. In fact, it is the standard matrix product by the "row times column" rule. Its result is a complex number.

The product of a ket vector with another ket vector gives not a number but another ket vector. It is again a column vector, but with the number of components equal to the product of the numbers of components of the original vectors. Such a product is called the tensor product, or Kronecker product.

The same holds for the product of two bra vectors: we get a longer row vector.

The last remaining case is the multiplication of a ket vector by a bra vector, that is, multiplying a column by a row. Such a product is also called the tensor, or outer, product. The result is a matrix, that is, an operator.
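
A numpy sketch of all four products (our own illustration; kets are columns, bras are their conjugate transposes):

```python
import numpy as np

ket_v = np.array([[1.0 + 1.0j], [2.0 - 1.0j]])   # a ket: a column vector
ket_w = np.array([[0.5j], [1.0]])
bra_v = ket_v.conj().T                           # the corresponding bra: a row

inner = bra_v @ ket_w           # <v|w>: a 1x1 matrix, i.e. a complex number
print(inner[0, 0])

outer = ket_w @ bra_v           # |w><v|: a 2x2 matrix, i.e. an operator
print(outer)

tensor = np.kron(ket_v, ket_w)  # |v> (x) |w>: a ket with 2*2 = 4 components
print(tensor.shape)             # (4, 1)
```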

Consider an example of using such operators.

Take an arbitrary Hermitian operator $A$. According to the postulates of quantum mechanics, it corresponds to some observable. The eigenvectors of a Hermitian operator form a basis, and the most general state vector can be decomposed in this basis, that is, represented as a sum of the basis vectors with certain complex coefficients. This fact is known as the principle of superposition. Let us rewrite the expression using the summation sign:

$|\psi\rangle = \sum_i c_i\, |a_i\rangle.$

But the coefficients in the decomposition of a vector in a basis are probability amplitudes, that is, the scalar products of the state vector with the corresponding basis vectors: $c_i = \langle a_i | \psi \rangle$. Let us write this amplitude to the right of the vector:

$|\psi\rangle = \sum_i |a_i\rangle \langle a_i | \psi \rangle.$

The expression under the summation sign can be viewed as the multiplication of the ket vector $|a_i\rangle$ by a complex number, the probability amplitude. On the other hand, it can be viewed as the product of the matrix $|a_i\rangle\langle a_i|$, obtained by multiplying a ket vector by a bra vector, with the original ket vector. The ket vector $|\psi\rangle$ can be taken outside the summation sign. On the right and on the left of the equality sign stands the same vector $|\psi\rangle$. This means that the whole sum does nothing to the vector and is therefore equal to the identity matrix:

$\sum_i |a_i\rangle \langle a_i| = I.$

This formula is very useful when manipulating expressions with products of bras and kets: after all, the identity can be inserted at any place in a product.
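
A sketch of this completeness relation for the eigenbasis of a Hermitian operator (the matrix is an arbitrary example of ours):

```python
import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])       # a Hermitian operator
assert np.allclose(A, A.conj().T)

_, vecs = np.linalg.eigh(A)             # columns: orthonormal eigenkets |a_i>
S = sum(np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(2))
print(np.allclose(S, np.eye(2)))        # sum_i |a_i><a_i| = I  ->  True
```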

Let us see what the matrices entering this sum, obtained as the tensor product of a basis ket vector with its Hermitian conjugate, look like. Again, for clarity, we draw an analogy with ordinary vectors in three-dimensional space.

We choose unit basis vectors $e_x$, $e_y$, $e_z$ directed along the coordinate axes. The tensor product of the vector $e_x$ with its conjugate is represented by the matrix

$P_x = e_x e_x^{T} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$

Take an arbitrary vector $v$. What happens when this matrix is multiplied by the vector? The matrix simply zeroes out all components of the vector except the $x$-component. The result is a vector directed along the $x$ axis, that is, the projection of the original vector onto the basis vector $e_x$. It turns out that our matrix is nothing but a projection operator.

The remaining two projection operators, for the basis vectors $e_y$ and $e_z$, are represented by analogous matrices and perform an analogous function: they zero out all components of the vector except one.

What happens when projection operators are summed? Let us add, for example, the operators $P_x$ and $P_y$. The resulting matrix zeroes out only the $z$-component of a vector, and the resulting vector always lies in the $xy$ plane. That is, we obtain the projection operator onto the $xy$ plane.

Now it is clear why the sum of all the projection operators onto the basis vectors equals the identity matrix: in our example we obtain the projection of a three-dimensional vector onto the whole three-dimensional space itself. The identity matrix is, in essence, the projector onto the entire space.

It turns out that specifying a projection operator is equivalent to specifying a subspace of the original space. In the case under consideration of three-dimensional Euclidean space, this can be a one-dimensional line defined by a single vector or a two-dimensional plane defined by a pair of vectors.

Returning to quantum mechanics with its state vectors in Hilbert space, we can say that projection operators specify a subspace and project the state vector into that subspace of Hilbert space.

Let us list the main properties of projection operators.

  1. Applying the same projection operator twice in succession is equivalent to applying it once. This property is usually written as $P^2 = P$. Indeed, if the first operator has carried the vector into the subspace, the second will do nothing with it: the vector already lies in that subspace.
  2. Projection operators are Hermitian operators; accordingly, in quantum mechanics they correspond to observables.
  3. The eigenvalues of a projection operator, whatever its dimension, are only the numbers one and zero: a vector either lies in the subspace or it does not. Owing to this binary nature, the observable described by a projection operator can be formulated as a question whose answer is "yes" or "no". For example: is the spin of the first electron in the singlet state directed up along the $z$ axis? To this question one can associate a projection operator, and quantum mechanics makes it possible to compute the probabilities of the answers "yes" and "no". (A numerical sketch of these properties follows the list.)
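
Here is that sketch (our own example in $\mathbb{C}^2$): all three properties are checked for the projector onto a basis ket, with the probability of the answer "yes" in a state $|\psi\rangle$ computed as $\langle\psi|P|\psi\rangle$:

```python
import numpy as np

P = np.array([[1.0, 0.0],        # projector onto the subspace spanned by |0>
              [0.0, 0.0]])
assert np.allclose(P @ P, P)             # property 1: P^2 = P
assert np.allclose(P, P.conj().T)        # property 2: P is Hermitian
print(np.linalg.eigvalsh(P))             # property 3: eigenvalues [0. 1.]

psi = np.array([3.0, 4.0j]) / 5.0        # a normalized state vector
p_yes = np.vdot(psi, P @ psi).real       # <psi|P|psi>
print(p_yes, 1.0 - p_yes)                # 0.36 0.64
```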

We will return to projection operators later.


