
Linearly dependent and linearly independent rows of a matrix. Linear independence. §4.9. Rank of a matrix

The rows and columns of a matrix can be regarded as row matrices and column matrices, respectively. Therefore, like any other matrices, they admit linear operations. The only restriction on the addition operation is that the rows (columns) must have the same length (height), but this condition is always satisfied for the rows (columns) of a single matrix.

Linear operations on rows (columns) make it possible to form expressions of the form α_1 a_1 + ... + α_s a_s, where a_1, ..., a_s is an arbitrary set of rows (columns) of the same length (height), and α_1, ..., α_s are real numbers. Such expressions are called linear combinations of rows (columns).

Definition 12.3. Rows (columns) a_1, ..., a_s are called linearly independent if the equality

α_1 a_1 + ... + α_s a_s = 0, (12.1)

where 0 on the right-hand side is the zero row (column), is possible only for α_1 = ... = α_s = 0. Otherwise, i.e. when there exist real numbers α_1, ..., α_s, not all equal to zero, such that equality (12.1) holds, these rows (columns) are called linearly dependent.
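As a small computational aside (not from the original text), the condition in Definition 12.3 can be tested directly: put the rows a_1, ..., a_s as the columns of a matrix and look for a nontrivial null-space vector; its entries are exactly coefficients α_1, ..., α_s satisfying (12.1). The rows below are invented for illustration.

```python
from sympy import Matrix

# Three sample rows; a3 = a1 + 2*a2, so the system should come out linearly dependent.
a1 = [1, 2, 3]
a2 = [0, 1, 1]
a3 = [1, 4, 5]

# Put the rows as columns of A; then A * (alpha_1, alpha_2, alpha_3)^T = 0
# is exactly equality (12.1) written out component by component.
A = Matrix([a1, a2, a3]).T

null_vectors = A.nullspace()
if null_vectors:
    print("linearly dependent, e.g. alpha =", list(null_vectors[0]))
else:
    print("linearly independent: only the trivial combination gives the zero row")
```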

The following statement is known as the criterion of linear dependence.

Theorem 12.3. Rows (columns) a_1, ..., a_s, s > 1, are linearly dependent if and only if at least one of them is a linear combination of the others.

◄ We carry out the proof for rows; for columns it is analogous.

Necessity. If the rows a_1, ..., a_s are linearly dependent, then, according to Definition 12.3, there exist real numbers α_1, ..., α_s, not all equal to zero, such that α_1 a_1 + ... + α_s a_s = 0. Choose a nonzero coefficient α_i; for definiteness, let it be α_1. Then α_1 a_1 = (−α_2) a_2 + ... + (−α_s) a_s and, consequently, a_1 = (−α_2/α_1) a_2 + ... + (−α_s/α_1) a_s, i.e. the row a_1 is represented as a linear combination of the remaining rows.

Sufficiency. Let, for example, a_1 = λ_2 a_2 + ... + λ_s a_s. Then 1·a_1 + (−λ_2) a_2 + ... + (−λ_s) a_s = 0. The first coefficient of this linear combination equals one, i.e. it is nonzero. According to Definition 12.3, the rows a_1, ..., a_s are linearly dependent. ►

Theorem 12.4. Let the rows (columns) a_1, ..., a_s be linearly independent, and let at least one of the rows (columns) b_1, ..., b_l be their linear combination. Then all the rows (columns) a_1, ..., a_s, b_1, ..., b_l are linearly dependent.

◄ Let, for example, b_1 be a linear combination of a_1, ..., a_s, i.e. b_1 = α_1 a_1 + ... + α_s a_s, α_i ∈ R, i = 1, ..., s. Add to this linear combination the rows (columns) b_2, ..., b_l (for l > 1) with zero coefficients: b_1 = α_1 a_1 + ... + α_s a_s + 0·b_2 + ... + 0·b_l. According to Theorem 12.3, the rows (columns) a_1, ..., a_s, b_1, ..., b_l are linearly dependent. ►

Note that the rows and columns of a matrix can be viewed as arithmetic vectors of dimensions n and m, respectively. Thus, an m×n matrix can be interpreted as a collection of m n-dimensional or of n m-dimensional arithmetic vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

4.8.1. Definition. A row a is called a linear combination of rows a_1, ..., a_s with coefficients α_1, ..., α_s if the equality

a = α_1 a_1 + ... + α_s a_s

holds, i.e. for all elements of this row

a_j = α_1 a_1j + ... + α_s a_sj,   j = 1, 2, ..., n.

4.8.2. Definition.

Rows a_1, ..., a_s are called linearly dependent if there exists a non-trivial linear combination of them equal to the zero row, i.e. there exist numbers α_1, ..., α_s, not all equal to zero, such that

α_1 a_1 + ... + α_s a_s = 0.

4.8.3. Definition.

Rows a_1, ..., a_s are called linearly independent if only their trivial linear combination is equal to the zero row, i.e. the equality

α_1 a_1 + ... + α_s a_s = 0

implies α_1 = ... = α_s = 0.

4.8.4. Theorem. (Criterion of linear dependence of the rows of a matrix)

For rows to be linearly dependent, it is necessary and sufficient that at least one of them be a linear combination of the others.

Proof:

Necessity. Let the rows a_1, ..., a_s be linearly dependent; then there exists a non-trivial linear combination of them equal to the zero row:

α_1 a_1 + ... + α_s a_s = 0.

Without loss of generality, suppose that the first coefficient of this linear combination is different from zero (otherwise the rows can be renumbered). Dividing the relation by α_1, we get

a_1 = (−α_2/α_1) a_2 + ... + (−α_s/α_1) a_s,

that is, the first row is a linear combination of the others.

Sufficiency. Let one of the rows, for example a_1, be a linear combination of the others: a_1 = λ_2 a_2 + ... + λ_s a_s. Then

1·a_1 + (−λ_2) a_2 + ... + (−λ_s) a_s = 0,

that is, there exists a non-trivial linear combination of the rows a_1, ..., a_s equal to the zero row, so the rows a_1, ..., a_s are linearly dependent, which was required to prove.
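The criterion can also be exercised numerically: when rows are dependent, one of them can be recovered as a combination of the others by solving a small linear system. A sketch with a made-up matrix (none of the data below comes from the text):

```python
import numpy as np

# Rows of a hypothetical matrix; the last row equals 2*r1 - r2.
rows = np.array([[1.0, 0.0, 2.0],
                 [3.0, 1.0, 1.0],
                 [-1.0, -1.0, 3.0]])

# Try to express the last row through the first two:
# solve  c1*r1 + c2*r2 = r3  in the least-squares sense and check the fit.
basis = rows[:2].T               # columns are r1 and r2
target = rows[2]
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)

if np.allclose(basis @ coeffs, target):
    print("r3 is a linear combination of r1, r2 with coefficients", coeffs)
else:
    print("r3 is not a linear combination of r1 and r2")
```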

Comment.

Similar definitions and statements can be formulated for the columns of a matrix.

§4.9. Rank of a matrix.

4.9.1. Definition. A minor of order k of a matrix of size m×n is the determinant of order k formed by the elements located at the intersection of some k of its rows and k of its columns.

4.9.2. Definition. A nonzero minor of order r of a matrix of size m×n is called a basis minor if all minors of order r+1 of the matrix are equal to zero.

Comment. A matrix may have several basis minors. Obviously, they are all of the same order. It is also possible that in a matrix of size m×n a minor of order r is different from zero while minors of order r+1 do not exist at all, namely when r = min(m, n).

4.9.3. Definition. The rows (columns) forming a basis minor are called basis rows (columns).

4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted rg A or rank A.
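As an aside (an invented matrix, not one from the text): numerically the rank of Definition 4.9.4 is usually computed from singular values rather than from minors, but the result is the same number.

```python
import numpy as np

# A 3x4 matrix whose third row is the sum of the first two,
# so every third-order minor vanishes and the rank should be 2.
A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 2],
              [1, 3, 1, 3]], dtype=float)

print(np.linalg.matrix_rank(A))   # -> 2
```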

Comment.

Note that, since a determinant does not change under transposition, the rank of a matrix does not change when it is transposed.

4.9.5. Theorem. (Invariance of the rank of a matrix under elementary transformations)

The rank of a matrix does not change under elementary transformations of the matrix.

Without proof.

4.9.6. Theorem. (On the basis minor)

The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

Proof:

We carry out the proof for rows; the statement for columns is proved analogously.

Let the rank of a matrix A of size m×n be equal to r, and let M be a basis minor. Without loss of generality, suppose that the basis minor is located in the upper left corner of the matrix (otherwise the matrix can be brought to this form by elementary transformations).

We first prove the linear independence of the basis rows, arguing by contradiction. Suppose the basis rows are linearly dependent. Then, by Theorem 4.8.4, one of them can be represented as a linear combination of the other basis rows. Consequently, if this linear combination is subtracted from that row, we obtain a zero row, which means that the minor M is equal to zero; this contradicts the definition of a basis minor. The contradiction proves the linear independence of the basis rows.

We now prove that any row of the matrix can be represented as a linear combination of the basis rows. If the number k of the row under consideration is between 1 and r, then, obviously, the row can be represented as a linear combination with coefficient 1 at itself and zero coefficients at the remaining basis rows. Let us show that if the row number k is between r+1 and m, the row can also be represented as a linear combination of the basis rows. Consider the minor M' of order r+1 obtained from the basis minor M by adding the k-th row and an arbitrary j-th column:

M' = | a_11 ... a_1r  a_1j |
     | ...  ... ...   ...  |
     | a_r1 ... a_rr  a_rj |
     | a_k1 ... a_kr  a_kj |

We show that this minor M' is equal to zero for any row number k from r+1 to m and for any column number j from 1 to n.

Indeed, if the column number j is between 1 and r, we have a determinant with two identical columns, which is obviously zero. If the column number j is between r+1 and n (and the row number k is between r+1 and m), then M' is a minor of the original matrix of order greater than that of the basis minor, and hence it is zero by the definition of a basis minor. Thus, it is proved that the minor M' is zero for any row number k from r+1 to m and for any column number j from 1 to n. Expanding it along the last column, we get

a_1j A_1 + a_2j A_2 + ... + a_rj A_r + a_kj M = 0.

Here A_1, ..., A_r are the corresponding algebraic complements (cofactors) of the elements of the last column; they do not depend on the column number j, and the cofactor of a_kj equals M ≠ 0, since M is the basis minor. Consequently, the elements of the k-th row can be represented as a linear combination of the corresponding elements of the basis rows with coefficients c_i = −A_i/M that do not depend on the column number j:

a_kj = c_1 a_1j + c_2 a_2j + ... + c_r a_rj,   j = 1, 2, ..., n.

Thus, we have proved that an arbitrary row of the matrix can be represented as a linear combination of its basis rows. The theorem is proved.

Lecture 13.

4.9.7. Theorem. (On the rank of a nondegenerate square matrix)

For a square matrix to be nondegenerate, it is necessary and sufficient that the rank of the matrix be equal to the size of this matrix.

Proof:

Necessity. Let a square matrix of size n be nondegenerate; then det A ≠ 0, so the determinant of the matrix is a basis minor, i.e. rg A = n.

Sufficiency. Let rg A = n; then the order of the basis minor equals the size of the matrix, so the basis minor is the determinant of the matrix A, and therefore det A ≠ 0 by the definition of a basis minor.

Corollary.

For a square matrix to be nondegenerate, it is necessary and sufficient that its rows be linearly independent.

Proof:

Necessity. Since the square matrix is nondegenerate, its rank equals the size of the matrix, rg A = n, that is, the determinant of the matrix is a basis minor. Consequently, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

Sufficiency. Since all the rows of the matrix are linearly independent, its rank is not less than the size of the matrix, which means rg A = n; consequently, by the preceding Theorem 4.9.7, the matrix is nondegenerate.

4.9.8. The method of bordering minors for finding the rank of a matrix.

Note that this method was already partially and implicitly used in the proof of the basis minor theorem.

4.9.8.1. Definition. A minor M' is called a bordering minor for a minor M if it is obtained from M by adding one new row and one new column of the original matrix.

4.9.8.2. The procedure for finding the rank of a matrix by the method of bordering minors.

1. Find any nonzero minor of the matrix (the current minor).

2. Compute all the minors bordering it.

3. If all of them are zero, the current minor is a basis minor, and the rank of the matrix equals the order of the current minor.

4. If among the bordering minors there is at least one different from zero, it is taken as the new current minor and the procedure continues.

A code sketch of this procedure is given below.
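A minimal sketch of this procedure, assuming exact arithmetic via sympy; the function name and the test matrix are invented for illustration, since the matrices of the worked example below were lost in extraction.

```python
from sympy import Matrix

def rank_by_bordering_minors(A: Matrix) -> int:
    """Rank via bordering minors: grow a nonzero minor until every bordering minor vanishes."""
    m, n = A.shape
    # Step 1: a nonzero first-order minor (i.e. a nonzero entry), if any.
    start = next(((i, j) for i in range(m) for j in range(n) if A[i, j] != 0), None)
    if start is None:
        return 0                              # zero matrix
    rows, cols = [start[0]], [start[1]]
    while True:
        found = False
        for i in range(m):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                # Step 2: a bordering minor (current minor plus row i and column j).
                if A[rows + [i], cols + [j]].det() != 0:
                    rows.append(i)            # step 4: a nonzero bordering minor becomes current
                    cols.append(j)
                    found = True
                    break
            if found:
                break
        if not found:                         # step 3: all bordering minors vanish
            return len(rows)

A = Matrix([[1, 2, 1, 0],
            [2, 4, 2, 0],
            [0, 1, 1, 1]])
print(rank_by_bordering_minors(A), A.rank())  # both print 2
```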

Let us find, using the method of bordering minors, the rank of the matrix

.

It is easy to point out a nonzero second-order minor to serve as the current minor, for example,

.

We compute the minors bordering it:




Since all third-order bordering minors are equal to zero, the chosen second-order minor is a basis minor, that is, the rank of the matrix is equal to 2.

Comment. The example considered shows that the method is rather laborious. Therefore, in practice the method of elementary transformations, discussed below, is used much more often.

4.9.9. Finding the rank of a matrix by the method of elementary transformations.

By Theorem 4.9.5 the rank of a matrix does not change under elementary transformations (that is, the ranks of equivalent matrices are equal). Therefore, the rank of a matrix is equal to the rank of the echelon matrix obtained from the original one by elementary transformations. The rank of an echelon matrix, in turn, is obviously equal to the number of its nonzero rows.
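A minimal Gaussian-elimination sketch of this method in floating point (a tolerance stands in for the exact zero test); the matrix is invented, not the one from the example below.

```python
import numpy as np

def rank_by_elimination(A: np.ndarray, tol: float = 1e-12) -> int:
    """Reduce A to row-echelon form by elementary row operations and count nonzero rows."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        # Choose a pivot in the current column (partial pivoting for stability).
        pivot = row + int(np.argmax(np.abs(A[row:, col])))
        if abs(A[pivot, col]) < tol:
            continue                          # no pivot in this column
        A[[row, pivot]] = A[[pivot, row]]     # elementary transformation: swap rows
        # Eliminate the entries below the pivot (add a multiple of the pivot row).
        A[row + 1:] -= np.outer(A[row + 1:, col] / A[row, col], A[row])
        row += 1
        if row == m:
            break
    return row

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1],
              [0, 2, 2]])
print(rank_by_elimination(A))                 # -> 2
```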

Let us determine the rank of the matrix

by the method of elementary transformations.

We reduce the matrix to echelon form:

The number of nonzero rows of the resulting echelon matrix is three; therefore the rank of the matrix is equal to 3.

4.9.10. The rank of a system of vectors.

Consider a system of vectors a_1, ..., a_s of some linear space. If it is linearly dependent, a linearly independent subsystem can be selected in it.

4.9.10.1. Definition. The rank of a system of vectors a_1, ..., a_s of a linear space is the maximum number of linearly independent vectors of this system. The rank of the system of vectors is denoted rg(a_1, ..., a_s).

Comment. If a system of vectors is linearly independent, its rank is equal to the number of vectors in the system.

We now formulate a theorem showing the connection between the rank of a system of vectors of a linear space and the rank of a matrix.

4.9.10.2. Theorem. (On the rank of a system of vectors of a linear space)

The rank of a system of vectors of a linear space is equal to the rank of the matrix whose columns (or rows) are the coordinates of the vectors in some basis of this linear space.

Without proof.

Corollary.

For a system of vectors of a linear space to be linearly independent, it is necessary and sufficient that the rank of the matrix whose columns (or rows) are the coordinates of the vectors in some basis be equal to the number of vectors in the system.

The proof is obvious.
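For instance (made-up coordinates, not data from the text), stacking the coordinate columns and comparing the rank of the resulting matrix with the number of vectors tests linear independence:

```python
import numpy as np

# Coordinates of three vectors in some basis of a four-dimensional space.
v1 = [1, 0, 2, 1]
v2 = [0, 1, 1, 0]
v3 = [1, 1, 3, 1]                     # v3 = v1 + v2, so the system is dependent

M = np.array([v1, v2, v3]).T          # columns are the coordinate vectors
print(np.linalg.matrix_rank(M) == M.shape[1])   # False: the system is linearly dependent
```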

4.9.10.3. Theorem. (On the dimension of the linear span)

The dimension of the linear span of vectors a_1, ..., a_s of a linear space is equal to the rank of this system of vectors:

dim span(a_1, ..., a_s) = rg(a_1, ..., a_s).

Without proof.

where α_1, ..., α_s are some numbers (some of these numbers, or even all of them, may be zero). This means that the following equalities hold between the elements of the rows:

From (3.3.1) it follows that

If equality (3.3.3) is possible only for α_1 = ... = α_s = 0, the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed through the others, then the rows are linearly dependent.

It is easy to see the converse as well: if the rows are linearly dependent, then there is a row that is a linear combination of the remaining rows.

Suppose, for example, that α_1 ≠ 0 in (3.3.3); then the first row can be expressed as a linear combination of the remaining rows.

Definition. Suppose that in a matrix A there is some minor M of order r, and let a minor M' of order (r+1) of the same matrix contain the minor M entirely inside it. We shall say in this case that the minor M' borders the minor M (or that M' is a bordering minor for M).

Now we prove an important lemma.

Lemma on bordering minors. If a minor M of order r of a matrix A = (a_ij) is different from zero, and all the minors bordering it are equal to zero, then any row (column) of the matrix A is a linear combination of the rows (columns) that make up M.

Proof. Without loss of generality, we shall assume that the nonzero minor M of order r stands in the upper left corner of the matrix A = (a_ij):



.

For the first r rows of the matrix A the statement of the lemma is obvious: it suffices to include in the linear combination the row itself with coefficient equal to one and the remaining rows with coefficients equal to zero.

We now prove that the remaining rows of the matrix A are linearly expressed through the first r rows. To do this, we construct a minor of order (r+1) by adding to the minor M the k-th row and the l-th column of the matrix:

.

The resulting minor is equal to zero for all k and l. If l ≤ r, it is zero because it contains two identical columns. If l > r, the resulting minor is a bordering minor for M and is therefore zero by the hypothesis of the lemma.

Expanding this minor along the elements of the last (l-th) column, we obtain

a_1l A_1 + a_2l A_2 + ... + a_rl A_r + a_kl M = 0,

where A_1, ..., A_r are the corresponding cofactors (they do not depend on l) and the cofactor of a_kl equals M ≠ 0. Setting c_i = −A_i/M, we get:

a_kl = c_1 a_1l + c_2 a_2l + ... + c_r a_rl,   l = 1, 2, ..., n.   (3.3.6)

Expression (3.3.6) means that the k-th row of the matrix A is linearly expressed through its first r rows.

Since the minors of a matrix do not change under transposition (by the properties of determinants), everything proved is also valid for columns. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, a basis minor of the matrix is different from zero, and all the minors bordering it are equal to zero.

Corollary II. A determinant of order n is equal to zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of the rows (columns) for the determinant to equal zero was proved earlier as a property of determinants.

We prove the necessity. Let a square matrix of order n be given whose only minor of order n (its determinant) is zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.

Let us prove one more theorem about the rank of a matrix.

Theorem. The maximum number of linearly independent rows of a matrix is equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

Proof. Let the rank of the matrix A be equal to r. Then any r of its basis rows are linearly independent, since otherwise the basis minor would be equal to zero. On the other hand, any r+1 or more rows are linearly dependent. Indeed, assuming the contrary, we could find, by Corollary II of the preceding lemma, a nonzero minor of order greater than r. The latter contradicts the fact that the maximum order of nonzero minors equals r. Everything proved for rows is also true for columns.

In conclusion, we present one more method of finding the rank of a matrix. The rank of a matrix can be determined by finding a nonzero minor of maximal order.

At first glance this requires the calculation of a finite but possibly very large number of minors of the matrix.

The following theorem, however, allows a significant simplification to be made here.

Theorem. If a minor M of order r of a matrix is different from zero and all the minors bordering it are equal to zero, then the rank of the matrix is r.

Proof. It suffices to show that, under the conditions of the theorem, any subsystem of s rows of the matrix with s > r is linearly dependent (it will then follow that r is the maximum number of linearly independent rows of the matrix, and that any of its minors of order greater than r is zero).

Suppose the contrary: let these s rows be linearly independent. By the lemma on bordering minors, each of them is linearly expressed through the rows in which the minor M lies, and these rows, since M is different from zero, are linearly independent:

Now consider the following linear combination:

or

Using (3.3.7) and (3.3.8), we get

,

which contradicts the linear independence of these rows.

Consequently, our assumption is false, and therefore, under the conditions of the theorem, any s > r rows are linearly dependent. The theorem is proved.

Let us consider the rule for calculating the rank of a matrix based on this theorem: the method of bordering minors.

When calculating the rank of a matrix, one should move from minors of smaller orders to minors of larger orders. If a nonzero minor of order r has already been found, then only the minors of order (r+1) bordering it need to be calculated. If they are all zero, the rank of the matrix is r. This method is also used when we want not only to calculate the rank of the matrix but also to determine which columns (rows) make up a basis minor of the matrix.

Example. Calculate, by the method of bordering minors, the rank of the matrix

Solution. The second-order minor standing in the upper left corner of the matrix A is different from zero:

.

However, all the third-order minors bordering it are zero:

; ;
; ;
; .

Consequently, the rank of the matrix A is equal to two.

The first and second rows and the first and second columns of this matrix are basis ones. The remaining rows and columns are their linear combinations. In fact, the following equalities hold for the rows:

In conclusion, we note the validity of the following properties (a numerical check follows the list):

1) the rank of a product of matrices does not exceed the rank of each of the factors;

2) the rank of the product of an arbitrary matrix A on the right or on the left by a nondegenerate square matrix Q is equal to the rank of the matrix A.
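Both properties are easy to sanity-check numerically on random matrices (a throwaway experiment, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.integers(-3, 4, size=(4, 5)).astype(float)
B = rng.integers(-3, 4, size=(5, 3)).astype(float)
Q = rng.integers(-3, 4, size=(4, 4)).astype(float)
while abs(np.linalg.det(Q)) < 1e-9:              # make sure Q is nondegenerate
    Q = rng.integers(-3, 4, size=(4, 4)).astype(float)

r = np.linalg.matrix_rank
print(r(A @ B) <= min(r(A), r(B)))               # property 1: True
print(r(Q @ A) == r(A))                          # property 2: True
```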

Polynomial matrices

Definition. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in one variable with numerical coefficients.

Elementary transformations can be performed on λ-matrices. These include:

permutation of two rows (columns);

multiplication of a row (column) by a number different from zero;

addition to one row (column) of another row (column) multiplied by an arbitrary polynomial.

Two λ-matrices of the same size are called equivalent if one of them can be obtained from the other by a finite number of elementary transformations.

Example. Prove the equivalence of matrices

, .

1. Interchange the first and second columns of the matrix:

.

2. From the second row subtract the first multiplied by ():

.

3. Multiply the second row by (−1) and note that

.

4. Subtracting from the second column the first multiplied by the indicated polynomial, we get

.

The set of all λ-matrices of a given size is divided into disjoint classes of equivalent matrices. Matrices equivalent to each other form one class; non-equivalent ones belong to different classes.

Each class of equivalent matrices is characterized by a canonical, or normal, λ-matrix of the given size.

Definition. A canonical, or normal, λ-matrix of size m×n is a λ-matrix whose main diagonal consists of polynomials, where r is the smaller of the numbers m and n (r = min(m, n)); the polynomials that are not identically zero have leading coefficients equal to 1, and each subsequent polynomial is divisible by the previous one. All elements outside the main diagonal are equal to 0.

It follows from the definition that if there are polynomials of degree zero among these polynomials, they stand at the beginning of the main diagonal; if there are zeros, they stand at the end of the main diagonal.

The matrix of the previous example is canonical. The matrix

is also canonical.

Each class contains a unique canonical λ-matrix, i.e. each λ-matrix is equivalent to a unique canonical matrix, which is called the canonical form, or normal form, of the given matrix.

The polynomials standing on the main diagonal of the canonical form of a given λ-matrix are called the invariant factors of this matrix.

One of the methods for calculating the invariant factors consists in reducing the given λ-matrix to canonical form.

Thus, for the matrix of the previous example, the invariant factors are

From what has been said it follows that having the same set of invariant factors is a necessary and sufficient condition for the equivalence of λ-matrices.

Thus, reduction to canonical form comes down to determining the invariant factors.

E_k(λ) = D_k(λ) / D_(k−1)(λ),   k = 1, 2, ..., r;   D_0(λ) ≡ 1,

where r is the rank of the λ-matrix and D_k(λ) is the greatest common divisor of its minors of order k, taken with leading coefficient equal to 1.
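A sketch of this formula in exact polynomial arithmetic with sympy; the helper names and the test λ-matrix are made up for illustration and are not the matrices of the example that follows.

```python
from itertools import combinations
from sympy import Matrix, Poly, gcd, symbols, cancel

lam = symbols('lamda')

def minor_gcds(A: Matrix):
    """D_k: gcd of all k-th order minors, normalized to leading coefficient 1."""
    m, n = A.shape
    ds = []
    for k in range(1, min(m, n) + 1):
        minors = [A[list(r), list(c)].det()
                  for r in combinations(range(m), k)
                  for c in combinations(range(n), k)]
        g = minors[0]
        for mnr in minors[1:]:
            g = gcd(g, mnr)
        if g == 0:
            break                              # higher-order minors all vanish: rank reached
        ds.append(Poly(g, lam).monic().as_expr())
    return ds

def invariant_factors(A: Matrix):
    """E_k = D_k / D_(k-1), with D_0 = 1."""
    ds, prev, factors = minor_gcds(A), 1, []
    for d in ds:
        factors.append(cancel(d / prev))
        prev = d
    return factors

A = Matrix([[lam, 0, 0],
            [0, lam * (lam - 1), 0],
            [0, 0, lam * (lam - 1)**2]])
print(invariant_factors(A))   # E_1 = λ, E_2 = λ(λ−1), E_3 = λ(λ−1)², possibly in expanded form
```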

Example. Let the following λ-matrix be given:

.

Solution. The greatest common divisor of the first-order minors is obvious, i.e. D_1(λ) is found at once.

We determine the second-order minors:

, etc.

These data already suffice to draw a conclusion about D_2(λ) and, therefore, about the corresponding invariant factor.

Determine

,

Hence, .

Thus, the canonical form of the given matrix is the following λ-matrix:

.

A matrix polynomial is an expression of the form

A(λ) = A_0 λ^s + A_1 λ^(s−1) + ... + A_s,

where λ is a variable and A_0, A_1, ..., A_s are square matrices of order n with numerical elements.

If A_0 ≠ 0, then s is called the degree of the matrix polynomial, and n the order of the matrix polynomial.

Any square λ-matrix can be represented as a matrix polynomial. The converse statement is obviously also true, i.e. any matrix polynomial can be represented as a certain square λ-matrix.

The validity of these statements follows directly from the properties of operations on matrices. Let us illustrate them with the following examples:

Example. Represent the polynomial matrix

in the form of a matrix polynomial. This can be done as follows:

.

Example. The matrix polynomial

can be represented in the form of the following polynomial matrix (λ-matrix):

.

This interchangeability of matrix polynomials and polynomial matrices plays a significant role in the mathematical apparatus of methods of factor and component analysis.

Matrix polynomials of the same order can be added, subtracted and multiplied in the same way as ordinary polynomials with numerical coefficients. One should, however, remember that multiplication of matrix polynomials is, generally speaking, not commutative, since multiplication of matrices is not commutative.

Two matrix polynomials are called equal if their coefficients are equal, i.e. the matrices at the same powers of the variable coincide.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient at each power of the variable is equal to the sum (difference) of the coefficients at the same power in the given polynomials.

To multiply one matrix polynomial by another, each term of the first matrix polynomial is multiplied by each term of the second, the resulting products are added, and similar terms are combined.

The degree of a product of matrix polynomials is less than or equal to the sum of the degrees of the factors.

Operations on matrix polynomials can be carried out by means of operations on the corresponding λ-matrices.

To add (subtract) matrix polynomials, it suffices to add (subtract) the corresponding λ-matrices. The same applies to multiplication: the λ-matrix of a product of matrix polynomials is equal to the product of the λ-matrices of the factors.
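A small sketch of matrix-polynomial arithmetic via lists of coefficient matrices (everything here, including the sample data, is illustrative and not taken from the text):

```python
import numpy as np

def add(P, Q):
    """Sum of matrix polynomials given as lists [A_0, A_1, ...] of coefficient matrices (ascending powers)."""
    zero = np.zeros_like(P[0])
    return [(P[k] if k < len(P) else zero) + (Q[k] if k < len(Q) else zero)
            for k in range(max(len(P), len(Q)))]

def mul(P, Q):
    """Product of matrix polynomials; the coefficient matrices do not commute, so order matters."""
    result = [np.zeros_like(P[0]) for _ in range(len(P) + len(Q) - 1)]
    for i, A in enumerate(P):
        for j, B in enumerate(Q):
            result[i + j] = result[i + j] + A @ B
    return result

I = np.eye(2)
A = [np.array([[1., 2.], [0., 1.]]), I]   # A(l) = A_0 + I*l
B = [I, np.array([[0., 1.], [1., 0.]])]   # B(l) = I + B_1*l

print(add(A, B)[0])                        # coefficient at l^0 of A(l) + B(l)
print(mul(A, B)[1])                        # coefficient at l^1 of A(l)*B(l)
print(mul(B, A)[1])                        # generally different: multiplication is not commutative
```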

On the other hand, matrix polynomials A(λ) and B(λ) can be written in the form

where B_0 is a nondegenerate matrix.

When A(λ) is divided by B(λ), there exist a uniquely determined right quotient Q_1(λ) and right remainder R_1(λ),

A(λ) = Q_1(λ) B(λ) + R_1(λ),

where the degree of R_1(λ) is less than the degree of B(λ), or R_1(λ) = 0 (division without remainder); similarly, there exist a left quotient and a left remainder with the analogous properties.

The concepts of linear dependence and linear independence are defined for rows and columns in the same way. Therefore, the properties connected with these concepts that are formulated for columns are, of course, also valid for rows.

1. If the column system includes a zero column, it is linearly dependent.

2. If there are two equal columns in the column system, it is linearly dependent.

3. If there are two proportional columns in the column system, it is linearly dependent.

4. The system of columns is linearly dependent if and only if at least one of the columns is a linear combination of the rest.

5. Any columns included in the linearly independent system form a linearly independent subsystem.

6. The column system containing the linearly dependent subsystem is linearly dependent.

7. If a system of columns a_1, ..., a_k is linearly independent, and after adjoining a column a to it the system turns out to be linearly dependent, then the column a can be decomposed over the columns a_1, ..., a_k, and moreover uniquely, i.e. the decomposition coefficients are determined uniquely.

Let us prove, for example, the last property. Since the system of columns a_1, ..., a_k, a is linearly dependent, there exist numbers λ_1, ..., λ_k, λ, not all equal to 0, such that

λ_1 a_1 + ... + λ_k a_k + λ a = 0.

In this equality λ ≠ 0. In fact, if λ = 0, then

λ_1 a_1 + ... + λ_k a_k = 0,

and a non-trivial linear combination of the columns a_1, ..., a_k would be equal to the zero column, which contradicts the linear independence of the system. Therefore λ ≠ 0, and then a = (−λ_1/λ) a_1 + ... + (−λ_k/λ) a_k, i.e. the column a is a linear combination of the columns a_1, ..., a_k. It remains to show the uniqueness of such a representation. Suppose the contrary: let there be two decompositions a = α_1 a_1 + ... + α_k a_k and a = β_1 a_1 + ... + β_k a_k in which not all the coefficients are respectively equal to each other (for example, α_1 ≠ β_1). Then from the equality of these decompositions we obtain

(α_1 − β_1) a_1 + ... + (α_k − β_k) a_k = 0,

so a linear combination of the columns a_1, ..., a_k is equal to the zero column. Since not all of its coefficients are zero (at least α_1 − β_1 ≠ 0), this combination is nontrivial, which contradicts the linear independence of the columns a_1, ..., a_k. The obtained contradiction establishes the uniqueness of the decomposition.

Example 3.2. Prove that two nonzero columns a_1 and a_2 are linearly dependent if and only if they are proportional, i.e. a_2 = λ a_1.

Solution. Indeed, if the columns are linearly dependent, then there exist numbers α_1 and α_2, not both equal to zero, such that α_1 a_1 + α_2 a_2 = 0, and in this equality α_2 ≠ 0. Indeed, assuming α_2 = 0, we would get α_1 a_1 = 0 with α_1 ≠ 0, a contradiction, since the column a_1 is nonzero. Hence α_2 ≠ 0. Therefore there is a number λ = −α_1/α_2 such that a_2 = λ a_1. The necessity is proved.

Conversely, if a_2 = λ a_1, then λ a_1 − a_2 = 0. We have obtained a non-trivial linear combination of the columns equal to the zero column, so the columns are linearly dependent.

Example 3.3. Consider all possible systems formed from the columns

Investigate each system for linear dependence.
Solution. First consider the five systems containing one column. According to item 1 of Remarks 3.1, the systems consisting of a single nonzero column are linearly independent, and the system consisting of the single zero column is linearly dependent.

Consider systems containing two columns:

- each of the four systems containing the zero column is linearly dependent, since it contains a zero column (property 1);

- one system is linearly dependent, since its columns are proportional (property 3);

- each of the remaining five systems is linearly independent, since its columns are not proportional (see the statement of Example 3.2).

Consider systems containing three columns:

- each of the six systems containing the zero column is linearly dependent, since it contains a zero column (property 1);

- some of the systems are linearly dependent because they contain a linearly dependent subsystem (property 6);

- the remaining systems are linearly dependent, since their last column is linearly expressed through the others (property 4).

Finally, the systems of four or five columns are linearly dependent (by property 6).

Rank of a matrix

In this section we consider another important numerical characteristic of a matrix, connected with the extent to which its rows (columns) depend on one another.

Definition 14.10. Let A be a matrix of size m×n and let k be a number not exceeding the smaller of the numbers m and n: k ≤ min(m, n). Choose arbitrarily k rows of the matrix and k columns (the row numbers may differ from the column numbers). The determinant of the matrix made up of the elements at the intersections of the selected rows and columns is called a minor of order k of the matrix A.

Example 14.9. Let a matrix A be given.

A minor of the first order is any element of the matrix. Thus, for example, 2 is a first-order minor.

Second-order minors:

1. Take rows 1, 2 and columns 1, 2; we get the minor ;

2. Take rows 1, 3 and columns 2, 4; we get the minor ;

3. Take rows 2, 3 and columns 1, 4; we get the minor

Minors of the third order:

rows here can be selected only in one way,

1. Take columns 1, 3, 4, we get minor ;

2. Take columns 1, 2, 3, we get minor .

Proposition 14.23. If all the minors of order k of a matrix are equal to zero, then all minors of order k+1, if such exist, are also equal to zero.

Proof. Take an arbitrary minor of order k+1. This is the determinant of a matrix of order k+1. Expand it along the first row. Then in each term of the expansion one of the factors is a minor of order k of the original matrix. By hypothesis, all minors of order k are equal to zero. Therefore the minor of order k+1 is also zero.

Definition 14.11. The rank of a matrix is the largest of the orders of its nonzero minors. The rank of the zero matrix is taken to be zero.

There is no unified, standard notation for the rank of a matrix; we follow the notation of the textbook.

Example 14.10. The matrix of Example 14.9 has rank 3, since it has a nonzero minor of the third order, while fourth-order minors do not exist.

The rank of the matrix is equal to 1, since it has a nonzero first-order minor (an element of the matrix), while all its second-order minors are equal to zero.

The rank of a nondegenerate square matrix of order n is equal to n, since its determinant is a minor of order n and, for a nondegenerate matrix, is different from zero.

Proposition 14.24. When a matrix is transposed, its rank does not change.

Proof. Each minor of the original matrix, when transposed, becomes a minor of the transposed matrix, and conversely, any minor of the transposed matrix is a transposed minor of the original one. Under transposition a determinant (minor) does not change (Proposition 14.6). Therefore, if all minors of some order in the original matrix are zero, then all minors of that order in the transposed matrix are also zero. If a minor of some order in the original matrix differs from zero, then the transposed matrix has a minor of the same order different from zero. Hence the ranks coincide.

Definition 14.12. Let the rank of a matrix equal r. Then any nonzero minor of order r is called a basis minor.

Example 14.11. Let a matrix be given whose determinant is zero, since its third row is equal to the sum of the first two. The second-order minor located in the first two rows and the first two columns is different from zero. Consequently, the rank of the matrix is two, and the minor considered is a basis minor.

A basis minor is also the minor located, say, in the first and third rows and the first and third columns. Another basis minor is the one in the second and third rows and the first and third columns.

The minor in the first and second rows and the second and third columns is equal to zero and therefore is not a basis minor. The reader can check independently which other second-order minors are basis minors and which are not.
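Since the matrix of Example 14.11 was lost in extraction, here is the same kind of check on a stand-in 3×3 matrix whose third row is the sum of the first two (a hypothetical replacement, not the book's matrix):

```python
from itertools import combinations
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [0, 1, 1],
            [1, 3, 4]])          # third row = first row + second row, so det A = 0 and rank A = 2

r = A.rank()
print("rank =", r)

# Every nonzero minor of order r is a basis minor (Definition 14.12).
for rows in combinations(range(3), r):
    for cols in combinations(range(3), r):
        minor = A[list(rows), list(cols)].det()
        status = "basis minor" if minor != 0 else "not a basis minor"
        print(f"rows {rows}, cols {cols}: minor = {minor} -> {status}")
```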

Since the columns (rows) of a matrix can be added, multiplied by numbers, and combined into linear combinations, one can introduce the definitions of linear dependence and linear independence of a system of columns (rows) of the matrix. These definitions are similar to Definitions 10.14 and 10.15 for vectors.

Definition 14.13. A system of columns (rows) is called linearly dependent if there exists a set of coefficients, at least one of which is different from zero, such that the linear combination of the columns (rows) with these coefficients equals zero.

Definition 14.14. A system of columns (rows) is linearly independent if from the vanishing of a linear combination of these columns (rows) it follows that all coefficients of this linear combination are zero.

The following proposition, similar to Proposition 15.6, also holds.

Proposition 14.25. A system of columns (rows) is linearly dependent if and only if one of the columns (one of the rows) is a linear combination of the other columns (rows) of this system.

We now state the theorem called the basis minor theorem.

Theorem 14.2. Any column of a matrix is a linear combination of the columns passing through a basis minor.

The proof can be found in textbooks on linear algebra.

Proposition 14.26. The rank of a matrix is equal to the maximum number of its columns forming a linearly independent system.

Proof. Let the rank of the matrix equal r. Take the r columns passing through a basis minor. Suppose these columns form a linearly dependent system. Then one of the columns is a linear combination of the others. Therefore, in the basis minor one column is a linear combination of the other columns. By Propositions 14.15 and 14.18, this basis minor must be zero, which contradicts the definition of a basis minor. Consequently, the assumption that the columns passing through a basis minor are linearly dependent is false. Thus, the maximum number of columns forming a linearly independent system is greater than or equal to r.

Suppose now that some columns, more than r in number, form a linearly independent system. Compose a matrix from them. All minors of this matrix are minors of the original matrix; therefore its basis minor has order at most r. By the basis minor theorem, a column of this matrix that does not pass through its basis minor is a linear combination of the columns passing through it; that is, the columns of the composed matrix form a linearly dependent system. This contradicts the choice of the columns. Consequently, the maximum number of columns forming a linearly independent system cannot be greater than r, and hence it equals r, as stated.

Proposition 14.27. The rank of a matrix is equal to the maximum number of its rows forming a linearly independent system.

Proof. By Proposition 14.24, the rank of a matrix does not change under transposition. The rows of the matrix become its columns. By Proposition 14.26, the maximum number of columns of the transposed matrix (the former rows of the original one) forming a linearly independent system equals the rank of the matrix.

Proposition 14.28. If the determinant of a matrix is zero, then one of its columns (one of its rows) is a linear combination of the remaining columns (rows).

Proof. Let the order of the matrix equal n. The determinant is the only minor of the square matrix having order n. Since it is zero, the rank of the matrix is less than n. Consequently, the system of columns (rows) is linearly dependent, that is, one of the columns (one of the rows) is a linear combination of the remaining ones.

The results of Propositions 14.15, 14.18 and 14.28 give the following theorem.

Theorem 14.3. The determinant of a matrix is zero if and only if one of its columns (one of its rows) is a linear combination of the remaining columns (rows).

Finding the rank of a matrix by calculating all of its minors requires too much computational work. (The reader can check that a square matrix of order four has 36 second-order minors.) Therefore, another algorithm is used to find the rank. Its description requires some additional information.

Definition 14.15. We call the following actions on matrices elementary transformations:

1) permutation of rows or columns;
2) multiplication of a row or column by a number different from zero;
3) addition to one of the rows of another row multiplied by a number, or addition to one of the columns of another column multiplied by a number.
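As a quick illustration (an invented matrix), each of the three kinds of elementary transformations can be applied in turn and the rank observed to stay the same, in line with Proposition 14.29 below:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 3., 1.]])          # rank 2: the third row is the sum of the first two

B = A.copy()
B[[0, 2]] = B[[2, 0]]                 # 1) permute two rows
B[:, 1] *= 5.0                        # 2) multiply a column by a nonzero number
B[1] += -3.0 * B[0]                   # 3) add to one row another row multiplied by a number

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 2 2
```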

Proposition 14.29. Under elementary transformations the rank of a matrix does not change.

Proof. Let the rank of the matrix be equal to r, and consider the matrix obtained as a result of performing an elementary transformation.

Consider the permutation of rows. Let M be a minor of the original matrix; then the transformed matrix has a minor that either coincides with M or differs from it by a permutation of rows. Conversely, with any minor of the transformed matrix one can associate a minor of the original matrix that either coincides with it or differs from it in the order of its rows. Therefore, from the fact that all minors of a given order in the original matrix are zero, it follows that all minors of that order in the transformed matrix are also zero. And since the original matrix has a nonzero minor of order r, the transformed matrix also has a nonzero minor of order r; hence the ranks are equal.

Consider the multiplication of a row by a number different from zero. To a minor of the original matrix there corresponds a minor of the transformed matrix that either coincides with it or differs from it in only one row, which is obtained from the corresponding row of the original minor by multiplication by the nonzero number; in the latter case the new minor equals the old one multiplied by that number. In all cases the two minors are either simultaneously zero or simultaneously different from zero. Hence the ranks are equal.


