1. Systems of Linear Equations
A linear equation in the variables x1, ..., xn is an equation that can be written in the form:
a1x1 + a2x2 + ... + anxn = b
where a1, ..., an are the coefficients.
A system of linear equations or linear system is a collection of one or more linear equations involving the same variables.
The set of all possible solutions is called the solution set of the linear system.
Two linear systems are called equivalent if they have the same solution set.
The essential information of a linear system can be recorded compactly in a rectangular array called a matrix.
If m and n are positive integers, an m×n matrix is a rectangular array of numbers with m rows and n columns.
Matrix notation likewise follows the order rows first, then columns.
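As a small sketch (plain Python lists; the variable names are illustrative), a linear system can be stored as its augmented matrix, one row per equation with the right-hand side appended:

```python
# Augmented matrix [A | b] for the system
#   1*x1 + 2*x2 = 3
#   4*x1 + 5*x2 = 6
# Each inner list is one row; the last entry is the right-hand side b.
augmented = [
    [1, 2, 3],
    [4, 5, 6],
]

rows = len(augmented)       # m = 2 equations
cols = len(augmented[0])    # n + 1 = 3 (two variables plus b)
```

The m×n coefficient matrix is recovered by dropping the last column of each row.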
2. Row Reduction and Echelon Forms
Nonzero row or column in a matrix means a row or column that contains at least one nonzero entry.
Leading entry of a row refers to the leftmost nonzero entry.
A rectangular matrix is in echelon form if it has the following three properties:
1. All nonzero rows are above any rows of all zeros
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zeros.
An echelon matrix should have the following properties:
1. All nonzero rows lie above any all-zero rows.
2. The leftmost nonzero entry of a row lies further right than the leading entry of the row above it.
3. In the column of a leading entry, all entries below it are 0.
4. Going down the rows, the leading zeros increase.
If a matrix in echelon form satisfies the following additional conditions, then it is in reduced echelon form
1. The leading entry in each nonzero row is 1
2. Each leading 1 is the only nonzero entry in its column
Theorem 1: Uniqueness of the Reduced Echelon Form
Each matrix is row equivalent to one and only one reduced echelon matrix.
(By row operations, every matrix can be transformed into reduced echelon form, and that form is unique.)
A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A.
A pivot is a nonzero number in a pivot position that is used as needed to create zeros via row operations.
A pivot column is a column of A that contains a pivot position.
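The row-reduction algorithm behind Theorem 1 can be sketched in pure Python, using exact fractions so no rounding occurs (a sketch, not a library routine; `rref` is an illustrative name):

```python
from fractions import Fraction

def rref(matrix):
    """Row-reduce a matrix (list of row lists) to reduced echelon form."""
    m = [[Fraction(x) for x in row] for row in matrix]
    nrows, ncols = len(m), len(m[0])
    pivot_row = 0
    for col in range(ncols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, nrows) if m[r][col] != 0), None)
        if pr is None:
            continue                                  # no pivot in this column
        m[pivot_row], m[pr] = m[pr], m[pivot_row]     # interchange rows
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]  # scale leading entry to 1
        for r in range(nrows):                        # zero out the rest of the column
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == nrows:
            break
    return m

result = rref([[1, 2, 3], [4, 5, 6]])
# result == [[1, 0, -1], [0, 1, 2]], i.e. x1 = -1, x2 = 2
```

Running `rref` on any row-equivalent version of the same matrix yields the same output, which is what the uniqueness theorem asserts.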
3. Vector Equations
A matrix with only one column is called a vector.
For example, the set of all vectors with two entries is denoted by ℝ^2, where ℝ stands for the real numbers that appear as entries in the vectors, and the exponent 2 indicates that each vector contains two entries.
Algebraic Properties of ℝ^n
For all u, v, w in ℝ^n and all scalars c and d:
1. u+v = v+u
2. (u+v) + w = u +(v+w)
3. u+0 = 0+u = u
4. u+(−u) = −u+u = 0
5. c(u+v) = cu+cv
6. (c+d)u = cu+du
7. c(du) = (cd)u
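These properties can be spot-checked on sample vectors (a sketch with vectors as plain Python lists; `add` and `scale` are illustrative helper names, not library functions):

```python
def add(u, v):
    """Entrywise vector addition."""
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    """Multiply a vector by a scalar."""
    return [c * a for a in u]

u, v = [1, 2, 3], [4, 5, 6]
c, d = 2, 3

assert add(u, v) == add(v, u)                                 # 1. u+v = v+u
assert scale(c, add(u, v)) == add(scale(c, u), scale(c, v))   # 5. c(u+v) = cu+cv
assert scale(c + d, u) == add(scale(c, u), scale(d, u))       # 6. (c+d)u = cu+du
assert scale(c, scale(d, u)) == scale(c * d, u)               # 7. c(du) = (cd)u
```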
4. The Matrix Equation Ax=b
Definition 1:
If A is an m×n matrix, with columns a1, ..., an, and if x is in ℝ^n, then the product of A and x, denoted by Ax, is the linear combination of the columns of A using the corresponding entries in x as weights.
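The definition can be sketched directly: accumulate x_j times column a_j for each j (pure Python lists; `matvec` is an illustrative name):

```python
def matvec(A, x):
    """Compute Ax as the linear combination of the columns of A,
    weighted by the entries of x (the definition above)."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):          # for each column a_j ...
        for i in range(m):      # ... add x_j * (a_j)_i into the result
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2],
     [3, 4],
     [5, 6]]
x = [10, 1]
# 10 * [1, 3, 5] + 1 * [2, 4, 6] = [12, 34, 56]
assert matvec(A, x) == [12, 34, 56]
```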
Theorem 3:
If A is an m×n matrix, with columns a1, ..., an, and if b is in ℝ^m, the matrix equation Ax=b has the same solution set as the vector equation x1a1 + x2a2 + ... + xnan = b, which, in turn, has the same solution set as the system of linear equations whose augmented matrix is [a1 a2 ... an b].
Theorem 4:
Let A be an m×n matrix. Then the following statements are logically equivalent. That is, for a particular A, either they are all true or they are all false.
1. For each b in ℝ^m, the equation Ax=b has a solution.
2. Each b in ℝ^m is a linear combination of the columns of A.
3. The columns of A span ℝ^m.
4. A has a pivot position in every row.
5. Solution Sets of Linear Systems
A system of linear equations is said to be homogeneous if it can be written in the form Ax=0, where A is an m×n matrix and 0 is the zero vector in ℝ^m. Such a system always has the solution x=0; this zero solution is usually called the trivial solution.
Theorem 6:
Suppose the equation Ax=b is consistent for some given b, and let p be a solution. Then the solution set of Ax=b is the set of all vectors of the form w=p+vh, where vh is any solution of the homogeneous equation Ax=0.
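Theorem 6 can be checked on a concrete example (illustrative numbers, not from the source): for A = [[1, 2]] and b = [4], p = (4, 0) is one particular solution, and every solution of Ax = 0 has the form vh = t·(−2, 1).

```python
def matvec(A, x):
    """Matrix-vector product, row by row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A, b = [[1, 2]], [4]
p = [4, 0]                      # a particular solution of Ax = b
for t in range(-3, 4):
    vh = [-2 * t, t]            # a solution of the homogeneous equation Ax = 0
    assert matvec(A, vh) == [0]
    w = [pi + vi for pi, vi in zip(p, vh)]
    assert matvec(A, w) == b    # w = p + vh still solves Ax = b
```

Geometrically, the solution set of Ax = b is the solution set of Ax = 0 translated by p.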
6. Applications of Linear Systems
Omitted.
7. Linear Independence
Definition:
An indexed set of vectors {v1, ..., vp} in ℝ^n is said to be linearly independent if the vector equation x1v1 + x2v2 + ... + xpvp = 0 has only the trivial solution.
The set {v1, ..., vp} is said to be linearly dependent if there exist weights c1, ..., cp, not all zero, such that c1v1 + c2v2 + ... + cpvp = 0.
Theorem 7:
An indexed set S={v1,...vp} of two or more vectors is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others. In fact, if S is linearly dependent and v1≠0, then some vj is a linear combination of the preceding vectors, v1,...,vj–1.
Theorem 8:
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, ..., vp} in ℝ^n is linearly dependent if p > n.
Theorem 9:
If a set S = {v1, ..., vp} in ℝ^n contains the zero vector, then the set is linearly dependent.
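The definitions and Theorem 8 can be combined into a sketch of an independence test: put the vectors in as columns, row-reduce, and check whether every column has a pivot (exact fractions; `num_pivots` and `independent` are illustrative names):

```python
from fractions import Fraction

def num_pivots(rows):
    """Count pivots via forward elimination with exact fractions."""
    m = [[Fraction(x) for x in r] for r in rows]
    nrows, ncols = len(m), len(m[0])
    pivots, pr = 0, 0
    for col in range(ncols):
        r = next((i for i in range(pr, nrows) if m[i][col] != 0), None)
        if r is None:
            continue
        m[pr], m[r] = m[r], m[pr]                 # interchange rows
        for i in range(pr + 1, nrows):            # eliminate below the pivot
            f = m[i][col] / m[pr][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[pr])]
        pivots += 1
        pr += 1
    return pivots

def independent(vectors):
    """{v1..vp} in R^n is independent iff the matrix with these
    vectors as columns has a pivot in every column (p pivots)."""
    n, p = len(vectors[0]), len(vectors)
    if p > n:                                     # Theorem 8 shortcut
        return False
    cols = [list(r) for r in zip(*vectors)]       # vectors become columns
    return num_pivots(cols) == p

assert independent([[1, 0], [0, 1]])              # standard basis of R^2
assert not independent([[1, 2], [2, 4]])          # v2 = 2*v1
assert not independent([[1, 0], [0, 1], [1, 1]])  # 3 vectors in R^2
```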
8. Introduction to Linear Transformations
A transformation (or a function or a mapping) T from ℝ^n to ℝ^m is a rule that assigns to each vector x in ℝ^n a vector T(x) in ℝ^m.
Definition:
A transformation (or mapping) T is linear if:
1. T(u+v) = T(u) + T(v) for all u, v in the domain of T;
2. T(cu) = cT(u) for all u and all scalars c.
9. The Matrix of a Linear Transformation
Theorem 10:
Let T: ℝ^n → ℝ^m be a linear transformation. Then there exists a unique matrix A such that:
T(x) = Ax for all x in ℝ^n.
In fact, A is the m×n matrix whose jth column is the vector T(ej), where ej is the jth column of the identity matrix in ℝ^n:
A = [T(e1) ... T(en)]. This matrix is called the standard matrix for the linear transformation T.
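Theorem 10 gives a recipe: evaluate T on each standard basis vector and use the results as columns. A sketch with an illustrative linear map T: ℝ^2 → ℝ^3 (not from the source):

```python
def T(x):
    """An example linear transformation from R^2 to R^3."""
    x1, x2 = x
    return [x1 + x2, 2 * x1, 3 * x2]

def matvec(A, x):
    """Matrix-vector product, row by row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

basis = [[1, 0], [0, 1]]                  # e1, e2: columns of the 2x2 identity
columns = [T(e) for e in basis]           # T(e1) = [1,2,0], T(e2) = [1,0,3]
A = [list(row) for row in zip(*columns)]  # place them side by side as columns

assert A == [[1, 1], [2, 0], [0, 3]]      # the standard matrix of T
assert matvec(A, [5, 7]) == T([5, 7])     # T(x) = Ax for this sample x
```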