User:JasonWiki/Drafts/201710/Vector Space


Basis and dimension

A vector v in R2 (blue) expressed in terms of different bases: using the standard basis of R2: v = xe1 + ye2 (black), and using a different, non-orthogonal basis: v = f1 + f2 (red).

Independent set of vectors

A set S = {v1, v2, ..., vk} of finitely many distinct vectors of a vector space V is said to be linearly independent if the equation

a1v1 + a2v2 + ... + akvk = 0

can only be satisfied by a1 = a2 = ... = ak = 0.
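
As an illustrative sketch (the vectors below are arbitrary examples, not taken from the article), linear independence of a concrete finite set of vectors in R3 can be checked numerically: the vectors are independent exactly when the matrix having them as columns has rank equal to the number of vectors.

    import numpy as np

    # Three example vectors in R^3 (arbitrary choices for illustration).
    v1 = np.array([1.0, 0.0, 2.0])
    v2 = np.array([0.0, 1.0, 1.0])
    v3 = np.array([1.0, 1.0, 3.0])   # v3 = v1 + v2, so {v1, v2, v3} is dependent

    # Stack the vectors as columns; they are linearly independent
    # exactly when the rank equals the number of vectors.
    A = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(A) == A.shape[1])   # False: dependent set

    B = np.column_stack([v1, v2])
    print(np.linalg.matrix_rank(B) == B.shape[1])   # True: v1, v2 are independent
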
Bases allow one to represent vectors by a sequence of scalars called coordinates or components. A basis is a (finite or infinite) set B = {bi}i∈I of vectors bi, for convenience often indexed by some index set I, that spans the whole space and is linearly independent. "Spanning the whole space" means that any vector v can be expressed as a finite sum (called a linear combination) of the basis elements:

v = a1bi1 + a2bi2 + ... + anbin     (1)

where the ak are scalars, called the coordinates (or the components) of the vector v with respect to the basis B, and the bik (k = 1, ..., n) are elements of B. Linear independence means that the coordinates ak are uniquely determined for any vector in the vector space.
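
To make the coordinate computation concrete, here is a minimal numerical sketch: finding the coordinates a1, a2 of a vector v with respect to a basis {f1, f2} amounts to solving the linear system a1f1 + a2f2 = v. The basis vectors below are hypothetical choices, not the ones in the figure.

    import numpy as np

    # A non-orthogonal basis of R^2 (hypothetical example vectors).
    f1 = np.array([1.0, 1.0])
    f2 = np.array([1.0, -1.0])
    v  = np.array([3.0, 1.0])

    # The coordinates a with respect to {f1, f2} solve F @ a = v,
    # where F has the basis vectors as columns.
    F = np.column_stack([f1, f2])
    a = np.linalg.solve(F, v)
    print(a)                                    # [2. 1.], i.e. v = 2*f1 + 1*f2
    print(np.allclose(a[0]*f1 + a[1]*f2, v))    # True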

For example, the coordinate vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), to en = (0, 0, ..., 0, 1), form a basis of Fn, called the standard basis, since any vector (x1, x2, ..., xn) can be uniquely expressed as a linear combination of these vectors:

(x1, x2, ..., xn) = x1(1, 0, ..., 0) + x2(0, 1, 0, ..., 0) + ... + xn(0, ..., 0, 1) = x1e1 + x2e2 + ... + xnen.

The corresponding coordinates x1, x2, ..., xn are just the Cartesian coordinates of the vector.
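
For the standard basis this computation is trivial, since the matrix whose columns are e1, ..., en is the identity; solving for the coordinates simply returns the entries of the vector. A minimal check (illustrative, not from the article):

    import numpy as np

    # The standard basis vectors e1, ..., en are the columns of the identity matrix.
    n = 4
    E = np.eye(n)
    x = np.array([3.0, -1.0, 0.5, 2.0])   # an arbitrary vector in R^4

    # Solving E @ a = x returns x itself: the Cartesian coordinates of x
    # are its coordinates with respect to the standard basis.
    a = np.linalg.solve(E, x)
    print(np.allclose(a, x))   # True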

Every vector space has a basis. This follows from Zorn's lemma, an equivalent formulation of the Axiom of Choice.[1] Given the other axioms of Zermelo–Fraenkel set theory, the existence of bases is equivalent to the axiom of choice.[2] The ultrafilter lemma, which is weaker than the axiom of choice, implies that all bases of a given vector space have the same number of elements, or cardinality (cf. Dimension theorem for vector spaces).[3] It is called the dimension of the vector space, denoted by dim V. If the space is spanned by finitely many vectors, the above statements can be proven without such fundamental input from set theory.[4]

The dimension of the coordinate space Fn is n, by the basis exhibited above. The dimension of the polynomial ring F[x] introduced above is countably infinite: a basis is given by 1, x, x2, ... A fortiori, the dimension of more general function spaces, such as the space of functions on some (bounded or unbounded) interval, is infinite.[nb 1] Under suitable regularity assumptions on the coefficients involved, the dimension of the solution space of a homogeneous ordinary differential equation equals the degree of the equation.[5] For example, the solution space for the above equation is generated by ex and xex. These two functions are linearly independent over R, so the dimension of this space is two, as is the degree of the equation.
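
The two-dimensional solution space can also be exhibited symbolically. The equation itself is not restated in this section; the sketch below assumes the second-order equation f''(x) - 2f'(x) + f(x) = 0, which is the equation whose solutions are spanned by ex and xex.

    import sympy as sp

    x = sp.symbols('x')
    f = sp.Function('f')

    # Assumed equation: f''(x) - 2 f'(x) + f(x) = 0 (inferred from the stated
    # solutions e^x and x*e^x; the article introduces the equation earlier).
    ode = sp.Eq(f(x).diff(x, 2) - 2*f(x).diff(x) + f(x), 0)
    print(sp.dsolve(ode, f(x)))   # Eq(f(x), (C1 + C2*x)*exp(x)): a two-parameter
                                  # family, so the solution space has dimension two

    # Linear independence of e^x and x*e^x over R: their Wronskian never vanishes.
    g1, g2 = sp.exp(x), x*sp.exp(x)
    W = sp.Matrix([[g1, g2], [g1.diff(x), g2.diff(x)]]).det()
    print(sp.simplify(W))         # exp(2*x), which is nonzero for every x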

A field extension over the rationals Q can be thought of as a vector space over Q (by defining vector addition as field addition, defining scalar multiplication as field multiplication by elements of Q, and otherwise ignoring the field multiplication). The dimension (or degree) of the field extension Q(α) over Q depends on α. If α satisfies some polynomial equation

qnαn + qn-1αn-1 + ... + q1α + q0 = 0

with rational coefficients qn, ..., q0 (in other words, if α is algebraic), the dimension is finite. More precisely, it equals the degree of the minimal polynomial having α as a root.[6] For example, the complex numbers C are a two-dimensional real vector space, generated by 1 and the imaginary unit i. The latter satisfies i2 + 1 = 0, an equation of degree two. Thus, C is a two-dimensional R-vector space (and, as any field, one-dimensional as a vector space over itself, C). If α is not algebraic, the dimension of Q(α) over Q is infinite. For instance, for α = π there is no such equation, in other words π is transcendental.[7]
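
The degree computations in this example can be reproduced symbolically. The sketch below (an illustration, not part of the article) uses SymPy's minimal_polynomial to recover the minimal polynomials of the imaginary unit i and of the square root of 2 over Q; their degrees equal the dimensions of Q(i) and Q(sqrt(2)) as vector spaces over Q, mirroring the fact that C = R(i) is two-dimensional over R.

    import sympy as sp

    x = sp.symbols('x')

    # Minimal polynomial of i over Q: x**2 + 1, degree two,
    # so Q(i) is a two-dimensional vector space over Q (basis 1, i).
    p_i = sp.minimal_polynomial(sp.I, x)
    print(p_i, sp.degree(p_i, x))           # x**2 + 1  2

    # Minimal polynomial of sqrt(2) over Q: x**2 - 2, again degree two,
    # so the dimension of Q(sqrt(2)) over Q is 2 (basis 1, sqrt(2)).
    p_s = sp.minimal_polynomial(sp.sqrt(2), x)
    print(p_s, sp.degree(p_s, x))           # x**2 - 2  2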

  1. ^ Roman 2005, Theorem 1.9, p. 43
  2. ^ Blass 1984
  3. ^ Halpern 1966, pp. 670–673
  4. ^ Artin 1991, Theorem 3.3.13
  5. ^ Braun 1993, Th. 3.4.5, p. 291
  6. ^ Stewart 1975, Proposition 4.3, p. 52
  7. ^ Stewart 1975, Theorem 6.5, p. 74

