Galois Theory - E. Artin



NOTRE DAME MATHEMATICAL LECTURES Number 2

GALOIS THEORY Lectures delivered at the University of Notre Dame

DR. EMIL ARTIN Professor of Mathematics, Princeton University

Edited and supplemented with a Section on Applications

DR. ARTHUR N. MILGRAM Associate Professor of Mathematics, University of Minnesota

Second Edition With Additions and Revisions

Copyright 1942, 1944 UNIVERSITY OF NOTRE DAME

Second Printing, February 1964 Third Printing, July 1965

Fourth Printing, August 1966 New composition with corrections

Fifth Printing, March 1970 Sixth Printing, January 1971

Printed in the United States of America by NAPCO Graphic Arts, Inc., Milwaukee, Wisconsin

(The sections marked with an asterisk have been herein added to the content of the first edition)

I LINEAR ALGEBRA .......................................... 1
A. Fields .................................................. 1
B. Vector Spaces ........................................... 1
C. Homogeneous Linear Equations ............................ 2
D. Dependence and Independence of Vectors .................. 4
E. Non-homogeneous Linear Equations ........................ 9
F.* Determinants .......................................... 11

II FIELD THEORY ........................................... 21
A. Extension Fields ....................................... 21
B. Polynomials ............................................ 22
C. Algebraic Elements ..................................... 25
D. Splitting Fields ....................................... 30
E. Unique Decomposition of Polynomials
   into Irreducible Factors ............................... 33
F. Group Characters ....................................... 34
G.* Applications and Examples to Theorem 13 ............... 38
H. Normal Extensions ...................................... 41
I. Finite Fields .......................................... 49
J. Roots of Unity ......................................... 56
K. Noether Equations ...................................... 57
L. Kummer's Fields ........................................ 59
M. Simple Extensions ...................................... 64
N. Existence of a Normal Basis ............................ 66
O. Theorem on Natural Irrationalities ..................... 67

III APPLICATIONS By A. N. Milgram ......................... 69
A. Solvable Groups ........................................ 69
B. Permutation Groups ..................................... 70
C. Solution of Equations by Radicals ...................... 72
D. The General Equation of Degree n ....................... 74
E. Solvable Equations of Prime Degree ..................... 76
F. Ruler and Compass Construction ......................... 80


A. Fields. A field is a set of elements in which a pair of operations called multiplication and addition is defined, analogous to the operations of multiplication and addition in the real number system (which is itself an example of a field). In each field F there exist unique elements, called 0 and 1, which under the operations of addition and multiplication behave with respect to all the other elements of F exactly as their correspondents in the real number system. In two respects the analogy is not complete: 1) multiplication is not assumed to be commutative in every field, and 2) a field may have only a finite number of elements.

More exactly, a field is a set of elements which, under the above mentioned operation of addition, forms an additive abelian group, for which the elements, exclusive of zero, form a multiplicative group and, finally, in which the two group operations are connected by the distributive law. Furthermore, the product of 0 and any element is defined to be 0. If multiplication in the field is commutative, then the field is called a commutative field.
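The second departure from the real-number analogy, a field with only finitely many elements, is realized by the integers modulo a prime p. The following sketch (an illustration, not part of Artin's text; p = 7 is an arbitrary choice) checks the field laws numerically, computing multiplicative inverses by Fermat's little theorem:

```python
# Arithmetic in the finite field of integers modulo a prime p.
# Illustrative sketch only; p = 7 is an assumed example value.
p = 7

def add(a, b):
    return (a + b) % p

def mul(a, b):
    return (a * b) % p

def inv(a):
    # Fermat's little theorem gives a^(p-2) = a^(-1) (mod p) for a != 0.
    assert a % p != 0, "0 has no multiplicative inverse"
    return pow(a, p - 2, p)

# The non-zero elements form a multiplicative group: each has an inverse.
assert all(mul(a, inv(a)) == 1 for a in range(1, p))

# The distributive law connects the two group operations.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in range(p) for b in range(p) for c in range(p))
```

Since multiplication modulo p is commutative, this example is a commutative field; it does not illustrate the first departure (non-commutative multiplication).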

B. Vector Spaces. If V is an additive abelian group with elements A, B, ..., F a field with elements a, b, ..., and if for each a in F and A in V the product aA denotes an element of V, then V is called a (left) vector space over F if the following assumptions hold:

1) a(A + B) = aA + aB
2) (a + b)A = aA + bA
3) a(bA) = (ab)A
4) 1A = A

The reader may readily verify that if V is a vector space over F, then 0A = O and aO = O, where 0 is the zero element of F and O that of V.

For example, the first relation follows from the equations: aA = (a + 0)A = aA + 0A, whence adding −(aA) to both sides gives 0A = O.

Sometimes products between elements of F and V are written in the form Aa in which case V is called a right vector space over F to distinguish it from the previous case where multiplication by field elements is from the left. If, in the discussion, left and right vector spaces do not occur simultaneously, we shall simply use the term “vector space.”
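As a concrete illustration (not part of the text), pairs of rational numbers under componentwise operations form a left vector space over the rationals; the sketch below verifies the four assumptions for sample values using exact rational arithmetic:

```python
from fractions import Fraction as Fr

# V = F^2 with F the rational numbers: componentwise addition and
# left multiplication by field elements.  Illustrative sketch only.
def vadd(A, B):
    return (A[0] + B[0], A[1] + B[1])

def smul(a, A):
    return (a * A[0], a * A[1])

a, b = Fr(2, 3), Fr(-5, 7)
A, B = (Fr(1), Fr(4)), (Fr(3, 2), Fr(0))

assert smul(a, vadd(A, B)) == vadd(smul(a, A), smul(a, B))   # 1) a(A+B) = aA + aB
assert smul(a + b, A) == vadd(smul(a, A), smul(b, A))        # 2) (a+b)A = aA + bA
assert smul(a, smul(b, A)) == smul(a * b, A)                 # 3) a(bA) = (ab)A
assert smul(Fr(1), A) == A                                   # 4) 1A = A

# The derived relation 0A = O also checks out numerically.
assert smul(Fr(0), A) == (Fr(0), Fr(0))
```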

C. Homogeneous Linear Equations. If in a field F, a_ij, i = 1, 2, ..., m, j = 1, 2, ..., n, are m·n elements, it is frequently necessary to know conditions guaranteeing the existence of elements in F such that the following equations are satisfied:

     a_11 x_1 + a_12 x_2 + ... + a_1n x_n = 0
(1)  . . . . . . . . . . . . . . . . . . . .
     a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = 0

The reader will recall that such equations are called linear homogeneous equations, and a set of elements x_1, x_2, ..., x_n of F for which all the above equations are true is called a solution of the system. If not all of the elements x_1, x_2, ..., x_n are 0, the solution is called non-trivial; otherwise, it is called trivial.

THEOREM 1. A system of linear homogeneous equations always has a non-trivial solution if the number of unknowns exceeds the number of equations.

The proof of this follows the method familiar to most high school students, namely, successive elimination of unknowns. If no equations in n > 0 variables are prescribed, then our unknowns are unrestricted and we may set them all = 1.

We shall proceed by complete induction. Let us suppose that each system of k equations in more than k unknowns has a non-trivial solution when k < m. In the system of equations (1) we assume that n > m, and denote the expression a_i1 x_1 + ... + a_in x_n by L_i, i = 1, 2, ..., m. We seek elements x_1, ..., x_n, not all 0, such that L_1 = L_2 = ... = L_m = 0. If a_ij = 0 for each i and j, then any choice of x_1, ..., x_n will serve as a solution. If not all a_ij are 0, then we may assume that a_11 ≠ 0, for the order in which the equations are written or in which the unknowns are numbered has no influence on the existence or non-existence of a simultaneous solution. We can find a non-trivial solution to our given system of equations if and only if we can find a non-trivial solution to the following system:

L_1 = 0
L_2 − a_21 a_11^(-1) L_1 = 0
. . . . . . . . . . . . . .
L_m − a_m1 a_11^(-1) L_1 = 0

For, if x_1, ..., x_n is a solution of these latter equations then, since L_1 = 0, the second term in each of the remaining equations is 0 and, hence, L_1 = L_2 = ... = L_m = 0. Conversely, if (1) is satisfied, then the new system is clearly satisfied. The reader will notice that the new system was set up in such a way as to "eliminate" x_1 from the last m−1 equations. Furthermore, if a non-trivial solution of the last m−1 equations, when viewed as equations in x_2, ..., x_n, exists, then taking x_1 = −a_11^(-1)(a_12 x_2 + a_13 x_3 + ... + a_1n x_n) would give us a solution to the whole system. However, the last m−1 equations have a solution by our inductive assumption, from which the theorem follows.

Remark: If the linear homogeneous equations had been written in the form Σ_j x_j a_ij = 0, i = 1, 2, ..., m, the above theorem would still hold, and with the same proof, although with the order in which terms are written changed in a few instances.
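The proof is in effect an algorithm: permute equations and unknowns so that a_11 ≠ 0, eliminate x_1 from the remaining equations, solve the smaller system by induction, and back-substitute. The sketch below (an illustration, not from the text; the coefficient matrix in the test is a made-up example) carries this out over the rationals:

```python
from fractions import Fraction as Fr

def nontrivial_solution(A, n):
    """Non-trivial solution of a homogeneous system with m rows and
    n > m unknowns, following the elimination argument of Theorem 1."""
    A = [[Fr(a) for a in row] for row in A]
    m = len(A)
    if m == 0:
        return [Fr(1)] * n               # no equations: set all unknowns = 1
    if all(a == 0 for row in A for a in row):
        return [Fr(1)] * n               # every coefficient 0: any choice works
    # Reorder equations and unknowns so the pivot a_11 is non-zero.
    i0 = next(i for i, row in enumerate(A) if any(a != 0 for a in row))
    j0 = next(j for j, a in enumerate(A[i0]) if a != 0)
    A[0], A[i0] = A[i0], A[0]
    for row in A:
        row[0], row[j0] = row[j0], row[0]
    pivot = A[0][0]
    # Eliminate x_1 from the last m-1 equations: L_i - a_i1 a_11^-1 L_1 = 0.
    reduced = [[row[j] - row[0] / pivot * A[0][j] for j in range(1, n)]
               for row in A[1:]]
    tail = nontrivial_solution(reduced, n - 1)    # inductive step
    # Back-substitution: x_1 = -a_11^-1 (a_12 x_2 + ... + a_1n x_n).
    x1 = -sum(A[0][j + 1] * tail[j] for j in range(n - 1)) / pivot
    x = [x1] + tail
    x[0], x[j0] = x[j0], x[0]            # undo the reordering of unknowns
    return x
```

The recursion mirrors the induction on the number of equations, and the final swap undoes the renumbering of unknowns that made a_11 the pivot.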

D. Dependence and Independence of Vectors. In a vector space V over a field F, the vectors A_1, ..., A_n are called dependent if there exist elements x_1, ..., x_n of F, not all 0, such that x_1 A_1 + x_2 A_2 + ... + x_n A_n = O. If the vectors A_1, ..., A_n are not dependent, they are called independent.

The dimension of a vector space V over a field F is the maximum number of independent elements in V. Thus, the dimension of V is n if there are n independent elements in V, but no set of more than n independent elements.

A system A_1, ..., A_m of elements in V is called a generating system of V if each element A of V can be expressed linearly in terms of A_1, ..., A_m, i.e., A = Σ_(i=1)^m a_i A_i for a suitable choice of a_i, i = 1, ..., m, in F.

THEOREM 2. In any generating system the maximum number of independent vectors is equal to the dimension of the vector space.

Let A_1, ..., A_m be a generating system of a vector space V of dimension n. Let r be the maximum number of independent elements in the generating system. By a suitable reordering of the generators we may assume A_1, ..., A_r independent. By the definition of dimension it follows that r ≤ n. For each j, the vectors A_1, ..., A_r, A_(r+j) are dependent, and in the relation

a_1 A_1 + a_2 A_2 + ... + a_r A_r + a_(r+j) A_(r+j) = O

expressing this, a_(r+j) ≠ 0, for the contrary would assert the dependence of A_1, ..., A_r. Thus,

A_(r+j) = −a_(r+j)^(-1) [ a_1 A_1 + a_2 A_2 + ... + a_r A_r ].

It follows that A_1, ..., A_r is also a generating system, since in the linear relation for any element of V the terms involving A_(r+j), j ≠ 0, can all be replaced by linear expressions in A_1, ..., A_r.

Now, let B_1, ..., B_t be any system of vectors in V where t > r; then there exist a_ij such that B_j = Σ_(i=1)^r a_ij A_i, j = 1, 2, ..., t, since the A_i's form a generating system. If we can show that B_1, ..., B_t are dependent, this will give us r ≥ n, and the theorem will follow from this together with the previous inequality r ≤ n. Thus, we must exhibit the existence of a non-trivial solution out of F of the equation

x_1 B_1 + x_2 B_2 + ... + x_t B_t = O.

To this end, it will be sufficient to choose the x_j's so as to satisfy the linear equations Σ_(j=1)^t x_j a_ij = 0, i = 1, 2, ..., r, since these expressions will be the coefficients of A_i when in Σ_(j=1)^t x_j B_j the B_j's are replaced by Σ_(i=1)^r a_ij A_i and terms are collected. A solution to the equations Σ_(j=1)^t x_j a_ij = 0, i = 1, 2, ..., r, always exists by Theorem 1, since t > r.
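Over the rationals, the maximum number of independent vectors in a generating system can be found by the familiar row-reduction process. The sketch below (an illustration, not from the text, with made-up generators) confirms Theorem 2 on a spanning set of a 2-dimensional subspace of F^3:

```python
from fractions import Fraction as Fr

def max_independent(vectors):
    """Maximum number of independent vectors among the given row vectors,
    computed by Gaussian elimination over the rationals."""
    rows = [[Fr(a) for a in v] for v in vectors]
    r = 0                                    # number of pivots found so far
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue                         # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# A generating system with a redundant generator: the third vector is
# the sum of the first two, so at most two are independent, and by
# Theorem 2 the subspace they generate has dimension 2.
gens = [(1, 0, 2), (0, 1, 1), (1, 1, 3)]
assert max_independent(gens) == 2
```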

Remark: Any n independent vectors A_1, ..., A_n in an n-dimensional vector space form a generating system. For any vector A, the vectors A, A_1, ..., A_n are dependent, and the coefficient of A in the dependence relation cannot be zero, since A_1, ..., A_n are independent. Solving for A in terms of A_1, ..., A_n exhibits A_1, ..., A_n as a generating system.

A subset of a vector space is called a subspace if it is a subgroup of the vector space and if, in addition, the product of any element of the subset by any element of the field is again in the subset.

If A_1, ..., A_s are elements of a vector space V, then the set of all elements of the form a_1 A_1 + ... + a_s A_s clearly forms a subspace of V. It is also evident, from the definition of dimension, that the dimension of any subspace never exceeds the dimension of the whole vector space.

An s-tuple of elements (a_1, ..., a_s) in a field F will be called a row vector. The totality of such s-tuples forms a vector space if we define

α) (a_1, a_2, ..., a_s) = (b_1, b_2, ..., b_s) if and only if a_i = b_i, i = 1, ..., s,
β) (a_1, a_2, ..., a_s) + (b_1, b_2, ..., b_s) = (a_1 + b_1, a_2 + b_2, ..., a_s + b_s),
γ) b(a_1, a_2, ..., a_s) = (ba_1, ba_2, ..., ba_s), for b an element of F.

When the s-tuples are written vertically, they will be called column vectors.

THEOREM 3. The row (column) vector space F^n of all n-tuples from a field F is a vector space of dimension n over F.

The n elements
