More algebraic geometry questions

  • #1
Hurkyl
Is there any useful analogue of integration in algebraic geometry?
 
  • #2
Integration leads to the theory of de Rham cohomology, i.e. the vector space of closed forms modulo exact forms (e.g., 1-forms for which path integration is locally path independent, modulo those for which it is globally path independent). Thus cohomology is the analogue of integration in any theory, in particular in algebraic geometry. Indeed, as Hermann Weyl said in his book "The Concept of a Riemann Surface", cohomology is an abstract form of integration. In the standard text "Intersection Theory", Fulton even uses an integration sign to denote evaluating a Todd class on the fundamental cycle of a variety.
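For reference, here is the first de Rham group just described, written out in symbols (this is only the standard definition restated, nothing specific to the discussion):

H^1_{dR}(X) = \ker\big(d : \Omega^1(X) \to \Omega^2(X)\big) \big/ \operatorname{im}\big(d : \Omega^0(X) \to \Omega^1(X)\big) = \{\text{closed 1-forms}\} \big/ \{\text{exact 1-forms}\}.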
There are many different theories of cohomology in algebraic geometry, usually with sheaf coefficients, but with various different topologies, such as Zariski topology, or etale topology, in which an "open set" is a covering map onto an actual open set in the space considered.
There are several constructions of cohomology, the most intuitive being Cech cohomology used by Serre in his famous paper Faisceaux algebriques coherents, and more generally, derived functor cohomology, introduced by Grothendieck, and discussed in his famous Tohoku paper, "Sur quelques points d'algebre homologique".

A comprehensive source is the book of Godement, Topologie algebrique, or the book of Hartshorne on algebraic geometry, but I like the short book by my friend George Kempf, Algebraic Varieties. George was a laconic master of the theories of Grothendieck, and managed to provide a very thorough but concise introduction in about 140 pages to algebraic geometry including sheaf cohomology, both derived functor and Cech versions, starting from absolute zero.
 
  • #3
Another wonderful book also by George Kempf is "Abelian Integrals", available from the Universidad Nacional Autónoma de México (UNAM), in Mexico City. Write to:

Sra. Gabriela Sangines
Depto. de Publicaciones
Instituto de Matematicas, UNAM
Circuito Exterior
Ciudad Universitaria
Mexico 04510 D.F.

and send $21 for each book including shipping.

for Abelian Integrals, by George Kempf, no. 13, in the series
MONOGRAFIAS DEL INSTITUTO DE MATEMATICAS, of UNAM.

This is a book on an abstract algebraic treatment of Jacobian varieties of curves, using Grothendieck's ideas, i.e. cohomology and classifying spaces of line bundles. Even the title tells you that you are getting the algebraic version of integration. It is unmatched by any other book in existence, to my knowledge. It was written by a master, for an audience of students and faculty in Mexico. Every effort was made to render it understandable to beginners. This book is not well known, because it is not available from a major publisher. This is a real bargain, and an invaluable treatment of its subject. I know about it because I knew George and also have friends on the faculty at UNAM.

My most recent (joint) research project was to use some of the ideas in this book to extend its results on the determinantal structure of Jacobian theta divisors, to a more general class of abelian varieties known as Prym varieties.

You have asked a question I have genuinely enjoyed answering.

best wishes,

roy
 
  • #4
Integration leads to the theory of de Rham cohomology

Sigh, my initial thoughts on the topic led me to what little I know about homology, so I was afraid this was going to be your answer. :frown: Well, I guess I got to learn this stuff sometime, might as well try poking into it again.
 
  • #5
mathwonk said:
A comprehensive source is the book of Godement, Topologie algebrique, or the book of Hartshorne on algebraic geometry, but I like the short book by my friend George Kempf, Algebraic Varieties. George was a laconic master of the theories of Grothendieck, and managed to provide a very thorough but concise introduction in about 140 pages to algebraic geometry including sheaf cohomology, both derived functor and Cech versions, starting from absolute zero.

140 pages to get to Derived Functors, jeez he must be going some. Have you seen Neeman's proof of Grothendieck duality reduced to 3 pages most of which is about something else?
 
  • #6
JUST REMEMBER, de Rham cohomology is merely path integration, and you ALREADY KNOW THAT! So just forge ahead!

Remember, there is nothing new under the sun, only new names for old ideas. If you KNOW CALCULUS, you already know all the important concepts, so just persist and learn the new names for what you already know.

keep the faith!
 
  • #7
Recall three tenets of basic advice from our forebears:
1) never give up,
2) never give up,
3) never give up.

or was it

1) location,
2) location,
3) location?

I forget, maybe I was talking to a realtor.

anyway,

you are my favorite correspondents, best regards,

roy
 
  • #8
By the way, I am 62 years old and have been studying cohomology for 40 years, so you are WAY ahead of me. I do not know if that is encouraging or not, but I hope it is. Learning is really fun at any age and stage.
 
  • #9
Ok the nurse is back with my medication, so here is an example, probably familiar to all, of the connection between integration and cohomology. Now completely technically, a cocycle is an animal that lives over the space and spits out a number when it sees a loop, i.e. a homology cycle. Moreover it spits out the same number if the loops are homotopic, i.e. deformable into each other. Further, if a new loop is made by following one loop by another, then the number for the big loop is the sum of the numbers for each of the two conjoined loops.

Thus the spitting-out-numbers process defines a "homomorphism" (addition preserving map) from the group of homotopy classes of loops to the group of numbers. Since this last group is abelian, we actually have a homomorphism from the abelianization of the group of homotopy classes of loops. That is called the first homology group. So a cocycle is a function on homology cycles.

Now where do such things come from? Well the only guy I know that always spits out a number when it sees a loop is a path integral, i.e. a differential one form. So a differential one form is a cocycle, and hence should define a cohomology element. But we need the form to be "closed" so that it will have the same integral over homotopic loops, and hence also over homologous loops. I.e. a form is called closed, as usual, if it is locally df for some function f. Then two loops are homologous if and only if every closed form has the same integral over both of them.

But also a closed form is not just locally, but globally of form df, if and only if it has integral zero over all loops. The upshot is we can define cohomology without mentioning path integrals as follows:

Consider the vector space of all closed differential one forms, and mod out by the subspace of all "exact" one forms, i.e. those globally of form df. This is called the first de Rham cohomology group and measures the global topology of the space. E.g. on the once punctured plane, this is a one dimensional space generated by the angle form "dtheta" (not globally df because theta = f is not globally defined).

This little gadget detects how many times a loop winds around the origin and can be used to quickly prove the fundamental theorem of algebra for example.
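For concreteness, here is the angle form and the winding-number computation just mentioned, written out explicitly (these are the standard formulas, added only as a reference):

d\theta = \frac{x\,dy - y\,dx}{x^2 + y^2}, \qquad \oint_{\gamma} d\theta = 2\pi \cdot (\text{winding number of } \gamma \text{ about } 0).

It is closed away from the origin, but not exact there, since its integral over the unit circle is 2\pi rather than 0.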

Now let's do some "sheaf cohomology": in complex analysis, we all know that path integration is done by residue calculus, which has nothing to do with paths, and only to do with "principal parts" of Laurent series. So all this makes sense purely algebraically. So consider the space of all "principal parts" on a compact Riemann surface (i.e. a compact real 2-manifold with a complex structure, like the zero locus of a general polynomial in two variables, extended into the projective plane).

Then quotient out by the subspace of principal parts coming from global rational functions, or global meromorphic functions (they are the same). This quotient space measures the principal parts that do not come from global meromorphic functions, i.e. the failure of the Mittag Leffler theorem on the given Riemann surface. This quotient group is called the first cohomology group with coefficients in the "sheaf" of holomorphic, or regular algebraic, functions on the Riemann surface, or algebraic curve.

It has a definition via "derived functors" if you like, well really the same definition, but just a fancier name for it. I.e. the "obvious" map from the sheaf of meromorphic functions to the sheaf of principal parts has kernel the sheaf O of holomorphic functions, and both previous sheaves are "flabby", so this is a flabby resolution of the sheaf "O" of holomorphic functions, so it computes both H^0(O) and H^1(O).
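Spelled out, the resolution just described is the short exact sequence of sheaves (standard notation, added for reference: M is the sheaf of meromorphic functions and M/O the sheaf of principal parts)

0 \to \mathcal{O} \to \mathcal{M} \to \mathcal{M}/\mathcal{O} \to 0,

and, granting that the two right-hand sheaves compute cohomology as claimed above, taking global sections gives

H^1(X, \mathcal{O}) \;\cong\; \Gamma(X, \mathcal{M}/\mathcal{O}) \big/ \operatorname{im}\big(\Gamma(X, \mathcal{M}) \to \Gamma(X, \mathcal{M}/\mathcal{O})\big),

i.e. exactly the quotient "all principal parts modulo those coming from global meromorphic functions" described above.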


yatta Yatta,..., then one gets the Riemann Roch theorem, which is just a measurement of the failure of Mittag Leffler. I.e. we know the "residue theorem" says that a meromorphic differential always has sum of its residues equal to zero, and vice versa, any collection of principal parts with sum of residues equal to zero is the principal parts of a global meromorphic function. Then Riemann Roch says that given any set of local principal parts, they come from a global meromorphic function if and only if when multiplied by every regular differential form, the sum of the residues is zero.

This can also be fancied up as a statement about sheaves or line bundles or whatever, to the effect that the analytic Euler characteristic of a line bundle equals a universal polynomial in the Chern classes of the line bundle and the Riemann surface. A mouthful way of saying it equals the (signed) number of points in the principal part, plus 1 minus the topological genus of the surface.

I.e. h^0(L) - h^0(K-L) = deg(L) + 1 - g, where K is the sheaf of differential forms, g is the genus, and h^0 is the dimension of the space of global holomorphic (or regular) sections of the given bundle. To prove this, the previous definition of H^1 can be generalized to define H^1(L) for any L, and then one proves that h^0(L) - h^1(L) = deg(L) + 1 - g, and finally one proves that h^0(K-L) = h^1(L), thus eliminating the higher cohomology from the theorem. Of course Riemann did it by integration theory and proving the converse of the residue theorem. But we do it by "cohomology", i.e. quotient groups, and algebra.
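A quick sanity check of the formula on the Riemann sphere (a standard textbook example, not taken from the thread): take g = 0 and L = O(n) with n >= 0. Then

h^0(\mathcal{O}(n)) = n+1, \qquad K = \mathcal{O}(-2), \qquad h^0(K - L) = h^0(\mathcal{O}(-n-2)) = 0,

so h^0(L) - h^0(K-L) = n + 1 = \deg(L) + 1 - g, as the theorem predicts.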
 
  • #10
Is that Amnon Neeman?
 
  • #11
This seems to be entirely about loops, though. Is there any analogue to non-closed paths?


For example, if I take an affine variety of dimension 1 over a field, I can formally define a path by its endpoints, and the definite integral of a regular function by the fundamental theorem of calculus. I can do something similar for an affine variety of arbitrary dimension: a path is specified by an isomorphic embedding of the 1 dimensional affine variety, and its endpoints.
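Here is a minimal sketch, in sympy, of the formal definite integral described above, in the simplest case of the affine line; the sample polynomial and the endpoints are my own choices, and the only assumption is that the base field has characteristic zero, so the formal antiderivative exists.

import sympy as sp

x = sp.symbols('x')
f = 3*x**2 + 1            # a sample regular function on the affine line (my choice)

# Formal antiderivative via the power rule; this needs division by integers only,
# so it makes sense over any field of characteristic zero, with no limits involved.
F = sp.integrate(f, x)    # x**3 + x

# A "path" is just an ordered pair of endpoints, and the definite
# integral is F(b) - F(a), by the fundamental theorem of calculus.
a, b = 1, 2
print(F.subs(x, b) - F.subs(x, a))   # 8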
 
  • #12
I guess what I really need to do is mull over it some more! A path is just a (special) linear functional on differential forms... what can I do with that?
 
  • #13
Well, loops are the case of paths that capture global topology, rather than local topology. It depends on what aspect of paths you want to capture. One use of arbitrary paths, for instance, is to define fundamental groups and covering spaces.

The algebraic analogue of that is to define covering spaces as mappings which induce isomorphisms on completions of local rings, i.e. "etale" mappings. Then algebraic fundamental groups are defined via etale mappings. Etale mappings also yield a new, more subtle "topology" than the Zariski one, where the open sets are etale maps onto open sets, and intersections are pullback maps (fiber products), and one gets a new definition of cohomology, etale cohomology. So in a way it is still cohomology.
 
  • #14
By the way, I almost never think in terms of real algebraic geometry, so I was puzzled at first by your path analogy with 1-dimensional affine varieties.
 
  • #15
You may not be interested in this, but this summer I wrote notes on the classical Riemann Roch theorem via path integrals (Riemann's approach) and its generalization to higher dimensions via cohomology (Hirzebruch's approach). Is there some way you can teach me to make them available on this forum? They are 43 pages long.
 
  • #16
There is a limit on the size and format (well, extension) of files you may upload; however, if you've somewhere to host them then a link would do fine.
 
  • #17
Thank you. I need to start putting stuff on my webpage; I have those notes and a 300-400 page book on graduate algebra that is going to waste too.
 
  • #18
Now, this is not on topic but someone (Matt?) said something about defining tangent spaces the "right way" via "dual numbers", which set me thinking about how to explain them as classical differential operators.

The ring of dual numbers is the 2-dimensional algebra k[e] where e^2 = 0, i.e. power series of length 2: elements of the form a + be with e^2 = 0.
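To make e^2 = 0 concrete, here is a throwaway sketch of this ring in code; the class is entirely my own construction, nothing standard:

from fractions import Fraction

class Dual:
    """Elements a + b*e of k[e], with e*e = 0, over the rationals."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*e)(c + d*e) = ac + (ad + bc)e; the e^2 term is simply dropped
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}e"

# Evaluating f(x) = x^3 at x = 2 + e recovers f(2) + f'(2)e, as in the discussion below:
x = Dual(2, 1)
print(x * x * x)   # 8 + 12e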



Then if X is an algebraic variety over an algebraically closed field k, the tangent space at p equals the set of maps of spec(k[e]) into X such that the unique closed point of spec(k[e]) maps to p.

But what the heck does this mean? and what does it have to do with classical tangent spaces?

Well, assume our variety X equals affine n-space, i.e. spec k[X1,...,Xn]. Then a "point" p = (a1,...,an) of X is defined by evaluating at that point, so is equivalent to a map from k[X1,...,Xn] to k taking f to f(p), i.e. taking Xj to aj for all j. I.e. points of X are the same as k algebra maps to k.


Analogously, a tangent vector v = (v1,...,vn) at p defines a map taking a function f in k[X1,...,Xn] to the pair (f(p), Dvf(p)) i.e. to the dual number f(p) + Dvf(p) e.

Conversely, any k algebra map from k[X1,...,Xn] to the dual numbers has the form f goes to f(p) + Dvf(p) e for some vector v and some point p. Hence the tangent space to X at p equals the set of k algebra maps from k[X1,...,Xn] to the dual numbers that reduce to evaluation at p, i.e. take f to f(p) modulo e.


Equivalently it equals the set of maps of spec(k[e]) to X taking the unique closed point of spec(k[e]) to p. So tangent vectors to X are k algebra maps to k[e].
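Here is a quick symbolic check of that statement, for a sample f, p and v of my own choosing (the map sends each Xj to aj + vj*e, and one truncates at e^2 = 0):

import sympy as sp

x, y, e = sp.symbols('x y e')
f = x**2*y + y**3                      # a sample f in k[x, y] (my choice)
p = (1, 2)                             # the point
v = (3, 4)                             # the tangent vector

# The k algebra map k[x, y] -> k[e] sending x -> 1 + 3e, y -> 2 + 4e:
image = sp.expand(f.subs({x: p[0] + v[0]*e, y: p[1] + v[1]*e}))
image = image.series(e, 0, 2).removeO()          # impose e^2 = 0

value = f.subs({x: p[0], y: p[1]})               # f(p)
Dvf = sum(vi * sp.diff(f, s).subs({x: p[0], y: p[1]}) for vi, s in zip(v, (x, y)))

assert sp.expand(image - (value + Dvf*e)) == 0
print(image)   # 64*e + 10, i.e. f(p) + Dvf(p)*e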

My point is all definitions are ultimately the same as the classical ones. So no one need feel he is deprived, no matter how classically trained.

Is that right Matt?
 
  • #19
It looks right to me, I just prefer the algebraist's tangent space, because, well, I'm an algebraist I suppose (cause, effect or effect, cause?) and it just makes actually calculating the tangent space a lot easier (and I'm thinking especially in working out the Lie algebra of some algebraic group).
 
  • #20
Would you be interested in showing us how to work out the Lie algebra of, say, SO(3)? I know the basic rules of differential geometry ("Frenet formulas"?) imply it should be the skew symmetric matrices.
 
  • #21
OK, it goes something like this:

Let V be a variety defined by the vanishing of some set of polynomials, f_i; then the tangent vectors v in the tangent space T(x) at the point x are the elements of k^n (k the underlying field, n the number of variables, i.e. the dimension of the ambient affine space) satisfying

f_i(x+dv)=0

in the space of dual numbers.

For SO(n), the defining polys may be taken to be encoded as:

XX^t=1 (we'll ignore the S of the SO bit, since it isn't important) for X an element of M_n(k) (n different from above, sorry).

ie T(1) = {V | (1+dV)(1+dV)^t=1}, ie, since d^2=0, V+V^t=0

ie the skew symmetric matrices are the lie algebra.

It's just saying that, as we want things to be o(d^2) in the usual analytic case and we're doing algebra, why not just declare them to be 0 anyway? Convergence! Who cares!?
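Here is a short symbolic check of this calculation (my own sketch, in sympy; the only input is the relation (1+dV)(1+dV)^t = 1 with d^2 = 0):

import sympy as sp

n = 3
d = sp.symbols('d')
V = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'v{i}{j}'))
I = sp.eye(n)

# Expand (1 + dV)(1 + dV)^t - 1 and read off the coefficient of d,
# which is all that survives once d^2 = 0.
prod = (I + d*V) * (I + d*V).T
first_order = (prod - I).applyfunc(lambda entry: sp.expand(entry).coeff(d, 1))

# The condition is V + V^t = 0, i.e. the tangent space at the identity
# (the Lie algebra) is the skew-symmetric matrices, as in the post above.
assert first_order == V + V.T
print(first_order)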


Good book: Carter, Segal, and Macdonald, Lectures on Lie Groups and Lie Algebras, LMS Student Texts.
 
  • #22
Wow! That is really cool! Much faster than deriving the Serret-Frenet formulas.
Thank you.
 
  • #23
Of course the algebraic definition also preceded the limit one historically. I.e. according to Descartes, the line y = f(a) + m(x-a) through (a,f(a)) with slope m is tangent to y = f(x) at (a,f(a)) if and only if the line meets the curve "doubly there".

I.e. iff the equation f(x) = f(a) + m(x-a), or equivalently f(x) - f(a) - m(x-a) = 0, has a double root at x=a, iff (x-a)^2 divides the lhs.

Now by the root factor theorem, since x=a satisfies f(x)-f(a) =0, x-a must divide the lhs, giving say [f(x)-f(a)]/(x-a) = m(x) for some polynomial m(x).

Then (x-a)^2 divides f(x)-f(a) -m(x-a) iff (x-a) divides m(x)-m, iff m = m(a).

I.e. the slope of the tangent line is the value m(a) of the polynomial [f(x)-f(a)]/(x-a) = m(x), at x=a. Taking the limit to compute m(a) is just a trick for cases when the division is not possible.
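Here is the recipe carried out symbolically for a sample polynomial of my own choosing, just to illustrate:

import sympy as sp

x, a = sp.symbols('x a')
f = x**3 - 2*x + 1        # a sample polynomial (my choice, not from the thread)

# Divide f(x) - f(a) by (x - a); the division is exact by the root-factor theorem.
m, remainder = sp.div(f - f.subs(x, a), x - a, x)
assert remainder == 0

# The slope of the tangent line at x = a is m(a), and it agrees with the derivative f'(a).
slope = sp.expand(m.subs(x, a))
assert sp.expand(slope - sp.diff(f, x).subs(x, a)) == 0
print(slope)    # 3*a**2 - 2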

Of course this also amounts to expanding in a Taylor series and setting (x-a)^2 = 0 and taking the coefficient of (x-a).

Fermat also took derivatives by simply expanding in a "Taylor series" and setting the higher terms (above the linear ones) equal to zero. I.e. d^2 = 0.

So actually your method seems to be the original one for computing derivatives. That makes it not only historically the right one, but also the really "classical" definition! I.e. pre-Newton.
 
  • #24
Didn't know any of that. Just goes to show something, but not sure what.
 
  • #25
Getting sl(n) from SL(n) is a little harder, but it basically boils down to trusting that:

det(1+dX)=1+dtr(X)

which after a couple of minutes thought you can "see" in your head.
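A quick symbolic check of the identity for 3x3 matrices (my own sketch; d^2 = 0 is imposed by hand by throwing away the higher powers of d):

import sympy as sp

n = 3
d = sp.symbols('d')
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'x{i}{j}'))

det = sp.expand((sp.eye(n) + d*X).det())

# Keep only the terms of degree <= 1 in d, i.e. impose d^2 = 0.
truncated = det.coeff(d, 0) + det.coeff(d, 1)*d

assert sp.expand(truncated - (1 + d*X.trace())) == 0
print(truncated)   # d*(x00 + x11 + x22) + 1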

As for gl(n) being M(n) (ie all nxn matrices), I think I still prefer to remember that the invertible matrices are open, so any small enough (and d makes everything small enough) perturbation by dX means 1+dX is still invertible.
 
  • #26
One way to see det(1+dX)=1+dtr(X) is to notice that it holds for diagonal matrices, and remember that diagonalizable matrices are dense in all matrices, so it holds everywhere.
 
  • #27
But what if the underlying field is finite, so that the diagonal matrices were a closed set in the Zariski topology?
 
  • #28
Well, if it's true for all matrices over the algebraic closure, wouldn't it be true for those over the finite subfield?
 
  • #29
Good point. I'd have also "accepted" the rebuttal:
who the hell does algebraic geometry over a non-algebraically closed field? That was my reaction as soon as I'd posted the observation.
 
  • #30
Besides, it is a "universal" statement about matrices, having nothing to do with what ring the coefficients are in.

Prove it over the rationals, then restrict to the integers, then specialize to a quotient of the integers.

Or, in more general settings than Zp, just write your finite field as a quotient of some other infinite ring.

Does that work?
 
  • #31
one way to see det(1+dX)=1+dtr(X)

This turns out to be fairly straightforward if you recall the messy definition of determinant: the sum of all possible products taking one element from each row and each column (with each product multiplied by 1 or -1 as appropriate).

The only product that doesn't contain two factors of the form d*x_ij is the product of the diagonal entries, and it's fairly easy to see that that product is simply the RHS (plus some additional terms that carry two or more factors of d).


besides it is a "universal" statement about matrcies

How does this work?
 
  • #32
I just meant it only has to do with properties of addition and multiplication, true in any ring at all. And you just proved that. But to make it more precise you could prove it for matrices with variable entries in a polynomial ring with integer coefficients, I guess, and then it is true for any specialization of the values.

There is a general principle of this type, though. Bourbaki calls it prolongation of identities: a polynomial over the integers which is zero whenever the arguments take values in any field of characteristic zero is in fact identically zero. See Bourbaki, Algèbre, Chapter IV, paragraph 2, no. 5.
 
  • #33
The reason I ask is because I have seen a proof that went something like:

"Statement <foo> is clearly true for all nice things of type <bar>. Since <foo> is a formal statement about <bar>s, it must be true for all <bar>."

I was with a couple of people far smarter than I at the time, and none of them had any clue on this either... your statement that the property was "universal" for matrices bore an uncanny resemblance to this, so I was hoping you could shed some light on it.

(Incidentally, the proof was of something in the appendix of Fulton's Intersection Theory... I've since returned it to the library so I can't dig up the exact statement)
 
  • #34
Hurkyl said:
This turns out to be fairly straightforward if you recall the messy definition of determinant: the sum of all possible products taking one element from each row and each column (with each product multiplied by 1 or -1 as appropriate).

The only product that doesn't contain two factors of the form d*x_ij is the product of the diagonal entries, and it's fairly easy to see that that product is simply the RHS (plus some additional terms that carry two or more factors of d).

Yep, that's what I meant about it becoming quite clear after a few minutes. I don't think I've ever actually formally shown this to be true.
 
  • #35
And as for the formal statement thing, I think the way to explain it might be this (but I'm sort of guessing).

Suppose we have some statement along the lines of:

A matrix M with entries in some field is invertible iff it has non-zero determinant.
Now that isn't true if we replace "field" with "ring": for the non-zero determinant to make M invertible, it is important that we are in a field. However, if we were to write it as
"M is invertible iff its determinant is invertible", then it is true for all matrices over any ring. We are not using all of the hypotheses, so it's true in greater generality. Or at least that's how I read your statement.
 
