“Ramified” means that around the point in question the projection map has the same behaviour as does the projection from the parabola to its abscissa.
C. Herbert Clemens, A Scrapbook of ℂ Curves
(Source: link.springer.com)

Test functions and [tempered] distributions require the notion of topological vector space … distributions can be traced back to Green’s functions in the 1830’s to solve ordinary differential equations … the 1936 work of Sergei Sobolev on hyperbolic PDE’s.
Laurent Schwartz introduced the term “distribution” by analogy with a distribution of electrical charge, possibly including not only point charges but also dipoles and so on.
(^ if there is a dipole, there must be a notion of subtraction, hence the need for a vector space; and to speak of this at full generality, use a TVS)
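To unpack that aside (a standard computation, not something quoted from Schwartz): a dipole is two opposite point charges ±q at ±ε, and holding the dipole moment p = 2qε fixed as the charges merge, its charge density tends to a derivative of the delta distribution, something you can only state by pairing against test functions:

```latex
\[
\rho_\varepsilon = \frac{p}{2\varepsilon}\bigl(\delta_{+\varepsilon} - \delta_{-\varepsilon}\bigr),
\qquad
\langle \rho_\varepsilon, \varphi \rangle
  = \frac{p}{2\varepsilon}\bigl(\varphi(\varepsilon) - \varphi(-\varepsilon)\bigr)
  \;\longrightarrow\; p\,\varphi'(0)
  = \langle -p\,\delta', \varphi \rangle
\qquad (\varepsilon \to 0).
\]
```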

“the Yang-Mills equations are nonlinear, therefore there is little hope of finding a closed-form solution.” Such a statement seems plausible. Linear differential equations with constant coefficients are the only differential equations for which a general solution is given in closed form.
As often occurs in life, however, the exceptions to the rule are sometimes more interesting than the rules themselves. Let us digress from quantum physics to the motion of water, where British shipbuilder John Scott Russell noticed a solitary wave in a canal in August 1834.
Neither Airy nor Stokes accepted this observation, yet in 1895 Korteweg and de Vries found an equation for a wave travelling in one direction in shallow water: uₜ + 6u·uₓ + uₓₓₓ = 0. The KdV equation is easily solved by restricting from two independent space-time variables (x, t) to the single variable x − λt, a frame moving at the speed λ of the travelling wave.
Mikhail Ilʹich Monastyrskiĭ, Riemann, Topology, and Physics
(Source: link.springer.com)
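To spell out that reduction (a standard computation, not Monastyrskiĭ's words): substitute u(x, t) = f(ξ) with ξ = x − λt, integrate twice using decay at infinity, and the travelling-wave ODE yields the one-soliton solution:

```latex
\[
-\lambda f' + 6 f f' + f''' = 0
\;\Longrightarrow\;
-\lambda f + 3 f^{2} + f'' = 0
\;\Longrightarrow\;
\tfrac{1}{2}(f')^{2} = \tfrac{\lambda}{2} f^{2} - f^{3},
\]
\[
u(x,t) = \frac{\lambda}{2}\,
\operatorname{sech}^{2}\!\left(\frac{\sqrt{\lambda}}{2}\,(x - \lambda t - x_{0})\right).
\]
```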

Hadamard knew in 1898 that negative curvature and simple connectivity for surfaces embedded in 3-space force uniqueness of geodesics joining two points, implying that any segment of geodesic is also a shortest path.
But there is a long way toward the modern statement: “any complete abstract Riemannian manifold of nonpositive curvature, of any dimension, is the quotient of its universal covering by a discrete group of isometries.”
Marcel Berger, Riemannian Geometry during the Second Half of the Twentieth Century

Hamiltonian mechanics is the feminine side of classical physics. Its masculine side is Lagrangian mechanics, formulated in terms of velocities (tangent vectors) rather than momenta (cotangent vectors).
Lagrangian mechanics focusses on the difference, kinetic minus potential energy; Hamiltonian mechanics focusses on their sum.
Richard Montgomery, reviewing a book by Stephanie Frank Singer and recalling lectures by Shiing-Shen Chern
(Source: people.ucsc.edu)
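A one-line check of that last contrast (textbook mechanics, not part of Montgomery's review): starting from L = T − V, the Legendre transform trading velocities for momenta lands on H = T + V:

```latex
\[
L(q,\dot q) = \tfrac{1}{2} m \dot q^{2} - V(q), \qquad
p = \frac{\partial L}{\partial \dot q} = m \dot q, \qquad
H(q,p) = p\,\dot q - L = \frac{p^{2}}{2m} + V(q).
\]
```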

the cotangent bundle (differential forms) is the feminine side of calculus-on-manifolds; the tangent bundle (vector fields) is the masculine side.
Shiing-Shen Chern, via Richard Montgomery

“Michael Jordan has always got to be beating someone at something.”— Kareem Abdul-Jabbar, as interviewed by Claudia Dreifus

It’s impossible to get far in reading 20th-century mathematics without encountering the word cohomology. Cohomology & schemes are the subject of Hartshorne’s classic, where you can find out (Appendix C) that the Weil conjectures were resolved by defining a thing called ℓ-adic cohomology.
(Cohomology even showed up in economics, information theory and computer theory, although here it’s clear that the influence of this idea has been less pervasive, and unclear why.)
Schemes are like varieties = cycles. And cohomology is a way (ok, apparently various ways!) of “calculating” shape.
So what does it mean to “define” “a” cohomology “theory”? What does it mean that Dror Bar-Natan is fascinated by Khovanov homology? That Dale Husemöller wants to interpolate between different cohomology “theories” (crystalline, étale, Hochschild, and so on)?
Mathematicians drop the word “theory” like rappers drop the “N” bomb.
One starts with simplicial homology.
This “theory” takes decompositions of a space into triangles (and their higher-dimensional analogues) and returns a chain complex with a boundary map ∂ satisfying ∂² = 0 everywhere. Why do chain complexes come up in this topic? To algebraicise the geometric idea here, you set up maps from the higher-dimensional things to the lower-dimensional things. (In the case of simplicial homology, the “things” are simplices.) In keeping with Eilenberg & Mac Lane’s fundamental rule of homology, ∂² needs to always zero out. (We are doing this in a “formal” sense, meaning that an entry might be like “that blue (solid) pyramid over there”, for example in the sentence: −3 of that blue (solid) pyramid over there (hey, they came from the (3,5) position of the filtered complex) − 2 of that blue (solid) pyramid over there + 5 of that blue (solid) pyramid over there.)
(Hey, you already knew that mathematical sentences get too long, just like you knew that topological (not algebraic) functions can get so wiggly that it’s not worth trying to explicitly describe them.)
The fact that these homology sentences can be so horrid, and yet the cancellation criterion ∂² = 0 is so simple, is (I think) why mathematicians regard the Eilenberg–Mac Lane condition as “deep”.
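To see the ∂² = 0 rule in the smallest possible example (my own toy computation, not from the post), here are the boundary matrices of a single filled triangle, with the rule and the Betti numbers checked numerically:

```python
import numpy as np

# One filled triangle: vertices {0,1,2}, edges {01, 02, 12}, one 2-simplex {012}.
# Each column says what the boundary of one simplex is, as a formal sum of the
# simplices one dimension down (the horrid "sentences" above, in matrix form).

# ∂1 : edges -> vertices, edge (a,b) |-> b - a
d1 = np.array([[-1, -1,  0],   # vertex 0
               [ 1,  0, -1],   # vertex 1
               [ 0,  1,  1]])  # vertex 2   (columns: 01, 02, 12)

# ∂2 : triangles -> edges, triangle (0,1,2) |-> 12 - 02 + 01
d2 = np.array([[ 1],    # edge 01
               [-1],    # edge 02
               [ 1]])   # edge 12

print(d1 @ d2)           # the zero matrix: the boundary of a boundary vanishes

# Betti numbers by rank-nullity: b_k = dim ker ∂_k − rank ∂_{k+1}
b0 = 3 - np.linalg.matrix_rank(d1)
b1 = (3 - np.linalg.matrix_rank(d1)) - np.linalg.matrix_rank(d2)
print(b0, b1)            # 1 0 : one connected component, no 1-dimensional hole
```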
According to Matt H, the boundary map is what makes one homology theory different from another.
You can have twists in the homology-theory
At some point people figured out that you can cover the possible topological types …………… and thus be able to say something about X, again without having to go into painfully boring detail. For applications people, this may mean that the things we want to talk about fall ………… or it may not. (If the space is X you will see people write BG(X); the letter B here means Eilenberg–Mac Lane topological type.)

“the most important part of a principal components analysis is naming the axes”—
William Kruskal
(source: I heard this second-hand but I don’t know if it’s written down anywhere)
(For those not in-the-know, principal components are composite dimensions, like (5×faculty pay + 3×library size + …) ÷ 10, or (3×vote on bill 3 + 8×vote on bill 12 + …) ÷ 100. The hope is that, by using linear algebra, you can present many things as fewer things. [beta vs p examples]
The reasons this hope might have some possibility of working are two: (1) covariation and (2) small contributions. If two of your data fields do similar things (1), you can combine them into one dimension which acts pretty much the same (in mathematical terms, by rotating the basis). If many of your data columns make a small contribution (2), why not smush them into one composite dimension (e.g. irrelevant_1 + irrelevant_2 + … + irrelevant_N)? You want dimension names to look like this, but how do you get them from reality = messy data which wasn’t gathered well, isn’t necessarily defined how you want, isn’t defined the …
You lie.
Product idea: instead of giving people what they want, lie—saying you’re giving them what they want—then deliver something else.
— isomorphismes (@isomorphisms), May 3, 2017
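A minimal sketch of the two mechanisms above, covariation and small contributions, on made-up data (all column names here are hypothetical, not taken from any real study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical columns (names are mine): two fields that covary strongly,
# plus five columns that each contribute only a little.
faculty_pay  = rng.normal(size=n)
library_size = 0.9 * faculty_pay + 0.1 * rng.normal(size=n)   # "does similar things"
irrelevant   = rng.normal(scale=0.1, size=(n, 5))              # small contributions

X = np.column_stack([faculty_pay, library_size, irrelevant])
X = X - X.mean(axis=0)

# PCA is an eigendecomposition of the covariance matrix, i.e. a rotation of the basis.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print(eigvals / eigvals.sum())   # nearly all the variance sits in one composite dimension
print(eigvecs[:, 0])             # loadings: roughly faculty_pay + library_size, little else
# Naming that first axis ("university wealth"?) is the step Kruskal called most important.
```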

Notice how some of the historical error bars do not contain the future “right answer”.
These historical data (thank you to C. Amsler et al. for compiling them from across many articles!) of particle physics measurements show not only
- the epistemic nature of probability estimates and confidence intervals,
but also the difference between
- probability as computed within one experiment and
- overall, actual, total, legitimately objective certainty.
Since this is physics we don’t have to worry about the usual social-science problems like the property in question not existing, or not having experimental data and thus needing to infer from examples.
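A toy simulation of that last distinction (my own illustration, not the Particle Data Group's analysis): if each experiment quotes only its statistical error while an unmodelled systematic shift is also present, the quoted ±1σ bars contain the true value far less often than the nominal 68%:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 1.0
n_experiments = 10_000

stat_sigma = 0.05   # the error each experiment actually reports
sys_sigma  = 0.05   # a hidden systematic shift it does not report (my assumption)

systematic = rng.normal(scale=sys_sigma, size=n_experiments)
measured = true_value + systematic + rng.normal(scale=stat_sigma, size=n_experiments)

# Does the quoted ±1σ (nominally 68%) error bar contain the true value?
covered = np.abs(measured - true_value) < stat_sigma
print(covered.mean())   # ≈ 0.52 here, well below 0.68
```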
Picture by C. Amsler et al. (Particle Data Group), Physics Letters B667, 1 (2008), and the 2009 partial update for the 2010 edition (cut-off date for the update: January 15, 2009).
