
Hamiltonian mechanics is the feminine side of classical physics. Its masculine side is Lagrangian mechanics, formulated in terms of velocities (tangent vectors) rather than momenta (cotangent vectors).

Lagrangian mechanics focusses on the difference of kinetic – potential energies; Hamiltonian mechanics focusses on their sum.

Richard Montgomery, reviewing a book by Stephanie Frank Singer and recalling lectures by Shiing-Shen Chern

(Source: people.ucsc.edu)




the cotangent bundle (differential forms) is the feminine side of calculus-on-manifolds; the tangent bundle (vector-fields) is the masculine side.

Shiing-Shen Chern, via Richard Montgomery




isomorphismes:

“In 1611, [Barbara Müller] the first wife of astronomer Johannes Kepler (1571–1630) died of cholera in Prague. Kepler immediately began a methodical search for a replacement. Though short, unhealthy, and the son of a poor mercenary, Kepler had an MA in theology from Tübingen, succeeded Tycho Brahe as imperial mathematician of the Holy Roman empire, and had recently become famous for explaining how eyeglasses can correct myopia (Ad Vitellionem Paralipomena, 1604), documenting a supernova (De Stella Nova, 1606), and demonstrating that the orbit of Mars is an ellipse (Astronomia Nova, 1609). … Relentlessly courting, Kepler investigated 11 possible replacements in the two years after [Barbara]’s death. In a letter to Baron Strahlendorf written shortly after marrying candidate number five in 1613, Kepler described this methodical mate search. Friends urged Kepler to choose candidate number four, a woman of high status and tempting dowry, but she rejected him for having toyed with her too long, so Kepler was free to settle with his most-preferred number five [Susanna Reuttinger]. Kepler chose well: [Susanna], though not of the highest rank or dowry, was well-educated, bore him seven children, and provided the domestic infrastructure for Kepler to publish four more major works laying the empirical foundations for Newton’s law of gravity, and, incidentally, to save his mother from being burned at the stake as a witch in 1620.”

Simple Heuristics that Make Us Smart by Gerd Gigerenzer




isomorphismes:

“Michael Jordan has always got to be beating someone at something.”

— Kareem Abdul-Jabbar, as interviewed by Claudia Dreifus




isomorphismes:

It’s impossible to get far in reading 20th-century mathematics without encountering the word cohomology. Cohomology & schemes are the subject of Hartshorne’s classic, where you can find out (Appendix C) that the Weil conjectures were resolved by defining a thing called l-adic cohomology.

Hartshorne on the Weil conjectures

(Cohomology even showed up in economics, information theory and computer theory — although here it’s clear that the influence of this idea has been less pervasive, and unclear why.)

Schemes are like varieties (= cycles). And cohomology is a way (ok, apparently various ways!) of “calculating” shape.

So what does it mean to “define” “a” cohomology “theory”? What does it mean that Dror Bar-Natan is fascinated by Khovanov homology? That Dale Husemöller wants to interpolate between different cohomology “theories”—crystalline, étale, Hochschild, and so on?

Khovanov homology


Mathematicians drop the word “theory” like rappers drop the “N” bomb.

One starts with simplicial homology

triangular homology

This “theory” takes a decomposition of a space into triangles and returns a chain complex with a boundary map satisfying ∂²=0 everywhere.

Why do chain complexes come up in this topic? To algebraicise the geometric idea here, you set up maps from the higher-dimensional things to the lower-dimensional things. (In the case of simplicial homology, the “things” are the simplices themselves.) In keeping with Eilenberg & Mac Lane’s fundamental rule of homology, ∂² needs to always zero out.
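Here is a minimal sketch (mine, not from any of the books mentioned above), assuming numpy, of the boundary maps for a single solid triangle. Column j of each matrix is the formal sum that is the boundary of the j-th simplex, and composing the two maps really does give zero:

    import numpy as np

    # One 2-simplex (the triangle 0-1-2), its edges, and its vertices.
    vertices = [(0,), (1,), (2,)]
    edges    = [(0, 1), (0, 2), (1, 2)]
    faces    = [(0, 1, 2)]

    def boundary_matrix(higher, lower):
        """Column j holds the boundary of higher[j]: drop each vertex in turn,
        with alternating signs (the usual simplicial boundary formula)."""
        D = np.zeros((len(lower), len(higher)), dtype=int)
        for j, simplex in enumerate(higher):
            for k in range(len(simplex)):
                face = simplex[:k] + simplex[k + 1:]
                D[lower.index(face), j] += (-1) ** k
        return D

    d2 = boundary_matrix(faces, edges)      # C2 -> C1
    d1 = boundary_matrix(edges, vertices)   # C1 -> C0
    print(d1 @ d2)                          # the zero matrix: ∂² = 0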

the sequence is exact

(We are doing this in a “formal” sense—meaning that an entry might be built out of pieces like that blue (solid) pyramid over there, for example in the sentence “−3 of that blue (solid) pyramid over there − 2 of that blue (solid) pyramid over there + 5 of that blue (solid) pyramid over there” (hey, they came from the (3,5) position of the filtered complex).)

(Hey — you already knew that mathematical sentences get too long — just like you knew that topological (not algebraic) functions can get so wiggly that it’s not worth trying to explicitly describe them.)

Lagrangian of the Standard Model of particle physics, written out by Matilde Marcolli

The fact that these homology sentences can be so horrid, and yet the cancellation criterion ∂²=0 is so simple, is, I think, why mathematicians regard the Eilenberg–Mac Lane condition as “deep”.


According to Matt H, the boundary map is what makes one homology-theory different from other homology-theories.

You can have twists in the homology-theory

K3 surface


At some point people figured out that you can cover the possible topological types …………… and thus say something about X, again without having to go into painfully boring detail. For applications people, this may mean that the things we want to talk about fall ………… or it may not.

(If the space is X you will see people write BG(X); the letter B here means the Eilenberg–Mac Lane topological type.)




isomorphismes:

“the most important part of a principal components analysis is naming the axes”

William Kruskal

(source: I heard this second-hand but I don’t know if it’s written down anywhere)


(For those not in-the-know,

Principal components are composite dimensions, like (5×faculty pay + 3×library size + …) ÷ 10 or (3×vote on bill 3 + 8×vote on bill 12 + …) ÷ 100

The hope is that, by using linear algebra, you can present many things as fewer things. [beta vs p examples]

The reasons this hope might have some possibility of working are two: (1) covariation and (2) small contributions. If two of your data fields do similar things (1), you can combine them into one dimension which acts pretty much the same (in mathematical terms, by rotating the basis). If many of your data columns make only a small contribution (2), why not smush them into one composite dimension (e.g. irrelevant_1 + irrelevant_2 + … + irrelevant_N)?
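A minimal sketch of both reasons, assuming numpy and entirely made-up data (the column names echo the hypothetical examples above): two columns that covary get folded into one composite dimension, and the columns that barely contribute get ignored.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    faculty_pay  = rng.normal(size=n)
    library_size = 0.9 * faculty_pay + 0.1 * rng.normal(size=n)  # (1) covariation
    irrelevant   = 0.05 * rng.normal(size=(n, 8))                # (2) small contributions
    X = np.column_stack([faculty_pay, library_size, irrelevant])

    Xc = X - X.mean(axis=0)                    # centre each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / (s**2).sum()
    print(explained[:3])                       # the first component carries almost everything
    scores = Xc @ Vt[:2].T                     # ten columns re-expressed as two

Naming the axes (Kruskal’s hard part) means looking at the weights in Vt[0] and deciding what to call the combination they describe.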

You want dimension names to look like this “perceptual map” of Aspirin, Tylenol, Bayer, Bufferin, Anacin, and Excedrin

but how do you get them from reality = messy data which wasn’t gathered well, isn’t necessarily defined how you want, isn’t defined the

You lie.





isomorphismes:

Notice how some of the historical error bars do not contain the future “right answer”.

These historical data (thank you to C. Amsler et al. for compiling them from across many articles!) of particle physics measurements show not only

  • the epistemic nature of probability estimates and confidence intervals,

but also the difference between

  • probability as computed within one experiment and
  • overall, actual, total, legitimately objective certainty.

Since this is physics we don’t have to worry about the usual social-science problems like the property in question not existing, or not having experimental data and thus needing to infer from examples.
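A toy simulation of that difference, with made-up numbers that have nothing to do with the PDG data: each pretend experiment quotes a 1-sigma statistical error bar but is also shifted by a systematic offset it doesn’t know about, so the quoted bars cover the true value noticeably less often than the nominal ~68%.

    import numpy as np

    rng = np.random.default_rng(1)
    true_value    = 1.0
    n_experiments = 10_000
    stat_sigma    = 0.10           # the error bar each experiment reports
    syst_sigma    = 0.15           # the shift nobody puts in the bar

    measurements = (true_value
                    + rng.normal(0, stat_sigma, n_experiments)
                    + rng.normal(0, syst_sigma, n_experiments))

    covered = np.abs(measurements - true_value) < stat_sigma
    print(covered.mean())          # ≈ 0.42 here, not the ~0.68 the bars advertise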

Picture by C. Amsler et al. (Particle Data Group), Physics Letters B667, 1 (2008), and the 2009 partial update for the 2010 edition. Cut-off date for this update was January 15, 2009.







isomorphismes:

Carl Crow’s map of Shanghai

I came to the story of Crow through Hua Hsu’s story about expertise about “China”. Hsu was lecturing as well about the origins of pleasure—how marketing shapes desire, and (defensively) how criticism—to praise and to blame—can serve as a counterweight to the mind-share that corporations vie for.

This image is of the city of Shanghai commissioning a rich ex-pat—a guy who literally wrote I Speak For The Chinese and The Chinese Are Like That—to define, for their foreign targets, the meaning of Shanghai.






Cartesian functions send {A}→{B} with exactly one tail a↦ per a∈{A}, each connecting to a single head ↦b∈{B}.

In other words the image ƒ({A}) has to be equal size or smaller than {A}.


This is true mapping rings to rings, groups to groups, sets to sets, vector spaces to vector spaces,
… it’s just a property of arrows really.
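A toy illustration (mine, with hypothetical labels): a finite function is like a dict with exactly one value per key, so the set of heads it actually hits can never outgrow the set of tails.

    # One arrow out of each element of A; several arrows may share a head.
    f = {'a0': 'b14', 'a1': 'b14', 'a2': 'b7', 'a3': 'b14'}   # A -> B

    image = set(f.values())
    assert len(image) <= len(f)    # the image is never bigger than the domain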


When mathematicians want to talk about “one-to-many” (using the database lingo) or “multimaps” (some stupid word I heard on Wikipedia which absolutely nobody anywhere ever thought was a good term), though, they’re not left outside.

If you’ve got a bundle of arrows ⇶ with tails from {a₀, a₁, a₂, a₃} ⇶ {b₁₄}, then that’s a bundle of tails all heading to the same place. If you “grab them all by the head”, you pick up the whole bundle at once.

So when mathematicians want to talk about a multimap, they use a preimage ƒ⁻¹. Take the kernel, for example—it’s “everything that gets thrown in the trash”—so if multiple things get thrown in the trash, the preimage ƒ⁻¹(0) names them all at once.

(linear subspace / quotient / ring morphism kernel)

So this is how they can associate a bunch of stuff to one point. For example, every point on a manifold gets a tangent space—a vector space, which is a lot bigger than just one point.

That would be a problem for 1-to-≥1 functions, so the mathematicians need to turn the arrows around. That’s why they define the projection map π:E→B to send a ton of things e∈E onto that one point b∈B (i.e. the point p∈M of the manifold).
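Here is the same toy dict again (hypothetical labels, my own sketch), this time grabbed by the head: the preimage ƒ⁻¹(b) collects every tail pointing at b, which is exactly the fibre that a projection π:E→B sits over.

    from collections import defaultdict

    f = {'a0': 'b14', 'a1': 'b14', 'a2': 'b7', 'a3': 'b14'}   # A -> B

    fibres = defaultdict(set)
    for a, b in f.items():
        fibres[b].add(a)           # one bucket per head

    print(fibres['b14'])           # {'a0', 'a1', 'a3'}: many things over one point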




isomorphismes:

What are sites and sheaves?

  • Internal Sieve is a downward-closed collection of open subsets of a topological space
  • “Downward-closed” means every smaller open set wholly contained in any member of the sieve is also in the sieve—and smaller ones wholly contained in those are also in the sieve, and so on (a minimal check of this condition is sketched after this list).
  • External Sieve is the following:
  • Map the category 𝓞(𝐗) of open sets on 𝐗 into the category of contravariant functors from 𝓞(𝐗) to Set: 𝓞(𝐗) → {𝓞(𝐗)^opp, Set} (the category of Set-valued presheaves on 𝐗)
  • an individual open set ↦ Hom_{𝓞(𝐗)}(—, the individual open set)
  • That is: embedding the stuff 𝐗 we started with in a gigantic category (presheaves)
  • The embedding relates the stuff we started with (𝐗), its internal organisation (𝓞(𝐗)), and the relationships of its parts to each other (Hom)
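Here is the promised minimal check of downward-closure, using a made-up three-point space (mine, purely illustrative; the names opens, is_sieve, sieve_on_12 are all hypothetical):

    # Open sets of a tiny topology on {1, 2, 3}.
    opens = [frozenset(), frozenset({1}), frozenset({2}),
             frozenset({1, 2}), frozenset({1, 2, 3})]

    def is_sieve(S, opens):
        """Downward-closed among the opens: V ⊆ W with W ∈ S implies V ∈ S."""
        return all(V in S for W in S for V in opens if V <= W)

    sieve_on_12 = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
    not_a_sieve = {frozenset({1, 2})}          # misses {1}, {2}, and the empty set

    print(is_sieve(sieve_on_12, opens))        # True
    print(is_sieve(not_a_sieve, opens))        # False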

(for mimrir)