For those not in the know, here’s what mathematicians mean by the word “measurable”:
- The problem of measure is to assign a real size ≥ 0 to a set. (The points are not necessarily contiguous.) In other words, to answer the question:
How big is that?
- Why is this hard? Well, just think about the problem of sizing up a contiguous subinterval of ℝ between 0 and 1:
- It’s obvious that [0, .2] is .2 long and that [0, .8] has a length of .8.
- I don’t know what the length of [¼√2, √π/3] is, but … it should be easy enough to figure out.
- But real numbers can go on forever:
- Most of them (the transcendentals) we don’t even have words or notation for.
- So each of these real numbers can have a potentially infinite number of digits — which is essentially why the real numbers are so f#cked up — and therefore ∃ an infinitely infinite number of numbers just between 0% and 100%.
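The easy cases above really are easy: the length of a contiguous interval [a, b] is just b − a. A quick illustrative sketch (mine, not from the post — `interval_length` is a made-up name):

```python
from math import sqrt, pi

def interval_length(a, b):
    """Length (measure) of the contiguous interval [a, b]."""
    return b - a

# The obvious cases:
print(interval_length(0, 0.2))  # 0.2
print(interval_length(0, 0.8))  # 0.8

# The not-so-obvious one from above: [¼√2, √π/3]
print(interval_length(sqrt(2) / 4, sqrt(pi) / 3))
```

Even the weird-looking interval is no trouble — the endpoints are just numbers, and the length is still the difference. The hard part, as the rest of this section explains, is everything that *isn't* a contiguous interval.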
Yeah, I said infinitely infinite, and I meant that. More real numbers exist in-between 0 and 1 than there are atoms in the universe. There are more real numbers just in that teensy sub-interval than there are integers (and there are ∞ integers).
In other words, if you filled a set with all of the things between 0 and 1, there would be infinitely many things inside. And not a nice, tame infinity either. This infinity is an infinity that just snorted a football helmet filled with coke, punched a stripper, and is now running around in the streets wearing her golden sparkly thong and brandishing a chainsaw:
Talking still of that particular infinity: in a set-theoretic continuum sense, ∃ infinite number of points between Barcelona and Vladivostok, but also an infinite number of points between my toe and my nose. Well, now the simple and obvious has become not very clear at all!
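That “more reals than integers” claim is Cantor’s, and his diagonal argument is concrete enough to sketch in code: given any listing of decimal expansions, build a number that differs from the n-th entry in its n-th digit, so it can’t appear anywhere in the list. A toy sketch under my own naming (the post doesn’t give code), run here on a finite prefix of a listing:

```python
def diagonal_escape(expansions):
    """Given a list of decimal-digit strings (reals in [0,1)),
    return a decimal that differs from the n-th entry in its
    n-th digit -- so it appears nowhere in the list."""
    out = []
    for n, digits in enumerate(expansions):
        d = int(digits[n])
        # Pick a different digit; avoid 0 and 9 so we don't
        # accidentally build an alias like 0.4999... = 0.5
        out.append("5" if d != 5 else "4")
    return "0." + "".join(out)

listing = ["1415926", "7182818", "4142135", "5772156",
           "6180339", "1234567", "7654321"]
missing = diagonal_escape(listing)
# missing differs from listing[n] at digit n, for every n
```

The punchline: this works against *any* attempted listing, so no integer-indexed list can exhaust the reals — the interval is uncountable.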
So it’s a problem of infinities, a problem of sets, and a problem of the continuum being such an infernal taskmaster that it took until the 20th century for mathematicians to whip-crack the real numbers into shape.
- It’s obvious that, if you can define “size” on the [0,1] interval, you can define it on the [−535, 19^19] interval as well, by extension.
If you can’t even define “size” on the [0,1] interval — how do you think you’re going to define it on all of ℝ? Punk.
- A reasonable definition of “size” (measure) should work for non-contiguous subsets of ℝ such as “just the rational numbers” or “all solutions to cos² x = 0” (they’re not next to each other) as well.
Just another problem to add to the heap.
- Nevertheless, the monstrosity has more-or-less been tamed. Epsilons, deltas, open sets, Dedekind cuts, Cauchy sequences, well-orderings, and metric spaces had to be invented in order to bazooka the beast into submission, but mostly-satisfactory answers have now been obtained.
It just takes a sequence of 4-5 university-level maths classes to get to those mostly-satisfactory answers.
One is reminded of the hypermathematicians from The Hitchhiker’s Guide to the Galaxy who time-warp themselves through several lives of study before they begin their real work.
That doesn’t cover the measurement of probability spaces, functional spaces, or even more abstract spaces. But I don’t have an equally great reference for those.
Oh, I forgot to say: why does anyone care about measurability? Measure theory is just a highly technical prerequisite to true understanding of a lot of cool subjects — like complexity, signal processing, functional analysis, Wiener processes, dynamical systems, Sobolev spaces, and other interesting and relevant such stuff.
It’s hard to do very much mathematics with those sorts of things if you can’t even say how big they are.