Posts tagged with distance

[Karol] Borsuk’s geometric shape theory works well because … any compact metric space can be embedded into the “Hilbert cube” `[0,1] × [0,½] × [0,⅓] × [0,¼] × [0,⅕] × [0,⅙] ×  …`

A compact metric space is thus an intersection of polyhedral subspaces of n-dimensional cubes …

We relate a category of models A to a category of more realistic objects B which the models approximate. For example polyhedra can approximate smooth shapes in the infinite limit…. In Borsuk’s geometric shape theory, A is the homotopy category of finite polyhedra, and B is the homotopy category of compact metric spaces.

— Jean-Marc Cordier and Timothy Porter, Shape Theory

(I rearranged their words liberally but the substance is theirs.)

in `R` do: `prod( 1/(1:1e5) )` to see the volume of Hilbert’s cube → 0. (The volume is the product of the side lengths `1 × ½ × ⅓ × …`, which underflows to zero.)

Topology is appropriate for qualitative rather than quantitative properties, since it deals with closeness rather than distance.

It is also appropriate where distances exist, but are ill-motivated.

These approaches have already been used successfully for analyzing:

• physiological properties in diabetes patients
• neural firing patterns in the visual cortex of macaques
• dense regions in ℝ⁹ of 3×3 pixel patches from natural [black-and-white] images
• screening for CO₂-adsorbent materials

Michi Johansson (@michiexile)

(Source: blog.mikael.johanssons.org)

## Why is the slope of a perpendicular line flipped over and switched in sign?

Oh! This one only took me 17 years or so to figure out. This was a “fact” I had committed to memory in school but never thought about why.


From The Symplectization of Science by Mark Gotay and James Isenberg:

There are some connections to circles and homogeneous coordinates (`v/‖v‖`) but let’s leave those for another time.

Gotay & Isenberg’s exposition using the metric makes it clear that the
`/‖v‖` part of the definition of `cosine` isn’t where the right-angle concept comes from. It comes from the `v₁w₁ + v₂w₂`.

$$\text{Given two vectors } \vec{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \text{ and } \vec{w} = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} \text{, the metric (which is going to encode geometric information) is: } \boxed{g = v_1 \cdot w_1 + v_2 \cdot w_2}$$


So if the slope of my starting line is `m`, why is the slope of its perpendicular line `−1/m`?

First I could draw some examples.

I drew these with http://www.garrettbartley.com/graphpaper.html which is a good place to count out the “rise over run” and “negative run over rise” `Δx` & `Δy` distances to make sure they really do look perpendicular.

The length and the (affine or “shift”) positioning of perpendicular line segments doesn’t matter to their perpendicularity. So to make life easier on myself I’ll centre everything on zero and make the segments equal length.

Let’s check the metric formula on an example. Say my first vector `v` is `(+1,+1)` (one to the right and one up) and my second vector `w` goes one to the right and one down: `(+1,−1)`. Then the metric would do:

`+1 • +1` (horizontal) `+ +1 • −1` (vertical)

which cancels to zero.


What if it were a slope of 9.18723 or something I don’t want to think about inverting?

This is a case where it’s probably easier to think in terms of abstractions and deduce, rather than using imagination in the conventional way.

If I went over `+a` steps to the right and `+b` steps up (slope = `b/a`), then the metric would do:

`a•? + b•¿`

What goes in the blanks? If I plugged in `(? ← −b, ¿ ← a)` or `(? ← b, ¿ ← −a)`, the metric would definitely always cancel.

And in either of those cases, the slope of the question marks (second line) would be `−a/b`.

So the multiplicative inverse (flipping) corresponds to swapping terms in the metric so that the two parts anti-match. And the additive inverse (sign change) makes the anti-matched pairs “fold in” and zero each other out (rather than amplifying one another).
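The whole cancellation argument can be checked numerically. A minimal sketch in Python (the `perp` helper is mine, not from the post):

```python
def dot(v, w):
    # the metric g = v1*w1 + v2*w2
    return v[0] * w[0] + v[1] * w[1]

def perp(v):
    # swap the components and negate one: (a, b) -> (-b, a)
    return (-v[1], v[0])

v = (1.0, 9.18723)   # slope 9.18723 / 1 -- no need to invert it by hand
w = perp(v)          # slope -1/9.18723 (up to scale)
print(dot(v, w))     # 0.0 -- the terms anti-match and cancel
```

The terms `1 × (−9.18723)` and `9.18723 × 1` are exact negatives of each other, so the metric returns exactly zero, not just approximately.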

"The Chinese Proof" of the Pythagorean theorem (the little orange square is `a²`, the medium orange square is `b²`, and the large orange square is `c²`).

Harald Hanche-Olsen:

The righthand picture above appears in the Chou pei suan ching 周髀算經 (ca. 1100 B.C.), for the special (3,4,5) pythagorean triple….

…the earliest known proof of Pythagoras is given by Zhoubi suanjing (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) (c. 100 B.C.E.-c. 100 C.E.)

[T]his proof, with the exclamation ‘Behold!’, is due to the Indian mathematician Bhaskara II (approx. 1114–1185) …

Jöran Friberg … presented convincing evidence that the … Babylonians were aware of the Pythagoras theorem around 1800 B.C.E.

Online Zhoubi Suanjing:

It’s wrong to say that faith and science are opposites,

• not only because that’s playing into the presentist viewpoint of American fundamentalists fighting to teach creationism in science class versus `/r/atheism`, but because
• scientists don’t choose their research programmes at random. They “have a hunch” — or an aesthetic sense impels them. But staking your career on the belief that a particular line of investigation will be fruitful, both in a scientific sense and in a value-to-humanity sense, requires stronger language than merely “I think so” or “I have a hunch”. I think it’s fair to say that scientists have faith in their research programmes.

I’ll give an example of a research programme that I have faith in. Mostly unjustified faith, but I believe it nonetheless. (I could be wrong, of course — but still I can’t approach the world with no beliefs whatever — although some views of rationality would suggest that this unlivable mental life would be the most honourable way to live.)

1. I believe there’s something wrong with economic theory. Call it a dark age on the way to enlightenment, call it an obsession with equilibrium-and-optimisation, call it the undue influence of Milton Friedman essays on the deeper, unspoken beliefs of economists vis-à-vis the effect of careful studies or creative mathematics. ∃ many ways to describe the malaise | muddle | distraction | not-even-really-sure-what-to-call-it.

This is not based on "Economists didn’t foresee the financial crisis!" or a critique of the Washington Consensus. It’s not aimed at Objectivists or people who don’t understand what a model is, but rather at real, non-crazy economists. It’s more based on statements like "Economics is in a terrible state"—Ariel Rubinstein. Or questions like: since information and search costs and other such things dominate the f**k out of the normal incentives-based thinking we use to armchair-speculate—then what is even the use of the partial-equilibrium intuitions or DSGE or anything like that?

I also don’t think this idea would necessarily change the focus to more sociological or historical or cultural issues (like economists ignoring how utility functions come to be, or larger questions about history and culture and family norms … I actually think a lot of economists are already prepared to focus on those issues, they just need to make them mathematically tractable). Rather my gut instinct tells me that this research programme is “far upstream”—redirecting the river by diverting the water long before it becomes a rushing channel (sometimes called the MSNBC channel) that’s too powerful to redirect.
2. I don’t know enough about sheaf theory or cohomology to say for certain whether they can be used for this or that. It’s just my spider sense tingling when I look at the ideas there. Most of the applications I’ve read about are to either physics problems or logic, or to higher mathematics itself (algebraic topology, algebraic geometry, topological analysis, … stuff that’s named as (adjective = way of thinking + noun = subject matter)).

That said I think there’s something to be found here in terms of new viewpoints on economic questions.
3. Consider the Leontief input-output matrix (Cosma Shalizi recently wrote a lot about it in his book review of Red Plenty on the Crooked Timber blog).

Mathematically savvy people know that every graph can be encoded as a matrix, and furthermore, with the right base corpus of entry values and some knowledge of “characters”, we can encode one-directional (directed) graphs as well.
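As an illustration of that encoding, here is a made-up three-node directed graph as an adjacency matrix; the asymmetry of the matrix is what records direction, and powers of the matrix count directed walks. The graph and the numbers are invented for illustration:

```python
# Adjacency matrix of a hypothetical directed graph on three nodes:
# 0 -> 1, 1 -> 2, 2 -> 0 (a directed cycle).
# Entry A[i][j] = 1 means there is an edge from node i to node j.
# The matrix need not be symmetric; that asymmetry encodes direction.
A = [
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
]

def matmul(A, B):
    # plain matrix multiplication; (A @ A)[i][j] counts
    # directed walks of length 2 from node i to node j
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
print(A2)   # length-2 walks: 0 -> 2, 1 -> 0, 2 -> 1
```

A weighted version (replace the 1s with transaction volumes) is exactly the shape of data a Leontief-style matrix holds.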

4. What’s the [putative] application to economics? Well instead of thinking about all this stuff we can’t observe or interpret yet—utility curves, willingness-to-pay outside the lab, valuations, etc.

(we don’t even know experimentally if there is such a thing as a valuation—and it’s kind of dubious—yet we go on as if these things exist because they’re axiomatic keystones of the only tractable theory around). Instead of continuing to rely on the theoretical stuff handed down from Bentham, let’s think about all the things we can measure—like transactions—and ask how we can use mathematics to make theories about those things and possibly infer back to the stuff we really want to know, like is capitalism making the world a better place.
5. Transactions are one place to start. Prices (like the billion price project) are another. And the web now generates huge amounts of text—maybe we can do something with that. But let’s start by going back to the Leontief matrix.
6. In the formulation I learned in school, there’s a fixed time unit—like a year—and each dimension corresponds to an exactly comparable item class—so like a three button shirt and a four button shirt would be separate dimensions, but once we finally get down to a dimension, everything along that dimension is equivalence-classed.
7. I can see a few things missing from that picture.

First of all, I want to be able to “zoom in” to different timescales and have my matrix change in the sensible way. In other words I want a mathematical object that operates on multiple timescales at once, with a coherent, consistent translation between the Leontief matrix of October 17th between `19:29` and `21:13 GMT`, and the Leontief matrix of `1877 A.D.` I believe things floating around sheaf theory are the place to look for that.
8. Second of all, I want neighbourhood relationships (and even distances) between the items—so that a three-button blue blouse is “closer to” a four-button blue blouse than it is to a ferret named Bosco the Great sold at the Petco in Moravia, Illinois. So something from algebraic topology is necessary here.

Maybe a tie-in to “lumpy human capital”—the most important kind of good, because it’s what humans use to sustain themselves and help others. It’s acknowledged to be “lumpy” in that ten years of studying economic theory doesn’t prepare you to be a laundress, or even necessarily to trade OTC derivatives. But we also know that in terms of neighbourhood relationships, studying economic theory is “closer to” finance than to farming. (Although most economists are not as close to finance as seems to be generally thought.)
9. Both of those points are mostly æsthetic problems, or issues with foundations—philosophical gripes that could be solved in the same way the transition from cardinal utility to ordinal utility was made, even though I don’t think the outcomes of ordinal-utility theory were very different.
10. Third, I want my matrix to be time-varying or dynamical. New trade partners come into existence, some businesses shutter their doors and file their dissolution papers, others are broken up and sold in parts, and even with an existing vendor I am not going to do the same business each year. Some of these numbers are available in `XBRL` format because public companies sometimes do business with each other.
11. Fourth, and here is where I think it would be possible to get new ideas of things to measure. If I have some kind of dynamical, multi-level, “coloured” graph of all the trades in all the currencies and all the goods types in the world over the right number corpus, then I have a different mathematical conception of the world economy.

I can draw boundaries like you would see in a cell complex and denote “a community” or “a municipality” or “a neighbourhood” or “a province” and when I perturb those boundaries some rationality conditions need to hold.

Taking this viewpoint and applying only the maths that’s already been invented, people have already found a lot of invariants on graphs—cohomology invariants, generalisations of Gauss’ divergence theorem, different calculi on the interesting objects (like Fox calculus)—and applying those theories to the conception of the super-duper Leontief matrix, we might find new things to measure, or new ways to make different sense of some measurements we already have.

If you remember the Perelman quote about calculating how fast Christ would have to run on the water so as not to sink, or various nifty cancellations in the vacuum states of a gnarly physics theory — that is the kind of thing I’m thinking could be useful in theorising new invariants to measure from an überdy-googly Leontief trade matrix.

Or from www.math.upenn.edu/~ghrist/preprints/ATSN.pdf we learn: “The Euler characteristic `χ` of any compact triangulable space is independent of the particular [finite] simplicial structure imposed, as well as being an invariant of the topological type.”

Yum. Tell me more.

For example we know some Gaussian-divergence ∑ relations that happen within the grey box of a firm—all the internal transactions have to add up to what’s written on the accounting statements. But what about applying this logic to a group of three firms that circularly trade with each other and also each has a composite edge (with different weights) adding up all of their trades to “outside the cycle”?

Seems like some funky abstract nonsense could simplify problems like that and, crucially, tell us invariants that give us new ideas of what to measure.
12. Fifth, this is not really related. I think the concept of symplecticity from physics nicely captures the essence of what tradeoffs are about.

But I’m still looking into this—I won’t say it definitively; it just seems like another fruit-lined avenue.
13. There are tie-ins to categories, causal diagrams, and other stuff wherein I may be just lumping together a lot of seemingly-related ideas.

So I’m not sure if looking at a super-duper Leontief matrix like the one described above would have nice tie-ins to causal graphs / structural equations à la Judea Pearl, but hey, it might. At least one tie-in I can already think of: the record of goods actually transacted doesn’t tell you enough, because there are threats and possible counterfactuals and CVs that are sent in but get ignored or rejected, or smiles and pats on the back—a kind of transaction that influences economic outcomes without being tied directly to money or a goods transaction.
14. Why go for even more abstraction, even “more” maths, when so many of the critiques of economics say it’s become too mathematical? Simple answer. More abstract mathematics requires fewer assumptions. So conclusions drawn using those tools are more likely to actually hold true in the real world. For example, is it more plausible that someone’s utility increases linearly, or monotonically with good X? Monotonically of course is much more realistic, although we could infer much more if linear were the case. But what’s the point of making easier inferences if they’re wrong because the assumptions don’t hold? Hence the interest in more general, more abstract mathematics.

Now, realistically? The scale of investigating this “hunch” in terms of concrete steps that lead A → B → C → D is way beyond what I will probably accomplish. Even if I dropped all side interests and all work, it would take at least a couple of years to get publishable material out of these hunches.

But that’s exactly my point about science. I was told by a Zazen practitioner that this is a kind of Zen-like paradox. In order to investigate the premise that there are useful applications of sheaves & cohomology to economic theory, I first have to accept the premise that there are probably useful applications of sheaves & cohomology to economic theory.

Glancing at the text above you can probably tell that my thoughts on this issue are formless, probably mischaracterising the mathematics I’ve only heard about but don’t yet understand. My mental conception of these things, if it could be understood via a perfect future theory of mental representation and fMRI snapshots of my mind thinking about this stuff, would be some mixture of formless and inaccurate.

So the important decisions (decisions of major direction, not adjustments or effort) are made amongst the formless, but can only be harvested as a form. Like the beginner’s mind, with its vagueness and formlessness, giving way to the expert’s mind, its definition, choateness, and exactitude. (Form and formlessness being complementary in the QM | vNA sense.) I think that’s Zen as well.

## Distance between Words

Which pair is more different?

• `keyboard | keybard`
• `keyboard | keybpard`
• `keyboard | keebored`

Of course in mathematics we get to decide among many definitions of size and there is no “correct” answer. Just what suits the application.

I can think of two approaches to defining distance measures between words:

• sound-based — `d(Hirzbruch, Hierzebrush) < d(Hirzbruch, Hirabruc)`
• keyboard-based — `d(u,y) < d(u,o)`

Reading online fora (including YCombinator, tsk tsk) the only distance functions I hear about are the ones with Wikipedia pages: Hamming distance and Levenshtein distance.

These are defined in terms of how many word-processing operations are required to correct a mis-typed word.

• How many letters do I need to insert?
• How many letters do I need to delete?
• How many letter-pairs do I need to swap?
• How many `vim` keystrokes do I need?

and so on—those kinds of ideas.
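Levenshtein distance counts exactly those word-processing operations: insertions, deletions, and substitutions. A minimal sketch of the classic dynamic programme (Python; to get the sound-based or keyboard-based distances proposed above, the flat unit `cost` would be replaced with a letter-pair-dependent cost):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (classic two-row DP)."""
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                      # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein("keyboard", "keybpard"))   # 1
print(levenshtein("keyboard", "keebored"))   # 3
```

Note that under this definition the phonetically-plausible `keebored` is three times “farther” from `keyboard` than the fat-finger `keybpard`, which is precisely the kind of mismatch with intuition that motivates sound- and keyboard-aware alternatives.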

#### Inter-letter Interaction Effects

If we could get conditional probabilities of various kinds of errors — like

• Am I more likely to mis-type `ous` (as in `varoius`) while writing `various`, `precarious`, or `imperious`? There could be some kind of finger- or hand-based reason, like if I’ve just been using right-handed fingers near my `ous` fingers, or I have to angle my hand weirdly in order to hit the previous couple of strokes in some other word.
• Am i more likely to mis-type `reflexive` as `reflexible` when the document topic is gymnastics?
• Am i more likely to make a typo in google if I’m typing fast?
• What if you can catch me mis-placing my hand on the homerow/ `how dp upi apwaus fomd tjos crazu stiff?` That’s almost like just one error. (It’s certainly less distance from the real sentence than a random string of characters of equal length.)
• Or if I click the mouse in the wrong place before correcting my spelling? `d(Norschwanstein, Ndorschwanstein)` or `d(rehabilitation, rehabitatiilon)`
• Am i more likely to isnert a common cliche rather than what i actually mean after a word that begins a common cliche/

#### A Bit of Forensics

EDIT: Once I got about halfway throguh this article, I stopped correcting my typoes, so you can see the kind that I make. I was typing on a flat keyboard, asymmetrically holding a smallish non-Mac laptop (bigger than an Eee) with my elbows out, head down — except when I type fast and interchange letters, with perfect posture, “playing the piano” with my ten finger muscles rather than moving my wrists — at an ergonomic keyboard with a broken M. I actually don’t recall which way i wrote this article. I may hav eeven written it in shifts.

Here are some nice ones as well. Look at the comments section. By the posting times (and text) you can see that the debate was feverish—no time for corrections and the correspondents were steamed up emotionally. Their typoes really have personalities—for example Kien makes a lot of errors with his right middle finger moving up. (`did → dud`, `is → us, promoted → promotied, inquisition → iquisition`, `mean → meaqn`,` Church → Chruch`,` because → becuase`,` Copernican → Ceprican`, `your → you`, `clearly → cleary`) but also some errors of spelling with no sound-distance (`Pythagoras → Pythagorus`) and uses both the sounds `disingenious` and `disingenuous`. Letter-switching, ilke I do, is common; a few fat-fingers (`meaqn`) or forgotten letters, but this `iou` stuff seems unusual and possibly characteristic of something.

Other participants make different sorts of errors, or at least with different frequencies (they’re relatively more likely to omit or switch letters than to use the wrong letter, for example). But let’s just focus on Ken because so many errors of the typoes are localised to that right middle finger. I wonder if Ken has a problem with that finger? Or maybe his keyboard is shaped in such a way that it’s difficult to correctly strike those keys specifically? (Maybe certain ergonomic keyboards would fit this — or an Eee Pc with the elbows out and “pigeon-toed” hands. But why would the errors then be localised to the right middle finger? It’s more mobile than pinky & ring fingers and we’re not taught to stick it to the homerow like the index finger.) I rule out the theory that his right hand hovers above the keyboard rather than sitting on the homerow because then he should make similar errors with `yuiop` and maybe `bnm,.hjkl;` as well. Also, notice that he doesn’t make comparable errors with `ewr` as with `iou`. How do we know he sits symmetrically? I have a tough time deciphering why there are more errors with that finger on a first read-through.

We could find more of Ken’s writing here and see how he types when he’s less agitated. I bet there are no `Ceprican`'s there but `Pythagorus` would still be. As for `Chruch`? Hmmm. Don’t know.

#### Big Data vs Models

Now the big-data-ists (the other half of Leo Breiman’s partition of statistical modellers vs data miners) would probably say “Google has a jillion search results including measurements of people correcting themselves and including time series of the letters people type — so just throw some naive Bayes at that pile and watch it come to the correct answer!” Maybe they’re right.

If someone wants to mess around with this stuff with me — leave me a comment. We could grab tweets and analyse typoes within differnet text-…[by which tool] was used to send the tweet. For example the Twitter website means it was keyboard-typed, certain mobile devices have Swype, other errors we might be able to guess tha tis …[that it’s] a T9 mobile keyboard.

• Could we tell if a person is left-handed by their keyboard mistkaes?
• Could we guess their education level/
• Could we tell what tweeting platform they used by their errors rather than by
• Could we tell where they’re from? Or any other stalky information that advertisers/HR want to know but web browsers want to hide about themselves? (Say goodbye to mandatory drug testing in the workplace, say hello to your boss getting an email when a statistics company that monitors your twitter feed guesses you smoked pot last night based on the spelling and timing of your Facebook posts.)

[G]eometry and number[s]…are unified by the concept of a coordinate system, which allows one to convert geometric objects to numeric ones or vice versa. …

[O]ne can view the length ❘AB❘ of a line segment AB not as a number (which requires one to select a unit of length), but more abstractly as the equivalence class of all line segments that are congruent to AB.

With this perspective, ❘AB❘ no longer lies in the standard semigroup ℝ⁺, but in a more abstract semigroup (the space of line segments quotiented by congruence), with addition now defined geometrically (by concatenation of intervals) rather than numerically.

A unit of length can now be viewed as just one of many different isomorphisms Φ: ℒ → ℝ⁺ between ℒ and ℝ⁺, but one can abandon … units and just work with ℒ directly. Many statements in Euclidean geometry … can be phrased in this manner.

(Indeed, this is basically how the ancient Greeks…viewed geometry, though of course without the assistance of such modern terminology as “semigroup” or “bilinear”.)
Terence Tao

(Source: terrytao.wordpress.com)

## Manifolds, Star Fox, and Self-versus-Other

Branes, D-branes, M-theory, K-theory … news articles about theoretical physics often mention “manifolds”.  Manifolds are also good tools for theoretical psychology and economics. Thinking about manifolds is guaranteed to make you sexy and interesting.

Fortunately, these fancy surfaces are already familiar to anyone who has played the original Star Fox—Super NES version.

In Star Fox, all of the interactive shapes are built up from polygons.  Manifolds are built up the same way!  You don’t have to use polygons per se, just stick flats together and you build up any surface you want, in the mathematical limit.

The point of doing it this way, is that you can use all the power of linear algebra and calculus on each of those flats, or “charts”.  Then as long as you’re clear on how to transition from chart to chart (from polygon to polygon), you know the whole surface—to precise mathematical detail.

Regarding curvature: the charts don’t need the Euclidean metric.  As long as distance is measured in a consistent way, the manifold is all good.  So you could use hyperbolic, elliptical, or quasimetric distance. Just a few options.
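For instance, a chart could carry the hyperbolic metric of the upper half-plane instead of the Euclidean one. A small sketch using the standard half-plane distance formula (the specific points are my own examples):

```python
import math

def hyperbolic_distance(p, q):
    """Distance between two points of the upper half-plane model (y > 0),
    one standard non-Euclidean metric a chart could carry."""
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1 + ((x2 - x1)**2 + (y2 - y1)**2) / (2 * y1 * y2))

# Climbing from height 1 to height 2 covers the same hyperbolic
# distance as climbing from height 100 to height 200: log(2).
print(hyperbolic_distance((0, 1), (0, 2)))       # ~0.6931
print(hyperbolic_distance((0, 100), (0, 200)))   # ~0.6931
```

Same coordinates, wildly different notion of “how far”, yet still perfectly consistent, which is all the manifold asks of a chart.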


Manifolds are relevant because according to general relativity, spacetime itself is curved.  For example, a black hole or star or planet bends the “rigid rods” that Newton & Descartes supposed make up the fabric of space.

In fact, the same “curved-space” idea describes racism. Psychological experiments demonstrate that people are able to distinguish fine detail among their own ethnic group, whereas those outside the group are quickly & coarsely categorized as “other”.

This means a hyperbolic or other “negatively curved” metric, where the distance from 0 to 1 is less than the distance from 100 to 101.  Imagine longitude & latitude lines tightly packed together around “0”, one’s own perspective — and spread out where the “others” stand.  (I forget if this paradigm changes when kids are raised in multiracial environments.)

Experiments verify that people see “other races” like this. I think it applies also to any “othering” or “alienation” — in the postmodern / continental sense of those words.


The manifold concept extends rectilinear reasoning familiar from grade-school math into the more exciting, less restrictive world of the squibbulous, the bubbulous, and the flipflopflegabbulous.

B*tchin’ six-dimensional 6-cube. The rainbow colours and glass panes really help this visualisation.

## Examples of 6-dimensional things

If it’s hard to envision 6 dimensions, consider this: the possible tunings of a guitar constitute a 6-dimensional space. You can tune to EADGBE (standard), DADGAB, drop-D, DADGAD, GCCGCC, BEBEBE, CGCFGE, and many others.

(If you consider notes an octave apart to be equivalent, then we’re talking about a quotient space, each dimension being topologically a loop. But that’s just one system of musical valuation — like the winding number of a complex path, it’s totally apparent that high octaves do not sound exactly the same as low ones, and doing a 720° is more impressive than a 360°. If the abstract “loop” is unwound, there is a highest note (“1”) and a lowest note (“0”) that can effectively be played on each string (dimension).)
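To make the 6-D tuning space concrete, here is a sketch that represents each tuning as a 6-vector, one coordinate per string, measured in semitones. The MIDI-style note numbering and the choice of Euclidean distance are my assumptions, not the post’s:

```python
import math

# Each guitar tuning is a point in 6-D space: one coordinate per string,
# in MIDI note numbers (semitones above C-1).
tunings = {
    "standard EADGBE": (40, 45, 50, 55, 59, 64),
    "drop-D":          (38, 45, 50, 55, 59, 64),
    "DADGAD":          (38, 45, 50, 55, 57, 62),
}

def dist(t1, t2):
    # ordinary Euclidean distance between two tunings
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))

# drop-D is one string detuned by two semitones from standard...
print(dist(tunings["standard EADGBE"], tunings["drop-D"]))   # 2.0
# ...while DADGAD moves three strings, so it sits farther away
print(dist(tunings["standard EADGBE"], tunings["DADGAD"]))   # ~3.46
```

With the octave-equivalence of the parenthetical above, each coordinate would instead live on a loop (semitones mod 12) and the straight-line distance would be replaced by a toroidal one.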


You can also think about 6-D as being the six columns in a table or array. For example the { RBI, on-base percentage, fielding errors, stolen bases, sacrifice flies, and home-runs } for a number of baseball players.

Or you can think about six security prices moving in parallel, from bell to bell at the NYSE.

Again the lowest price is called “0” and the highest is called “1”. This renaming places the jumping Brownian motions inside a secteract. So instead of six 1-D paths it’s one 6-D path:
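A minimal sketch of that renaming, with made-up random-walk “prices” (everything here is invented for illustration): rescale each 1-D series so its low is 0 and its high is 1, then zip the six series into a single path of 6-vectors inside the secteract.

```python
import random

random.seed(0)

# six hypothetical intraday price series (1-D paths), as random walks
series = []
for _ in range(6):
    p, path = 100.0, []
    for _ in range(50):
        p += random.gauss(0, 1)
        path.append(p)
    series.append(path)

def unitize(path):
    # rename the day's low "0" and the day's high "1"
    lo, hi = min(path), max(path)
    return [(x - lo) / (hi - lo) for x in path]

unit = [unitize(s) for s in series]

# one 6-D path inside the secteract: a list of 6-tuples
path6 = list(zip(*unit))
print(len(path6), len(path6[0]))   # 50 time steps, 6 coordinates each
```

Every coordinate of every point now lies in `[0, 1]`, so the whole trading day is a single curve wiggling through the unit 6-cube.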


Enough examples of 6-dimensional things. Back to the 6-cube itself.

Let’s make one.

The bounds of the secteract (its “corners”? Or should I say its 6-corners?) come from filling in each of six slots with either 0 or 1.

There are 64 ways to do this. (two options for each of six slots = 2^6.) For example (0,0,0,0,0,1) is one, (0,0,0,0,1,0) is another, and (0,1,1,0,1,0) is a third out of the 64.

The R programming language was nice enough to write out all of the vertices for me without my having to type much. Here they are:

```
> booty = c(0,1)
> expand.grid(booty, booty, booty, booty, booty, booty)  # rockin everywhere
```

```   Var1 Var2 Var3 Var4 Var5 Var6
1     0    0    0    0    0    0
2     1    0    0    0    0    0
3     0    1    0    0    0    0
4     1    1    0    0    0    0
5     0    0    1    0    0    0
6     1    0    1    0    0    0
7     0    1    1    0    0    0
8     1    1    1    0    0    0
9     0    0    0    1    0    0
10    1    0    0    1    0    0
11    0    1    0    1    0    0
12    1    1    0    1    0    0
13    0    0    1    1    0    0
14    1    0    1    1    0    0
15    0    1    1    1    0    0
16    1    1    1    1    0    0
17    0    0    0    0    1    0
18    1    0    0    0    1    0
19    0    1    0    0    1    0
20    1    1    0    0    1    0
21    0    0    1    0    1    0
22    1    0    1    0    1    0
23    0    1    1    0    1    0
24    1    1    1    0    1    0
25    0    0    0    1    1    0
26    1    0    0    1    1    0
27    0    1    0    1    1    0
28    1    1    0    1    1    0
29    0    0    1    1    1    0
30    1    0    1    1    1    0
31    0    1    1    1    1    0
32    1    1    1    1    1    0
33    0    0    0    0    0    1
34    1    0    0    0    0    1
35    0    1    0    0    0    1
36    1    1    0    0    0    1
37    0    0    1    0    0    1
38    1    0    1    0    0    1
39    0    1    1    0    0    1
40    1    1    1    0    0    1
41    0    0    0    1    0    1
42    1    0    0    1    0    1
43    0    1    0    1    0    1
44    1    1    0    1    0    1
45    0    0    1    1    0    1
46    1    0    1    1    0    1
47    0    1    1    1    0    1
48    1    1    1    1    0    1
49    0    0    0    0    1    1
50    1    0    0    0    1    1
51    0    1    0    0    1    1
52    1    1    0    0    1    1
53    0    0    1    0    1    1
54    1    0    1    0    1    1
55    0    1    1    0    1    1
56    1    1    1    0    1    1
57    0    0    0    1    1    1
58    1    0    0    1    1    1
59    0    1    0    1    1    1
60    1    1    0    1    1    1
61    0    0    1    1    1    1
62    1    0    1    1    1    1
63    0    1    1    1    1    1
64    1    1    1    1    1    1

```

And there you have it: an electronic realisation of a secteract. Just as real as a Polyworld life-form.

## Noncommutative distances between industries

The distance from your house to the grocery would seem to have to equal the distance back, but 20th-century mathematicians explored circumstances where this need not be the case.

Very small-scale physics is non-commutative in some ways and so is distance in finance.

But non-commutative logic isn’t really that exotic or abstract.

• Imagine you’re hiring. You could hire someone from the private sector (v), charity sector (c), or public sector (b). It’s easier for v managers to cross over into b | c than for c | b managers to cross over into v.

So private is close to public, but not the other way around. Or rather, v is closer to b than b is to v: δ| v, b | < δ| b, v |. (Same for δ| v, c |.)

• Perhaps something similar is true of management consulting, or i-banking? Such is the belief, at least, of recent Ivy grads who don’t know what to do but want to “keep their options open”.

This might be more of a statement about the average distance to other industries ∑ᵢ δ| consulting, xᵢ | being low, rather than a comparison between δ| consulting, x | and δ| x, consulting |. Can you cross over from energy consulting to actual energy companies just as easily as the reverse?

• Imagine you want a marketing consultant. Maybe some “verticals” are more respected than others? So that a firm from vertical 1 could cross over into vertical 2 but not vice versa.
• Is it easier for sprinters to cross over into distance running, or vice versa? I think distance runners have a more difficult time getting fast. If it’s easier for sprinters to cross over, then δ| sprinter, longdist | < δ| longdist, sprinter |.
• It’s easier to roll things downhill than uphill. So the energy distance δ | top, bottom |  <  δ | bottom, top |.
• It’s usually cheaper to ship one direction than the other. Protip: if you’re shipping PACA (donated clothes) from the USA to Central America, crate your donation on a Chiquita vessel returning to point of export.

Noncommutative distance, homies (a quasimetric). And I didn’t invoke quantum field theory or Alain Connes. Just business as usual.