
Posts tagged with decision theory

If people are rational and self-interested, why do they incriminate themselves after being Mirandised?

After minute 31 an experienced Virginia Beach interrogator-cum-3L explains how he convinces criminals to confess, against their interest, even after advising them that “Anything you say may be used in court”.

Especially at minutes 34, 36, 38, 39, 40, 45, and 47, he explains how he has outsmarted several criminal archetypes over his 28 years.

Also check the interrogator’s view (at min 45) on cultural prejudice and presumption of guilt in Virginia Beach criminal court.




Mirror symmetry is an example of a duality, which occurs when two seemingly different systems are isomorphic in a non-trivial way. The non-triviality of mirror symmetry involves quantum corrections. It’s like the Fourier transform, where “local” in one domain translates to “global”—something requiring information from over the whole space—in the other domain.


a Fourier spike

Under a local/global isomorphism, complicated quantities get mapped to simple ones in the dual domain. For this reason the discovery of duality symmetries has revolutionized our understanding of quantum theories and string theory.


summer school on mirror symmetry (liberally edited)

 

Thinking about local-global dualities gave me another idea about my model-sketch of knowledge, ignorance & expectation.

  • Under physical limitations, at a fixed energy level, Fourier duality forces a complementary tradeoff between the frequency and time domains: both cannot be made specific at once (sketched numerically below). Same with position & momentum, again at a fixed energy level.
  • Under human limitations, at a fixed commitment of effort|time|concentration, you can either dive deep into a few areas of knowledge|skill, or swim broadly over many areas of knowledge|skill.

If I could come up with a specific implementation of that duality, it would impose a boundary constraint on the model-sketch. That would be great, since optimal time|effort|concentration|energy could then be computed from other parts of decision theory.
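
Here is a rough numerical sketch of the first bullet (my own toy calculation, not part of the original post): generate Gaussian pulses of different widths in R, take their FFTs, and watch the spread in time trade off against the spread in frequency, their product pinned near 1/(4π).

    spread <- function(x, w) {                      # weighted standard deviation
      m <- sum(x * w) / sum(w)
      sqrt(sum((x - m)^2 * w) / sum(w))
    }

    n  <- 4096
    t  <- seq(-50, 50, length.out = n)
    dt <- t[2] - t[1]
    freq <- seq(0, n - 1) / (n * dt)                # FFT frequency bins
    freq[freq >= 1/(2*dt)] <- freq[freq >= 1/(2*dt)] - 1/dt   # re-centre the bins around 0

    for (sigma in c(0.5, 1, 2, 4)) {
      pulse <- exp(-t^2 / (2 * sigma^2))            # Gaussian pulse of time-width sigma
      power <- abs(fft(pulse))^2                    # its power spectrum
      cat(sprintf("time spread %.3f  x  frequency spread %.4f  =  %.4f\n",
                  spread(t, pulse^2), spread(freq, power),
                  spread(t, pulse^2) * spread(freq, power)))
    }

However sigma is chosen, the printed product hovers around 1/(4π) ≈ 0.08: the more specific the pulse is in time, the less specific it is in frequency.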




Lucas’ “rational expectations” revolution in macroeconomics has been tied to the ending of stagflation in the world’s largest economy, and to the reintroduction of “psychology” into finance and economics. However, the models of “expectation” I’ve seen in economics have never felt like my own personal experience of living in ignorance. I’d like to share the sketch of an idea that feels more lifelike to me.

http://www.olivierlanglois.net/images/voro2.jpg

First, let me disambiguate: the unfortunate term-overlap with “statistical expectation” (= mean = average = total over count = (1/N)·∑ᵢ xᵢ = a map from N dimensions to 1 dimension) indicates nothing psychological whatsoever. It doesn’t even correspond to “What you should expect”.

If I find out someone is a white non-Hispanic Estadounidense (somehow not getting any hints of which state, which race, which accent, which social class, which career track…so it’s an artificial scenario), I shouldn’t “expect” the family to be worth $630,000. I “expect” (if indeed my expectation is not a distribution but rather just one number) them to be worth $155,000. (scroll down to green)
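
A toy R illustration of that gap (made-up lognormal “wealth” figures, not the linked survey data): in a right-skewed distribution the mean is dragged far above the typical household, so the mean is a poor stand-in for “what to expect”.

    set.seed(42)
    wealth <- rlnorm(1e5, meanlog = 11.5, sdlog = 1.4)   # hypothetical skewed wealth draws
    mean(wealth)                  # the "statistical expectation", pulled up by a long rich tail
    median(wealth)                # much lower, and closer to whom you should expect to meet
    mean(wealth > mean(wealth))   # well under half of the draws sit above the mean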

Nor does it match what I should expect if I go to a casino offering a 99% chance of losing €10,000 and a 1% chance of winning €1,000,000 (remember the break-even prize is €990,000). “On average” this is a great bet. But that ignores convergence to the average, which would be slow. I’d need to play this game a lot to get the statistics working in my favour, and I mightn’t stay solvent that long (I’d need tens of millions of AUM, with lock-up conditions, to even consider this game). No, the “statistical expectation” refers to a long-run or wide-space convergence number. Not “what’s typical”.
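
A quick simulation of that point (my own sketch, with a hypothetical €100,000 bankroll and a €10,000 stake): the bet is worth +€100 per play on average, yet most gamblers are ruined long before the average asserts itself.

    set.seed(1)
    n_gamblers <- 10000
    n_plays    <- 200
    ruined <- replicate(n_gamblers, {
      bankroll <- 1e5                                   # hypothetical starting capital
      for (i in seq_len(n_plays)) {
        bankroll <- bankroll + if (runif(1) < 0.01) 1e6 else -1e4
        if (bankroll < 1e4) break                       # can no longer cover the stake
      }
      bankroll < 1e4
    })
    mean(ruined)    # roughly 0.9: about nine gamblers in ten go bust despite the +€100 edge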

Not only is the statistical expectation quite reductive, it doesn’t resemble what I’ve introspected about uncertainty, information, disinformation, beliefs, and expectations in my life.

File:Coloured Voronoi 3D slice.svg

A better idea, I think, comes from the definition of Riemann integration over 2+ dimensions. Imagine covering a surface with a coarse mesh. The mesh partitions the surface. A scalar is assigned to each of the interior regions inscribed by the mesh. The mesh is then refined (no lines taken away, only some more added, so some regions get smaller and more precise while no regions get larger or less precise), and new scalars are computed with more precise information about the scalar field on the surface.
a scalar field

NB: The usual Expectation operator 𝔼 is little more than an integral over “possibilities” (whatever that means!).

(In the definitions of Riemann integral I’ve seen the mesh is square, but Voronoi pictures look awesomer & more suggestive of topological generality. Plus I’m not going to be talking about infinitary convergence—no one ever becomes fully knowledgeable of everything—so why do I need the convenience of squares?)
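
A small sketch of that refinement picture in R, using a made-up scalar field and a square mesh purely for convenience: each refinement only adds mesh lines, so every estimate uses strictly more local information than the last, and the sums settle toward the integral.

    field <- function(x, y) sin(3 * x) * cos(2 * y) + x * y   # hypothetical scalar field

    riemann <- function(k) {                 # k-by-k mesh over the unit square
      centres <- (seq_len(k) - 0.5) / k
      grid    <- expand.grid(x = centres, y = centres)
      sum(field(grid$x, grid$y)) / k^2       # one scalar per region, times region area
    }

    sapply(c(2, 4, 8, 16, 64, 256), riemann) # the estimates stabilise as the mesh refines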

I want to make two changes to the Riemannian-integral mesh.


 

First I’d like to replace the scalars with some more general kind of fibre. Let’s say a bundle of words and associations.

(You can tell a lot about someone’s perspective from the words they use. I’ll have to link up “Obverse Words”, which has been in my drafts folder for over a year, once I finish it. But you can imagine examples of people using words with opposite connotation to denote the same thing, indicating their attitude toward the thing.)

http://i780.photobucket.com/albums/yy90/AlexMLeo/felixsbrain.jpg

Second, I’d like to use the topology or covering maps to encode the ignorance somehow. In my example below: at a certain point I knew “Rails goes with Ruby” and “Django goes with Python” and “Git goes with Github” but didn’t really understand the lay of the land. I didn’t know about git’s competitors, that you can host your own github, that Github has competitors, the more complex relationship between ruby and python (it’s not just two disjoint sets), and so on.

When I didn’t know about Economics or Business or Accounting or Finance, I classed them all together. But now they’re so clearly very, very different. I don’t even see Historical Economists or Bayesian Econometricians or Instrumental Econometricians or Dynamical Macroeconomists or Monetary Economists or Development Economists as being very alike. (Which must imply that my perspective has narrowed relative to everyone else’s! Tattoo artists and yogi masters and poppy farmers must all be quite different from one another too, yet next to my finely divided class of Economists I lump them together; look, even in my own words, at how much coarse generalisation I use to describe the non-econ’s versus the refinement among the econ’s.
These meshes can have a negative curvature (with, perhaps, a memory) if you like. When you catch yourself thinking that property actuaries are nothing at all like health actuaries, you know your frame of reference has become very refined at distinguishing actuaries. Which might mean a coarse partitioning of all the other people! Like Bobby Fischer’s use of the term “weakies” for any non-chess-player: they must all be the same! Or at least they’re the same to me.)


Besides the natural embedding of negatively-curved judgment grids, here are some more pluses to the “refinement regions” view of ignorance:

  1. You could derive a natural “conservation law”: some combination of e.g. ability, difficulty, how good your teachers are, and the time put into learning determines how many “refinements” you get to make. No one can know everything.

    (Yet somehow we all are supposed to function in a global economy together—how do we figure out how to fit ourselves together efficiently?

    And what if people use your lack of perspective to suggest you should pay them to teach you something which “evaluates to valuable” from your coarse refinement, but upon closer inspection, doesn’t integrate to valuable?)
  2. Maybe this can relate to the story of Tony—how we’re always in a state of ignorance even as we choose what to become less ignorant about. It would be nice to be able to model the fact that one can’t escape one’s biases or context or history.
  3. And we could get a fairly nice representation of “incompatible perspectives”. If the topology of your covering maps is “very hard” to match up to mine because you speak dialectics and power structures but I speak equilibria and optima, that sounds like an accurate depiction. Or when you talk to someone who’s just so noobish in something you’re so expert in, it can feel like a very blanket statement over so many refinements that you don’t want to generalise over (and from “looking up to” an expert it can also feel like they “see” much more detail of the interesting landscape.)
  4. Ignorance of one’s own ignorance is already baked into the pie! As is the beginner’s luck. If I “integrate over the regions” to get my expected value of a certain coarse region, my uninformed answer may have a lot of correctness to it. At the same time, the topological restrictions mean that my information and my perspective on it aren’t “over there” in some L2-distance sense, rather they’re far away in a more appropriately incompatible-with-others sense.

In conclusion, I’m sure everyone on Earth can agree that this is a Really Nifty and Cool Idea.

File:ApproximateVoronoiDiagram.png

 

I’ll try to give a colourful example using computers and internet stuff since that’s an area I’ve learned a lot more about over the past couple years.

A tiny portion of Doug Hofstadter’s “semantic network”.  via jewcrew728, structure of entropy

First, what does ignorance sound like?

  • (someone who has never seen or interacted with a computer—let’s say from a non-technological society or a non-computery elderly rich person. I’ve never personally seen this)
  • "Sure, programming, I know a little about that. A little HMTL, sure!”
  • "Well, of course any programming you’re going to be doing, whether it’s for mobile or desktop, is going to use HTML. The question is how.

OK, but I wasn’t that bad. In workplaces I’ve been the person to ask about computers. I even briefly worked in I.T. But the distance from “normal people” (no computer knowledge) to me seems very small now compared to the distance between me and people who really know what’s up.

A few years ago, when I started seriously thinking about trying to make some kind of internet company (sorry, I refuse to use the word “startup” because it’s perverted), I considered myself a “power user” of computers. I used keyboard shortcuts, I downloaded and played with lots of programs, I had taken a C++ course in the 90’s, I knew about C:\progra~1 and how to get to the hidden files in the App packages on a Mac.

My knowledge of internet business was a scatty array of:

  • Mark Zuckerberg
  • "venture capital"
  • programmer kid internet millionaires
  • Kayak.com — very nice interface!
  • perl.
    Regular Expressions
    11th Grade
  • mIRC
  • TechCrunch
  • There seemed to be way more programming going on to impress other programmers than to make the stuff I wanted!
  • I had used Windows, Mac, and Linux (!! Linux! Dang I must be good)
  • I knew that “Java and Javascript are alike the way car and carpet are alike”—but didn’t know a bit of either language.
  • I used Alpine to check my gmail. That’s a lot of confusing settings to configure! And plus I’m checking email in text mode, which is not only faster but also way more cooly nerdy sexy screeny.
  • Object-Oriented, that’s some kind of important thing. Some languages are Object-Oriented and some aren’t.
  • "Python is for science; Ruby is for web"
  • sudo apt-get install
    Sandwich
  • I had run at least a few programs from the command line.
  • I had done a PHP tutorial at W3Schools … that counts as “knowing a little PHP”, right?

So I knew I didn’t know everything, but it was very hard to quantify how much I did know, how far I had to go.


A mediocre picture of some things I knew about at various levels. It’s supposed to get across a more refined knowledge of, for example, econometrics, than of programming. Programming is lumped in with Linux and rich programmer kids and “that kind of stuff” (a coarse mesh). But statistical things have a much richer set of vocabulary and, if I could draw the topology better, refined “personal categories” those words belong to.

Which is why it’s easier to “quantify” my lack of knowledge by simply listing words from the neighbourhood of my state of knowledge.

Unfortunately, knowing how long a project should take, its chances of success, and its potential pitfalls is crucial to making an organised plan to complete it. “If you have no port of destination, there is no favourable wind”. (Then again, no adverse wind either. But in an entropic environment, with more ways to screw up than to succeed, turning the Rubik’s cube randomly won’t help you at all. Your “ship” might run out of supplies, or the backers might murder you, etc.)

File:2Ddim-L2norm-10site.png

Here are some of the words I learned early on (and many more refinements since then):

  • Rails
  • Django
  • IronPython
  • Jython
  • JSLint
  • MVC
  • Agile
  • STL
  • pointers
  • data structures
  • frameworks
  • SDK’s
  • Apache
  • /etc/.httpd
  • Hadoop
  • regex
  • nginx
  • memcached
  • JVM
  • RVM
  • vi, emacs
  • sed, awk
  • gdb
  • screen
  • tcl/tk, cocoa, gtk, ncurses
  • GPG keys
  • ppa’s
  • lspci
  • decorators
  • virtual functions
  • ~/.bashrc, ~/.bash_profile, ~/.profile
  • echo $SHELL, echo $PATH
  • "scripting languages"
  • "automagically"
  • sprintf
  • xargs
  • strptime, strftime
  • dynamic allocation
  • parser, linker, lexer
  • /env, /usr, /dev, /sbin
  • GRUB, LILO
  • virtual consoles
  • Xorg
  • cron
  • ssh, X forwarding
  • UDP
  • CNAME, A record
  • LLVM
  • curl.haxx.se
  • the difference between jQuery and JSON (they’re not even the same kind of thing, despite the “J” actually referring to Javascript in both cases)
  • OAuth2
  • XSLT, XPath, XML

http://www.financialiceberg.com/uploads/iceberg340.jpg
http://www.emeraldinsight.com/content_images/fig/1100190504002.png


http://www.preventa.ca/images/im_risk_anatomy.jpg

This is only—as they say—“the tip of the iceberg”. I didn’t know a ton of server admin stuff. I didn’t understand that libraries and frameworks are super crucial to real-world programming. (Imagine if you “knew English” but had a vocabulary of 1,000 words. Except libraries and frameworks are even better than a large vocabulary because they actually do work for you. You don’t need to “learn all the vocabulary” to use it—just enough words to call the library’s much larger program that, say, writes to the screen, or scrapes from the web, or does machine learning, for you.)

The path should go something like: at first knowing programming languages ⊃ ruby. Then knowing programming languages ⊃ ruby ⊃ rubinius, groovy, JRuby. At some point uncovering topological connections (neighbourhood relationships) to other things (a comparison to node.js; a comparison to perl; a lack of comparability to machine learning; etc.)
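
If you wanted to jot that path down as a data structure, nested lists plus a separate table of neighbourhood links would do it. A toy sketch in R (the names are illustrative, not a claim about how these tools actually relate):

    # early state: one coarse region with a single leaf
    knowledge_early <- list("programming languages" = list("ruby"))

    # later state: the old leaf has itself been partitioned
    knowledge_later <- list(
      "programming languages" = list(
        ruby = list("rubinius", "JRuby")
      )
    )

    # neighbourhood links that cut across the nesting, discovered later still
    neighbours <- list(ruby = c("node.js", "perl"))   # comparable; machine learning is not in this list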

I could make some analogies to maths as well. I think there are some identifiable points across some broad range of individuals’ progress in mathematics, such as:

  • when you learn about distributions and realise this is so much better than single numbers!

    a rug plot or carpet plot is like a barcode on the bottom of your plot to show the marginal (one-dimension only) distribution of data

    who is faster, men or women?
  • when you learn about Gaussians and see them everywhere
    Central Limit Theorem: a nice illustration of the Central Limit Theorem by convolution, in R (x is just a symmetric sample grid, so Heaviside(x) is a box of ones):

      x <- seq(-1, 1, length.out = 200)
      Heaviside <- function(x) { ifelse(x > 0, 1, 0) }
      HH       <- convolve(Heaviside(x), rev(Heaviside(x)), type = "open")   # box * box = triangle
      HHHH     <- convolve(HH,   rev(HH),   type = "open")                   # already bell-shaped
      HHHHHHHH <- convolve(HHHH, rev(HHHH), type = "open")                   # very nearly Gaussian
      # etc.

    What I really like about this dimostrazione is that it’s not a proof, rather an experiment carried out on a computer. This empiricism is especially cool since the Bell Curve, 80/20 Rule, etc., have become such a religion.

    NERD NOTE: Which weapon is better, a 1d10 longsword or a 2d4 oaken staff? Sometimes the damage is written as 1-10 for the longsword and 2-8 for the quarterstaff. However, these ranges disregard the greater likelihood of the quarterstaff scoring 4, 5, or 6 damage than 1, 2, 7, or 8. The longsword’s damage 1d10 ~ Uniform[1,10], while 2d4 looks like a Λ. (To see this another way, think of the combinatorics.)
  • when you learn that Gaussians are not actually everywhere
    kernel density plot of Oxford boys' heights.

    histogram of Oxford boys' heights, drawn with ggplot.
    A (bimodal) probability distribution with distinct mean, median, and mode.
  • in talking about probability and randomness, you get stuck on discussions of “what is true randomness?” “Does randomness come from quantum mechanics?” and such whilst ignorant of stochastic processes and probability distributions in general.
  • (not saying the more refined understanding is the better place to be!)
  • A brilliant fellow (who now works for Google) was describing his past ignorance to us one time. He remembered the moment he realised “Space could be discrete! Wait, what if spacetime is discrete?!?!?! I am a genius and the first person who has ever thought of this!!!!” Humility often comes with the refinement.
  • when you start understanding symbols like ∫ , ‖•‖, {x | p} — there might be a point at which chalkboards full of multiple integrals look like the pinnacle of mathematical smartness—
    http://www.niemanlab.org/images/math-formula-chalkboard.jpg
  • but then, notice how real mathematicians’ chalkboards in their offices never contain a restatement of Physics 103!
    Kirby topology 2012
    http://whatsonmyblackboard.files.wordpress.com/2011/06/21june2011.jpg
    A parsimonious statement like “a local ring is regular iff its  global dimension is finite” is so, so much higher on the maths ladder than a tortuous sequence of u-substitutions.
  • and so on … I’m sure I’ve tipped my hand well enough all over isomorphismes.tumblr.com that those who have a more refined knowledge can place me on the path. (eg it’s clear that I don’t understand sheaves or topoi but I expect they hold some awesome perspectives.) And it’s no judgment because everyone has to go through some “lower” levels to get to “higher” levels. It’s not a race and no one’s born with the infinite knowledge.
 

I think you’ll agree with me here: the more one learns, the more one finds out how little one knows. One can’t leave one’s context or have knowledge one doesn’t have. And all choices are embedded in this framework.




We Are Not Objects

  • @isomorphisms: I don't think "inheritance" from the object-oriented programming paradigm works to describe people in real life, for at least two reasons:
  • @isomorphisms: [1] @ISA versus "does". "Am I" a mathmo? This is like identifying someone with their career title, versus "I do maths" or "I'll be doing maths later today". "Am I" a writer? Or am I writing right now? Or do I write for 7% of my waking hours?
  • @isomorphisms: Something I notice as well talking to bourgeois youths. "Is a" entrepreneur. "Is a" gardener. "Is a" cook. Related to their division of life into career and "on the side".
  • @isomorphisms: Also twitter profiles. Some people list a lot of nouns or titles to describe themselves. I wrote a poem once; I started a business once. Does that make me @ISA poet or @ISA entrepreneur?
  • See also: [isomorphismes.tumblr.com/post/15409646048] -- what E.O. Wilson said about how we're all expected to play to defined roles & expectations -- Behave As Mother; Behave As Wife; Behave As Judge; Behave As Daughter [https://www.psychotherapy.net/article/parents].
  • @isomorphisms: [2] Maybe the more fundamental problem is that I'd want to pass *response functions* rather than properties. The idea that people respond to their circumstances rather than being determined by properties. "Am I" lazy with no ambition? Or don't see opportunities and thus don't work toward "growth"? "Am I" passionate about Ruby? Or did I come across the Ruby language and gradually get more and more into it, as a response to environment?
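
A tiny R sketch of the contrast (my own illustration, not from the thread): a person as a bag of static properties versus a person as a bag of response functions that only answer relative to a circumstance.

    # "is-a" style: the person reduced to fixed labels
    person_as_properties <- list(is_poet = TRUE, is_lazy = FALSE)

    # response-function style: behaviour as a function of circumstances
    person_as_responses <- list(
      writes        = function(free_hours) free_hours > 2,   # writes when there is slack, not "is a writer"
      works_on_ruby = function(exposures)  exposures > 10    # "passion" emerges from repeated exposure
    )

    person_as_responses$writes(free_hours = 3)   # TRUE today...
    person_as_responses$writes(free_hours = 1)   # ...FALSE tomorrow; no fixed @ISA label either way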




The seeds of my dissent from economic orthodoxy were pretty much sown for me by my 1st professor on the 1st day of my 1st economics class.

This prof had gone to great personal trouble to begin our exposure to the dismal science with a very down-to-earth and super-important lesson. She had spent her own cash on some things from the store, of varying cost, and at the beginning of class handed each of us a random item. Some people got candy, some got socks, one or two got things of greater value.

This was a masterful teaching stroke by someone who cared deeply about her subject and about teaching it to newbies: she would have us all participate in voluntary trade within the classroom and end up better off than we started. Gains from trade, the fundamental point of economics, are really “the only thing we know about welfare”. Sure, some people start off with more (more wealth, more smarts, better looks, genes that will make them grow taller so they can reach the mayonnaise jars from the back), but hey, at least we can make all of them better off, and hurt no one, by allowing them to trade freely.

Right?

We each reported, on a scale of 1 to 10, how satisfied we were with the Stuff we had been randomly given at the beginning of the class, and the prof wrote these scores down on the board. Then we were asked to stand up, walk about the room, and see if anyone would voluntarily exchange Stuff with us. Multiple transactions were allowed, even encouraged—and after a few minutes of cluelessly blitzing with each other, the trading day was closed and we resumed our seats.

The prof asked our scores again, fully expecting that ∀i in the class, utilityᵢ(after) ≥ utilityᵢ(before).

But one girl reported a lower score.

Instead of taking this as evidence against her belief that transactions are always mutually beneficial—a cornerstone of normative economic theory—the prof instead scolded the girl. "Well, what’d ya do that for?!”

By the way, this was not a prof who prepended test questions with the phrase “According to the theory we learned in class,” which means I still dispute that I got that one about the lobstermongers right! (Since it asked about “What would happen” not “what the theory says would happen”.)

At the time I thought the outburst a bit rude, and over the years to come I remembered the episode. (Well, obviously.) I still think of it as a microcosm of certain intellectual misdeeds by economists. The framework is too important to let go of; if anyone undermines it, you get angry and yell at them! It’s a plausibility war, after all.

Not too far off from real comments by economists: But if you took away the mutually-beneficial assumption, then you’d have no theory at all! (Regardless of whether nullset is the only true theory we have.)

The assumptions about what goes on in transactions are so appealing that even when you see them violated in front of your eyes, the violation still seems implausible. And hey, what about all this stuff I learned about indifference curves? If I saw so many graphs with them not overlapping or going backwards, then that has to be the truth, because maths!

Nevermind that people don’t always know what they want, or maybe it’s contradictory or impossible, and even in well-defined classroom experiments they may just, um, do it wrong.

Happy Independence Day. Here’s to hoping you don’t use the independence to shoot yourself in the foot.





lembarrasduchoix asked:

thank you for the introduction to Newcomb’s paradox! Could you do a post on your favorite paradoxes? 
 

The decision theory paradoxes I’m familiar with are:

  • Ellsberg Paradox— Theorists encode both
    1. situations with unknown probabilities, such as the chance of extraterrestrial intelligence in the Drake Equation or the chance of someone randomly coming up and killing you, and
    2. situations that are known to have a “completely random” outcome, like fair dice or the runif function in R,
    the same way. However the two differ materially and so do behavioural responses to the types of situations. 
  • Allais Paradox — The difference between 100% chance and 99% chance in people’s minds is not the same as the difference between 56% chance and 55% chance in people’s minds. (In other words, the difference is nonlinear.) At least when those numbers are written on paper.
    Prospect theory proposes a [0,1]→[0,1] weighting function describing how "we" perceive probabilities (an R sketch follows after this list).
    I tried to edit the figure to make it more readable; really I should just redo it in R myself. (Remember that it shouldn’t be taken for granted that everybody thinks the same, or that it’s possible to simply re-map a person’s probability judgment onto another probability. Perhaps the codomain needs to change to something other than [0,1], for example a poset or a von Neumann algebra.)
  • Newcomb’s Paradox — This one has a self-referential feel to it. At least as of today, the story is well told on Wikipedia. The Newcomb paradox seems to undercut the notion that “more is always preferred to less” — a central tenet of microeconomics. However, I believe it’s really undercutting the way we reason about counterfactuals. I actually don’t like this one as much as the Ellsberg and Allais paradoxes, which teach an unambiguous lesson.
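
Here is a sketch of that re-weighting in R, using the one-parameter Tversky-Kahneman form w(p) = p^γ / (p^γ + (1-p)^γ)^(1/γ); the plot the Allais item above alludes to may be parameterised differently.

    w <- function(p, gamma = 0.61) p^gamma / (p^gamma + (1 - p)^gamma)^(1 / gamma)

    p <- seq(0, 1, by = 0.01)
    plot(p, w(p), type = "l",
         xlab = "stated probability p", ylab = "perceived weight w(p)")
    abline(0, 1, lty = 2)   # the diagonal a perfectly "linear" perceiver would follow

Small probabilities get over-weighted and near-certainties under-weighted, which is one way to read the 100%-versus-99% asymmetry.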
 

Despite the name, they’re not really paradoxes. They are just evidence that probability + utility theory ≠ what’s going on inside our 10^10 neurons. I don’t think Herb Simon would be surprised at that. (Simon is famous for arguing to economists that “economic agents”, both people and firms, have a finite computational capacity, so we shouldn’t put too much faith in the optimisation paradigm.)

You can find out a lot more about each of these paradoxes by googling. As is my way, I’ve tried to provide the shortest-possible intro on the subject. Twenty-two slides opening the door for you.

I also think it’s interesting how the calculus disproves Zeno’s paradox and how a proper measure-theory-conscious theory of martingales disproves the St. Petersburg paradox. I also think Vitali sets and the Banach-Tarski paradox are compelling arguments against the real numbers. Particularly since everything practical is accomplished with (finite) floats, I’m not sure why people hold on to ℝ in the face of those results.

But personally, I’m more interested in decision theory / choice theory than those pure-maths clarifications.

 

I know I am forgetting several interesting paradoxes which have revolutionised the way people think. (Zeno thought his reasoning was so revolutionary that he concluded, via modus tollens, that the world didn’t actually exist. One of many religions that has come to such a belief, not to mention Neo and Morpheus thought so.)
And the guy sliced up a speeding car’s tyres with a samurai sword. You really can’t argue with someone who does that.
If I’ve neglected one of your favourite paradoxes, please leave a comment below telling us about it.






Many years ago, I was being driven to the airport and observed something stupid about myself. Then I used science (kind of). I remember this so clearly because it has symbolised other challenges since then.


I had a bag of snacks — Doritos or Chex Mix or something — sitting on my lap. I was eating them and talking with the driver. We were discussing business or something. I noticed I was eating the snacks rather quickly. Even after becoming aware of the speed, I found it hard to hold off on eating one for more than 10 seconds. (I probably ate a handful every ≤5 seconds.) The delicious taste of Chex Mix was in my mouth, making me want more.

As I thought about it, I was able to focus on the taste at least, and appreciate it, but I still found it hard to slow down.


I decided to do a little experiment on myself. I put the bag of Doritos at my feet instead of in between my legs. The next time I reached for the snacks I had a few more deciseconds to stay my hand—and it worked. The amount of time (or was it the effort?) it took to lean my torso forward gave me enough time (or was it inclination?) to think: “Do I really want another one yet?” and answer “No” more of the time. I started snacking more like every 30-60 seconds.

I decided to take the experiment one step further. (This is part of experimental science, right? You notice the beginnings of a trend and then you test more input values to see if the trend extrapolates.) I put the crisps (or squares, or whatever) behind my car seat. So, I needed to twist my torso, crane my neck, and put my arm into a fairly awkward position — costing more than a second and even more effort than leaning forward. That was enough to reduce my snacking to one every 2-5 minutes.

 


Certainly this is far from gold-standard science. But I was satisfied with the findings (and until now I hadn’t published them, so there was no one else to satisfy).

Years later Richard Thaler coined the wonderful phrase “libertarian paternalism" — and I thought, it doesn’t just have to be about governance. I can nudge myself as well. (Nudge is co-authored with Cass Sunstein, another hero.)


Here are some other tricks I’ve used to nudge myself into doing what I really want:

  • shutting my laptop when I leave it
  • putting my laptop in a drawer and closing it
    (both these give me more time to think: Is getting out the computer really what I want to do right now? What am I going to do on the computer? When am I going to be done?)
  • Standing at my desk improves my mood and energy and also makes me spend less time at the computer. (a key challenge is getting a monitor at eye level and a keyboard just below elbow level.)
  • Close my eyes if webpages take a long time to load. (why burn them out / hypnotise myself any more?)
  • If sitting at a computer with a monitor, I aperiodically stand up, walk away, and face away from the computer. (I face a wall, sitting or standing, or look outside, and think about what I actually need to accomplish on the computer.)
  • move email conversations quickly to phone call (in business)
  • send “to-read” Amazon previews to Kindle
  • I use the “Save for Later” extension for Chrome. (Even if I don’t actually read it later, I can believe that illusion for long enough to kick the tab out of my immediate view.)
  • If I open a new tab/window for goofing off when I really shouldn’t, I say the word “No” out loud so I can hear myself. That sometimes helps me close the tab and get back to work, only 2 seconds wasted.
  • Whenever I spend a lot of money on myself (electronics or a trip), I donate to charity. (I guess that’s more about habit formation as self-discipline rather than nudging myself into compliance.)
  • putting snacks / dessert higher up or behind cupboards
  • leaving a nice-looking knife & cutting board out in plain sight
  • leave vegetables and beans out in plain sight
  • Spend time organising my workspace so that more important things to do (or symbols of things I want to do) are in plain sight.
    For example, I might stack “to read” papers out of the way (I’ll find them when I’m bored). But if I decide I need to work out more I might clear my workspace and put my gym card or shorts in plain view.
  • write to-do lists on paper instead of on the computer

I haven’t developed any really good tricks for avoiding procrastinating on the Internet.

but Randall has ... click thru and read the alt text

Partly it’s because of blurred boundaries about what’s worth reading and what’s not. Partly it’s because with three keystrokes I can pop open a Twitter window or tumblr or reddit or facebook or … on-and-on … and make my “strategic” decision from there.

Advice? Similar experiences?




I gave this talk several years ago, but you know what? It’s still pretty decent.

Irrationality in Economics  

The title is misleading. Like many of my titles, it’s meant to grab attention rather than be exactly correct.

I was trying, with this talk, to convince college freshmen to switch from Philosophy to Economics. And you know, Philosophers are always talking about Rationality — is there even such a thing, and if so what does it consist of? Econ provides more than one concrete prescription for Rationality — more on that below.

 

“We are recorders and reporters of the facts—not judges of the behavior we describe.” —Alfred Kinsey 

I actually think that economists and psychologists could do more to prescribe healthy, effective behaviours and thought-strategies for people to follow. But the recommendations should be based on empirics, e.g.

  • "buy experiential goods, not durable goods";
  • "purchase with cash instead of plastic";
  • "beware these 4 common investing mistakes made by novices";
  • “put crisps and fudge in a drawer, not in plain sight”

—not on a general model of “optimal” behaviour.

Theorists, though, don’t have the necessary understanding to make normative evaluations. Not yet, at least. But they can approach the deep Utility Theory questions in the spirit of the above quotation. They can model behaviours and thoughts, and inquire as to how they are internally structured — without the prejudice of inherited mathematical aesthetics.

What do I mean by ‘inherited aesthetics’ ? One example is substituting the mathematics of probability for a separate theory of human figuring.

 

I SHOULD HAVE SAID IT LIKE THIS IN THE SLIDES

One parsimonious shortcut economists tried, which didn’t work out, was to use probability mathematics to explain how people think about the future. If we can conceive of people’s beliefs as mathematical probabilities, then regular microeconomics + more maths = a new, better theory of behaviour.

For example, curved preferences over wealth would manifest themselves in probabilistic situations such as lotteries, insurance, betting, investing, employment in risky jobs, and love & sex risks.
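
For concreteness, here is the kind of calculation that programme licenses, sketched in R with made-up numbers and a log utility (one common “curved preference over wealth”):

    wealth   <- 1e5                       # hypothetical current wealth
    outcomes <- c(-10000, 15000)          # a made-up lottery: lose 10k or win 15k
    probs    <- c(0.5, 0.5)

    u  <- function(x) log(x)              # concave ("curved") utility of wealth
    EV <- sum(probs * outcomes)                     # expected money: +2500
    EU <- sum(probs * u(wealth + outcomes))         # expected utility of taking the bet
    CE <- exp(EU) - wealth                          # certainty equivalent: about +1730, below EV

    CE < EV   # TRUE: curvature alone generates risk aversion, insurance demand, risk premia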

But. People don’t think that way. They don’t make accurate calculations about Poisson distributions, Beta distributions, Bayesian priors, Aumann agreement theorems, and so on. I guess evolution either built us for something different or else we’re just misshapen clay with limited resources to Bayes our way to rationality.

I speculate that the way people think about probability — dubbed “subjective probability” by Leonard Savage — is shaped very differently from what mathematicians usually consider “natural” axioms — transitivity, commutativity, reflexivity, independence of irrelevant alternatives, monotonicity, and so on. But who knows? The correct theory doesn’t exist yet.

 

NOT ACTUALLY IRRATIONAL

The word “irrationality” I definitely ab-used.

Economists come up with a theory of how people behave and say it’s “ideal” or “rational”. People don’t actually think like that, so then we say they’re “irrational”? That doesn’t make sense. The theory was just wrong; an incorrect description. They perform sub-optimally according to some guy’s theory of the world, of their value system, and of how they should think. But since we don’t really know how people really think, how they experience the results of their choices, or how we should evaluate discrepant self-reports of how good a decision was, we can’t say what’s rational.

So although it took the Ellsberg Paradox, the Allais Paradox, and other results to disprove the accepted theory that naïvely united Probability and Utility, those results are not the point. The point is that we have to conceive a more realistic model of people’s mental models before Economics can draw valid conclusions about what people “should” do.