Posts tagged with regression

Although partial least squares regression was not designed for classification and discrimination, it is … used for these purposes. For example, PLS has been used to:

• distinguish Alzheimer’s, senile dementia of the Alzheimer’s type, and vascular dementia
• discriminate between Arabica and Robusta coffee beans
• classify waste water pollution
• separate active and inactive compounds in a quantitative structure-activity relationship study
• differentiate two types of hard red wheat using near-infrared analysis
• distinguish transsexualism, borderline personality disorder and controls using a standard instrument
• determine the year of vintage port wine
• classify soy sauce by geographic region
• determine emission sources in ambient aerosol studies
Matthew Barker and William Rayens

(Source: enpub.fulton.asu.edu)

## Regression on Complexes III: Modcloth

My father used to tell me that when people complimented him on his tie, it was never because of the tie—it was because of the suit. If he wore his expensive suit, people would say “Nice tie!” But they were just misidentifying what it was that they thought was nice. Similarly, if you’re interviewing candidates and accidentally doing your part to perpetuate the beauty premium to salaries, you aren’t going to think “She was really beautiful, therefore she must be more competent”. You might just note that she was a more effective communicator, got her point across better, seemed like more of a team player, something like that.


Achen (2002) proposes that regression in the social sciences should stick to at most three independent variables. Schrodt (2009) uses the phrase “nibbled to death by dummies”.

I understand the gripes. These two men are talking about political analysis, where the “macro” variables are shaky to begin with. What does it mean that the Heritage Foundation rated two countries `7` versus `9` points apart on corruption or freedom? Acts of corruption are individual and localised to a geography. Even “ethnofract”, which seems like a valid integral, still maps `∼10⁷` individual variation down to `10⁰`. But this is statistics with fraught macro measures trying to answer questions that are hard to quantify in the first place—like the Kantian peace or center–periphery theories of global political structure.

What about regressions on complexes in more modest settings with more definitive data measurements? Let’s say my client is a grocery store. I want to answer for them how changing the first thing you see in the store will affect the amount purchased of the other items. (In general trying to answer how store layout affects purchases of all items … this being a “first bite”.) Imagine for my benefit also that I’m assisted or directed by someone with domain knowledge: someone who understands the mechanisms that make X cause Y—whether it’s walking, smelling, typical thought patterns or reaction paths, typical goals when entering the store, whatever it is.

I swear by my very strong personal intuition that complexes are everywhere. By complexes I mean highly interdependent cause & effect entanglements. Intrafamily violence, development of sexual preference, popularity of a given song, career choice: these are explained not by one variable but by a network of causes. You can’t just possess an engineering degree to make a lot of money in oil & gas. You also need to move to certain locations, give your best effort, network, not make obvious faux pas on your CV, not seduce your boss’ son, and on and on. In a broad macro picture we pick up that wealth goes up with higher degrees in the USA. Going from G.E.D. to Bachelor is associated with `tripling ± 1` wealth.

I think this statistical path is worth exploring for application in any retail store. Or e-store or vending machine (both of which have a 2-D arrangement). Here, as preparation, are some photos of 3-D stores:

And for the 2-D case (vending machine or e-store), here are some screen shots from Modcloth, marked up with potential “interaction arrows” I’ve speculated about.

Again, I don’t have a great understanding of how item placement or characteristics really work so I am just making up some possible connections with these arrows here. Think of them as question marks.

• purse, shoes, dress. Do you lead the (potential) customer up the path to a particular combination that looks so perfect? (As in a fashion ad—showing several pieces in combination, in context, rather than a “wide array” of the shirts she could be wearing in this scene.)
• colours. Is it better to put matching colours next to each other? Or does that push customers in one direction when we’d prefer them to spread out over the products?

• variety versus contrastability. Is it better to show “We have a marmalade orange and a Kelly green and a sky blue party dress—so much variety!” or to put three versions of the “little black dress” so the consumer can tightly specify her preferences on it?

And if you are going to put a purse or shoes along with it (now in 3-ary relations), the same question arises again. Is it better to put gold shoes and black shoes next to the “cocktail dress” to show its versatility? Or to keep it simple—just a standard shoe so you can think “Yes” or “No” and insert your own creativity independently, for example “In contrast to the black shoes they are showing me, I can visualise how my gold sparkly shoes would look in their place”? The more I think about the design challenge here, the more issues of independence, contrast, context, and interdependence come up.

• "random" or "space" or "comparison". You put the flowers next to the shelves to make the shelves look less industrial, more like part of a “beautiful home”. Strew around “interesting books” that display some kind of character and give the shopper the good feelings of intellect or sophistication or depth.

Or, what if you just leave a blank space in the e-store array? Does it waste more time by making the shopper scroll down more? Or does it create “breathing room” the way an expensive clothing store stocks few items?
• price comparisons. You stock the really really expensive pantsuit next to the expensive pantsuit not to sell the really-really-expensive one, but to justify the price or lend even more glamour to the expensive one.

• more obvious, direct complements, like putting carrots and pitas next to hummous, so the hummous looks better and you will enjoy it more. Nothing sneaky in that case.

Did you ever have the experience of buying something that read one way in the store, while you were caught up in the magic of the lifestyle they were presenting to you, but now that it’s hanging up with your own stuff it reads quite differently, and doesn’t actually say what you thought it said at the time?

For me, if I’m clothes shopping, I’m thinking back on what else I own, what outfits I could make with this, how this is going to look on me, how its message fits in with my own personal style. And at the same time, the store is fighting me to define the context.


In the Modcloth example I’m talking mostly about 2- or 3-way interactions between objects. In analogy to simplicial complexes these would be the 1-faces or 2-faces of a skeleton.
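The face-counting can be made concrete. A small sketch in Python (item names are just illustrative): with the items as vertices, 2-way interactions are the 1-faces (edges) of the complex and 3-way interactions are the 2-faces (triangles).

```python
# Enumerate the "interaction faces" among a handful of store items.
# 2-way interactions = 1-faces (edges); 3-way interactions = 2-faces (triangles).
from itertools import combinations

items = ["purse", "shoes", "dress", "belt"]
edges = list(combinations(items, 2))      # all 2-way interactions
triangles = list(combinations(items, 3))  # all 3-way interactions
print(len(edges), len(triangles))         # C(4,2)=6 edges, C(4,3)=4 triangles
```

Even with four items there are already ten potential low-order interactions to speculate about, which is part of why store layout is hard.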

But in general, in a branded store, the overall effect is closer to, let’s say, the N-cells or (N−1)-cells. Maybe it’s not as precise as the painting in http://isomorphismes.tumblr.com/post/16039994007/thoroughly-enmeshed-composition-perturbation or a perfectly crafted poem or TV advertisement, where one change would spoil the perfection.

But clothing stores are definitely holistic to a degree. By which I mean that the whole is more than the sum of the parts. It’s about how everything works together rather than any one thing. And a good brand develops its own je ne sais quoi which, more than the elements individually, evokes some ideal lifestyle.

More on this topic after I finish my reading on Markov bases.

## Regressions 101: “Significance”

###### SETUP (CAN BE SKIPPED)

We start with data (how was it collected?) and the hope that we can compare them. We also start with a question which is of the form:

• how much tax increase is associated with how much tax avoidance/tax evasion/country fleeing by the top 1%?
• how much traffic does our website lose (gain) if we slow down (speed up) the load time?
• how many of their soldiers do we kill for every soldier we lose?
• how much do gun deaths [suicide | gang violence | rampaging multihomicide] decrease with 10,000 guns taken out of the population?
• how much more fuel do you need to fly your commercial jet 1,000 metres higher in the sky?
• how much famine [to whom] results when the price of low-protein wheat rises by \$1?
• how much vegetarian eating results when the price of beef rises by \$5? (and again distributionally, does it change preferentially by people with a certain culture or personal history, such as they’ve learned vegetarian meals before or they grew up not affording meat?) How much does the price of beef rise when the price of feed-corn rises by \$1?
• how much extra effort at work will result in how much higher bonus?
• how many more hours of training will result in how much faster marathon time (or in how much better heart health)?
• how much does society lose when a scientist moves to the financial sector?
• how much does having a modern financial system raise GDP growth? (here ∵ the `X` ~ branchy and multidimensional, we won’t be able to interpolate in Tufte’s preferred sense)
• how many petatonnes of carbon per year does it take to raise the global temperature by how much?
• how much does \$1000 million spent funding basic science research yield us in 30 years?
• how much will this MBA raise my annual income?
• how much more money does a comparable White make than a comparable Black? (or a comparable Man than a comparable Woman?)
• how much does a reduction in child mortality decrease fecundity? (if it actually does)

• how much can I influence your behaviour by priming you prior to this psychological experiment?
• how much higher/lower do Boys score than Girls on some assessment? (the answer is usually “low `|β|`, with low `p`” — in other words “not very different, but due to the high volume of data, whatever we find comes with high statistical strength”)

bearing in mind that this response-magnitude may differ under varying circumstances. (Raising morning-beauty-prep time from 1 minute to 10 minutes will do more than raising 110 minutes to 120 minutes of prep. Also there may be interaction terms like you need both a petroleum engineering degree and to live in one of `{Naija, Indonesia, Alaska, Kazakhstan, Saudi Arabia, Oman, Qatar}` in order to see the income bump. Also many of these questions have a time-factor, like the MBA and the climate ones.)

As Trygve Haavelmo put it: using reason alone we can probably figure out which direction each of these responses will go. But knowing just that raising the tax rate will drive away some number of rich doesn’t push the debate very far—if all you lose is a handful of symbolic Eduardo Saverins who were already on the cusp of fleeing the country, then bringing up the Laffer curve is chaff. But if the number turns out to be large then it’s really worth discussing.

In less polite terms: until we quantify what we’re debating about, you can spit bollocks all day long. Once the debate is quantified then the discussion should become way more intelligent, less derailing to irrelevant theoretically-possible-issues-which-are-not-really-worth-wasting-time-on.

So we change one variable over which we have control and measure how the interesting thing responds. Once we measure both we come to the regression stage where we try to make a statement of the form “A 30% increase in effort will result in a 10% increase in wage” or “5 extra minutes getting ready in the morning will make me look 5% better”. (You should agree from those examples that the same number won’t necessarily hold throughout the whole range. Like if I spend three hours getting ready the returns will have diminished from the returns on the first five minutes.)

Avoiding causal language, we say that a 10% increase in `(your salary)` is associated with a 30% increase in `(your effort)`.


The two numbers that jump out of any regression table output (e.g., `lm` in `R`) are `p` and `β`.

• `β` is the estimated size of the linear effect
• `p` is how sure we can be of that estimate. (As in golf, a low `p` is better: more confident, more sure. A low `p` can also be stated as a high `t`.)
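For concreteness, here is a minimal sketch of where `β` and `p` come from in a simple one-variable regression. It is in Python rather than `R` (the post’s `lm` would report the same quantities), the data are simulated, and the two-sided p-value uses a normal approximation to the t distribution:

```python
import math
import random

def ols_slope_test(x, y):
    """Fit y ~ alpha + beta*x by least squares; return (beta, t, approx p)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    alpha = my - beta * mx
    # residual variance, n-2 degrees of freedom
    rss = sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    t = beta / se
    # two-sided p-value, normal approximation to the t distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return beta, t, p

random.seed(0)
x = [i / 10 for i in range(50)]
y = [0.3 * xi + random.gauss(0, 0.5) for xi in x]  # true slope is 0.3
beta, t, p = ols_slope_test(x, y)
print(beta, t, p)
```

With fifty fairly noisy points the estimated `β` lands near the true 0.3, the `t` is comfortably large, and the `p` is tiny: the textbook situation.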

Bearing in mind that regression tables spit out many, many other numbers (like the Durbin–Watson statistic, F statistic, Akaike Information Criterion, and more) specifically to flag problems with interpreting `β` and `p` naïvely, here are pictures of the textbook situations where `p` and `β` can be interpreted in the straightforward way:

First, the standard cases where the regression analysis works as it should and how to read it is fairly obvious:
(NB: These are continuous variables rather than on/off switches or ordered categories. So instead of “Followed the weight-loss regimen” or “Didn’t follow the weight-loss regimen” it’s someone quantified how much it was followed. Again, actual measurements (how they were coded) getting in the way of our gleeful playing with numbers.)

Second, the case I want to draw attention to: weak statistical significance (a high `p`) doesn’t necessarily mean nothing’s going on there.

The code I used to generate these fake-data and plots.

If the regression measures a high `β` but low confidence (high `p`), that is still worth taking a look at. If regression picks up wide dispersion in male-versus-female wages—let’s say double—but we’re not so confident (high `p`) that it’s exactly double because it’s sometimes 95%, sometimes 180%, sometimes 310%, we’ve still picked up a significant effect.

The exact value of `β` would not be statistically significant or confidently precise, due to a high `p`, but actually this would be a very significant finding. (Try the same with any of my other examples, or another quantitative-comparison scenario you think up. It’s either a serious opportunity or a serious problem that you’ve uncovered. It just needs a further look to see where the variation around “double” comes from.)
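A toy version of that wage example, with made-up numbers mirroring the 95%/180%/310% dispersion above. The fit is the same pure-Python OLS as before, with a normal-approximation p-value:

```python
import math

def slope_and_p(x, y):
    """OLS slope of y on x, with a normal-approximation two-sided p-value."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    alpha = my - beta * mx
    rss = sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    t = beta / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return beta, p

# Made-up illustration: three women's wages near 1.0; three men's wages
# averaging 1.95 -- "double" on average, but hugely dispersed.
group = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # 0 = female, 1 = male
wage = [0.98, 1.00, 1.02, 0.95, 1.80, 3.10]
beta, p = slope_and_p(group, wage)
print(beta, p)  # beta = 0.95 (nearly double), yet p > 0.05
```

The point estimate says the gap is almost double, but the dispersion is so wide that `p` fails the usual .05 password. By the geiger-counter logic, you should still be very interested.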

You can read elsewhere about how awful it is that `p`<.05 is the password for publishable science, for many reasons that require some statistical vocabulary. But I think the most intuitive problem is the one I just stated. If your geiger counter flips out to ten times the deadly level of radiation, it doesn’t matter if it sometimes reads 8, sometimes 0, and sometimes 15—the point is, you need to be worried and get the h*** out of there. (Unless the machine is whacked—but you’d still be spooked, wouldn’t you?)

###### FOLLOW-UP (CAN BE SKIPPED)

The scale of `β` is the all-important thing that we are after. Small differences in `β`s of variables that are important to your life can make a huge difference.

• Think about getting a 3% raise (1.03) versus a 1% wage cut (.99).
• Think about twelve in every 1,000 births killing the mother versus four in every 1,000.
• Think about being 5 minutes late for the meeting versus 5 minutes early.

Order-of-magnitude differences (like 20 versus 2) are the difference between fly and dog; between life in the USA and near-famine; between oil tanker and gas pump; between Tibet’s altitude and Illinois’; between driving and walking; even the Black Death was only a tenth of an order of magnitude of reduction in human population.

Keeping in mind that calculus tells us nonlinear functions can be approximated in a local region by linear functions (unless the nonlinear function jumps), `β` is an acceptable measure of how the interesting thing responds “around the current level of webspeed” or “around the current level of taxation”.

Linear response magnitudes can also be used to estimate global responses in a nonlinear function, but you will be quantifying something other than the local linear approximation.
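Both points (the local slope versus the global average) can be checked numerically. A sketch in Python, using f(x) = x² as a stand-in smooth, non-jumping nonlinear response:

```python
def ols_slope(x, y):
    """Slope of the least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

f = lambda v: v * v   # stand-in nonlinear response

# Narrow window around v = 1: the regression slope recovers f'(1) = 2.
xs_local = [1 + d / 100 for d in range(-10, 11)]
s_local = ols_slope(xs_local, [f(v) for v in xs_local])

# Wide window 0..10: the slope is a global average response (f'(5) = 10),
# which is something other than the local derivative at v = 1.
xs_wide = [i / 10 for i in range(101)]
s_wide = ols_slope(xs_wide, [f(v) for v in xs_wide])
print(s_local, s_wide)  # ~2.0 locally, ~10.0 globally
```

Same function, same procedure, very different `β` depending on the range of data: the “around the current levels” caveat in numbers.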

## Tibshirani’s original paper on the lasso

• Breiman’s Garotte — 1993
• Tibshirani lasso paper submitted — 1994
• Tibshirani lasso paper revised — 1995
• Tibshirani lasso paper accepted — 1996

This is one of those papers that I’m so excited about, I feel like “You should just read the whole thing! It’s all good!” But I realise that’s less than reasonable.

Here is a bit of summary, feel free to request other information and I’ll do my best to adapt it.

The basic question is: I have some data and I want the computer to generate (regress) a linear model of it for me. What procedure should I tell the computer to do to get a good | better | best model?

The first technique, published by Legendre and Gauss in the early 1800’s (so, ok, no computers then — but nowadays we just run `lm` in `R`), was to minimise the sum of squared errors (Euclidean distance)
$\sqrt{\blacksquare^2 + \blacksquare^2 + \blacksquare^2 + \blacksquare^2 + \blacksquare^2 + \blacksquare^2 + \ldots }$
of a given affine model of the data. (Affine being `linear + one more parameter` for a variable origin, to account for the average value of the data ex observable parameters. For example, to model incomes in the USA when the only observed parameters are age, race, and zip code, you would want to include the average baseline US income level, and that would be accomplished mathematically by shifting the origin: the alpha, or autonomous, or “vector of ones” regression-model parameter, a.k.a. the affine addition to an otherwise linear model.)
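The “vector of ones” trick can be made concrete with hypothetical numbers. Appending a constant-1 column to the design matrix turns a linear model into an affine one, and the coefficient on that column is the intercept (the baseline level). A sketch via the normal equations for one regressor:

```python
def affine_fit(x, y):
    """Least-squares fit of y ~ alpha*1 + beta*x, design matrix [1 | x]."""
    n = len(x)
    # entries of X'X and X'y for the two-column design [1 | x]
    s1, sx, sxx = n, sum(x), sum(xi * xi for xi in x)
    sy, sxy = sum(y), sum(xi * yi for xi, yi in zip(x, y))
    det = s1 * sxx - sx * sx
    alpha = (sxx * sy - sx * sxy) / det   # coefficient on the ones column
    beta = (s1 * sxy - sx * sy) / det     # coefficient on x
    return alpha, beta

x = [0.0, 1.0, 2.0, 3.0]
y = [5.0, 7.0, 9.0, 11.0]   # exactly y = 5 + 2x
alpha, beta = affine_fit(x, y)
print(alpha, beta)  # 5.0 2.0 -- the origin shift is the baseline level 5
```

Drop the ones column and the fitted line is forced through the origin, which misrepresents data whose average level is far from zero.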

It was noticed by several someones at various points in time that whilst the least-squares (OLS) method is provably (Gauss–Markov theorem) the best linear model given well-behaved data, real data does not always behave.

In the presence of correlation, missing data, wrong data, and other problems, the “optimal” OLS solution is overfit, meaning that the model it makes for you picks up on too many of the problems. Is there a way to pick up on more signal and less noise? More gold and less dross? More of the real stuff and fewer of the impurities?

I can think of 2 ways people have tried to scrape off the corrosion without also flaying too much of the underlying good material:

1. Assume simpler models are better. This is the approach taken by ridge regression (a.k.a. Tikhonov regularisation, a.k.a. penalisation), the lasso, and the garotte.
2. Compare ensembles of models, then choose one in the “middle”. Robust methods, for example, use statistical functions that in theory vary less from flawed situation to flawed situation than other statistical functions do. Subset selection and hierarchical methods generate a lot of models on the real data and choose among them.

That’s the backstory. Now on to what Tibshirani actually says. His original lasso paper contrasts 3 ways of penalising complicated models, plus regression on subsets.

The three formulae:

• penalisation & restriction to subsets
• garotte
• lasso

look superficially quite similar. Tibshirani discusses the pros, cons, whens, and wherefores of the different approaches.
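The similarity, and the difference, shows up most cleanly in the orthonormal-design case the paper discusses, where each procedure acts coordinatewise on the OLS estimate. The closed forms below are my reconstruction of that comparison (the `lam` values are generic penalty weights, not calibrated against each other):

```python
# Coordinatewise shrinkage of one OLS coefficient b, orthonormal design.
def subset(b, lam):
    """Best-subset selection: hard thresholding (keep or kill)."""
    return b if abs(b) > lam else 0.0

def ridge(b, lam):
    """Ridge: proportional shrinkage toward zero."""
    return b / (1 + lam)

def lasso(b, lam):
    """Lasso: soft thresholding -- shrink by a constant, then clip at zero."""
    return max(abs(b) - lam, 0.0) * (1 if b > 0 else -1)

def garotte(b, lam):
    """Non-negative garotte (assumes b != 0): data-dependent shrinkage factor."""
    return max(1 - lam / b ** 2, 0.0) * b

beta_ols, lam = 2.0, 0.5
print([f(beta_ols, lam) for f in (subset, ridge, lasso, garotte)])
# subset keeps it whole; ridge shrinks proportionally; lasso subtracts a
# constant; garotte sits between lasso and leaving it alone.
```

Small coefficients behave very differently: subset and lasso zero them out entirely (hence sparse models), ridge never does, and the garotte zeroes a coefficient once `lam` exceeds its square.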


(In reading this paper I learned a new symbol: the positive-part operator ƒ(x) = (x)⁺. It means
$(x)^+ \equiv \begin{cases} x & \text{if } x>0 \\ 0 & \text{otherwise} \end{cases}$
In `R` code, `ifelse(x < 0, 0, x)`, or equivalently `pmax(x, 0)`. Like absolute value, but not exactly.)


Back to the lasso. How does such a small change to the penalty function, change the estimated linear model we get as output?

## My sister isn’t “irrational”, her utility function just has large interaction terms.

What happens if, instead of doing a linear regression with sums of monomial terms, you do the complete opposite? Instead of regressing the phenomenon against $x_t + y_t + z_t + \epsilon_t$, you regressed the phenomenon against an explanation like $\sqrt[\sum \text{powers}]{{x_t}^{66} \; {y_t}^{13} \; {z_t}^{282} + {x_t}^3 \, {y_t}^9 \, {z_t}^7 + \ldots + {x_t}^{17} \, {z_t}^{1377}}$?

I first thought of this question several years ago whilst living with my sister. She’s a complex person. If I asked her how her day went, and wanted to predict her answer with an equation, I definitely couldn’t use linearly separable terms. That would mean that, if one aspect of her day went well and the other aspect went poorly, the two would even out. Not the case for her. One or two things could totally swing her day all-the-way-to-good or all-the-way-to-bad.
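The “days even out” claim is exactly the additive model, and my sister is closer to an AND model. A tiny sketch with two aspects of the day, each good (1) or bad (0), where the mood is good only if both are:

```python
# (aspect1, aspect2, mood): mood = aspect1 AND aspect2
days = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

def sse(predict):
    """Sum of squared errors of a mood model over the four day-types."""
    return sum((predict(a, b) - mood) ** 2 for a, b, mood in days)

# Best additive (linearly separable) fit: mood ~ -0.25 + 0.5*a + 0.5*b
# (normal equations solved by hand for these four points).
additive = lambda a, b: -0.25 + 0.5 * a + 0.5 * b

# A single interaction term reproduces the AND exactly.
interaction = lambda a, b: a * b

print(sse(additive), sse(interaction))  # 0.25 0.0
```

No choice of separable coefficients can drive the additive error to zero here; the interaction term does it with one monomial. That is the whole point: her utility function isn’t noisy, it’s nonseparable.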

The pattern of her moods and emotional affect has nothing to do with irrationality or moodiness. She’s just an intricate person with a complex utility function.

If you don’t know my sister, you can pick up the point from this well-known stereotype about the difference between men and women:

“Men are simple, women are complex.” Think about a stereotypical teenage girl describing what made her upset: “It’s not any one thing, it’s everything.”

I.e., nonseparable interaction terms.

I wonder if there’s a mapping that sensibly inverts strongly-interdependent polynomials with monomials — interchanging interdependent equations with separable ones. If so, that could invert our notions of a parsimonious model.
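One partial instance of such a mapping already exists: the logarithm. A single multiplicative “interaction” monomial like $x^{a}\,y^{b}$ becomes a separable linear model $a \log x + b \log y$ in log-space (this only works for one monomial, not for a sum of them, which is why the general question above stays open). A sketch:

```python
import math

# A strongly "interdependent" monomial term...
a, b = 66, 13
x, y = 1.7, 2.9
monomial = x ** a * y ** b

# ...is exactly separable (linear, no interaction) in log coordinates.
linearised = a * math.log(x) + b * math.log(y)
print(math.log(monomial), linearised)  # equal up to float rounding
```

So “parsimonious” really is parameterisation-dependent: the same relationship is a wild interaction in one coordinate system and a plain sum in another.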

Who says that a model that’s short to write in one particular space or parameterisation is the best one? or the simplest? Some things are better understood when you consider everything at once.

You’ve run the regression.  You see the t's, the β's, and the p's.  But what do they mean?  Don’t panic.  This book will tell you.

[T]he estimators in common use almost always have a simple interpretation that is not heavily model dependent….  A leading example is linear regression, which provides useful information about the conditional mean function regardless of the shape of this function.  Likewise, instrumental variables estimate an average causal effect for a well-defined population even if the instrument does not affect everyone.

Hooray!

## Anscombe’s quartet

The four data sets are different, yet they have the same “line of best fit” as computed by ordinary least squares regression.
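This is easy to check yourself. A sketch using the standard published quartet values, fitting each set by OLS:

```python
def fit_line(x, y):
    """Return (intercept, slope) of the least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - slope * mx, slope

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]   # shared by sets I-III
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]
fits = [fit_line(x, y) for x, y in quartet]
for intercept, slope in fits:
    print(round(intercept, 2), round(slope, 3))  # each set: ~3.00 + 0.500x
```

Four wildly different scatterplots, one identical regression line, which is the standing argument for always plotting your data before trusting the table of coefficients.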


## extrapolation and interpolation

The most important lesson I learned from this book:  regression is reliable for interpolation, but not for extrapolation.  Even further, your observations really need to cover the whole gamut of causal variables, intersections included, to justify faith in your regressions.

Imagine you have two causal variables, A and B, that are causing X. Maybe your data cover a wide range of observations of A — some high, some low, some in-between. And you have, too, the whole gamut of observations of B — high, low, and medium. It might still be the case that you haven’t observed A and B together (not seen $A \cap B$). Or that you’ve only observed them together (not seen $A' \cap B'$). In either case, your regression is effectively extrapolating to the other causal region and you should not trust it.

Let’s keep the math sexy. Say you meet an attractive member of your favourite sex. This person A) likes to hunt, and B) is otherwise vegetarian. Your prejudices $\hat{X}$ are that you don’t like hunters ($\hat{X} \sim -A$) and you do like vegetarians ($\hat{X} \sim B$). By comparing the magnitudes of these preferences, you deduce that you should not get along with this attractive person, because the bad A part outweighs the good B part.

$\hat{X}(A^+, B^+) < 0$

However, since you haven’t observed both A and B positive at once, your preconceptions are not to be trusted. Despite your instincts $\hat{X}$, you go out on a date with Mr or Ms (A>0, B>0) and have a fantastic time. Turns out there was a positive interaction term in the $A \cap B$ range, it also correlates positively with the noise (it wasn’t noise, just unknown knowledge), and you’ve found your soul mate.

$X(A^+, B^+) \gg 0$
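The hunting-vegetarian story fits in a toy model (illustrative numbers only; the `3 * A * B` term plays the role of the unobserved interaction):

```python
# True preference has a big positive A-and-B interaction you have never observed.
true_x = lambda A, B: A + B + 3 * A * B

# Your observations cover A alone, B alone, and neither -- never A and B together.
observed = [(0, 0), (0, 1), (1, 0)]

# The additive model mood ~ 0 + 1*A + 1*B fits those three points perfectly...
additive = lambda A, B: 0 + 1 * A + 1 * B
assert all(additive(a, b) == true_x(a, b) for a, b in observed)

# ...and still extrapolates badly into the unobserved region.
print(additive(1, 1), true_x(1, 1))  # model says 2, reality is 5
```

Perfect in-sample fit, badly wrong out of sample: nothing in the observed data could have warned the regression about the interaction, which is exactly why extrapolation past your causal coverage deserves no faith.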