Posts tagged with linear transformations

Here’s a physically intuitive reason that rotations ↺ (which seem circular) are in fact linear maps.

If a piece of luggage has two independent wheels that can only roll straight forward and straight back, you can still turn it. By doing both linear maps at once (which is what a matrix
$\large \begin{pmatrix} a \rightsquigarrow a & | & a \rightsquigarrow b & | & a \rightsquigarrow c \\ \hline b \rightsquigarrow a & | & b \rightsquigarrow b & | & b \rightsquigarrow c \\ \hline c \rightsquigarrow a & | & c \rightsquigarrow b & | & c \rightsquigarrow c \end{pmatrix}$

or Lie action does) and opposite each other, two straights ↓↑ make a twist ↺.

Or if you could get a car | luggage | segway with split (= independent = disconnected) axles to roll the right wheel(s) independently and opposite to the left wheel(s), then you would spin around in place.
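If you like to poke at these things in code, here’s a toy differential-drive sketch. (The function name, the wheel-separation parameter `L`, and the numbers are all mine, invented for illustration; this is a cartoon, not robotics.)

```python
# A toy model of two wheels a distance L apart, each of which can
# only roll straight forward (+) or straight back (−).
def body_motion(v_left, v_right, L=1.0):
    """Return (forward_speed, turn_rate) of the body between the wheels."""
    forward = (v_left + v_right) / 2   # average of the two straight motions
    turn = (v_right - v_left) / L      # difference of the two straight motions
    return forward, turn

# Equal speeds: roll straight. Equal-and-opposite speeds: zero forward
# travel, pure spin. Two straights ↓↑ make a twist ↺.
print(body_motion(1.0, 1.0))     # (1.0, 0.0)
print(body_motion(-1.0, 1.0))    # (0.0, 2.0)
```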

Once you’re comfortable with 2-arrays and 2-matrices, you can move up a dimension or two, to 4-arrays or 4-tensors.

You can move up to a 3-array / 3-tensor just by imagining a matrix which “extends back into the blackboard”. Like a 5 × 5 matrix. With another 5 × 5 matrix behind it. And another 5 × 5 matrix behind that with 25 more entries. Etc.

The other way is to imagine “Tables of tables of tables of tables … of tables of tables of tables.” This imagination technique is infinitely extensible.

$\large \begin{bmatrix} \begin{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} & \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ \\ \begin{bmatrix} j & k \\ l & m \end{bmatrix} & \begin{bmatrix} n & o \\ p & q \end{bmatrix} \end{bmatrix} & \begin{bmatrix} \begin{bmatrix} r & s \\ t & u \end{bmatrix} & \begin{bmatrix} v & w \\ x & y \end{bmatrix} \\ \\ \begin{bmatrix} z & a' \\ b' & c' \end{bmatrix} & \begin{bmatrix} d' & e' \\ f' & g' \end{bmatrix} \end{bmatrix} \\ \\ \begin{bmatrix} \begin{bmatrix} h' & j' \\ k' & l' \end{bmatrix} & \begin{bmatrix} m' & n' \\ o' & p' \end{bmatrix} \\ \\ \begin{bmatrix} q' & r' \\ s' & t' \end{bmatrix} & \begin{bmatrix} u' & v' \\ w' & x' \end{bmatrix} \end{bmatrix} & \begin{bmatrix} \begin{bmatrix} y' & z' \\ a'' & b'' \end{bmatrix} & \begin{bmatrix} c'' & d'' \\ e'' & f'' \end{bmatrix} \\ \\ \begin{bmatrix} g'' & h'' \\ j'' & k'' \end{bmatrix} & \begin{bmatrix} l'' & m'' \\ n'' & o'' \end{bmatrix} \end{bmatrix} \end{bmatrix}$

If that looks complicated, it’s just because simple recursion can produce convoluted outputs. Reading the LaTeX (alt text) is definitely harder than writing it was. (I just cut & paste \begin{bmatrix} stuff \end{bmatrix} inside other \begin{bmatrix} … \end{bmatrix}.)
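In code, the “tables of tables” trick is literally just nesting lists. A sketch (variable names mine):

```python
import numpy as np

# "Tables of tables of tables": each level of nesting adds one dimension.
table2 = [[1, 2], [3, 4]]      # a 2-array (a matrix)
table3 = [table2, table2]      # a 3-array: a matrix "behind" a matrix
table4 = [table3, table3]      # a 4-array: tables of tables of tables of tables

a = np.array(table4)
print(a.shape)                 # (2, 2, 2, 2): one index slot per level of nesting
```

The recursion is infinitely extensible here too: wrap the list in another list and the shape grows another slot.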

(The technical difference between an array and a tensor: an array is a block which holds data. A tensor is a block of numbers which (linearly) transforms matrices / vectors / other tensors. Array = noun. Tensor = verb.)

As the last picture — the most important one — demonstrates, a 4-array can be filled with completely plain, ordinary, pedestrian information like age, weight, height.

Inside each of the yellow or blue boxes in the earlier pictures, is a datum. What calls for the high-dimensional array is the structure and inter-relationships of the infos. Age, height, sex, and weight each belongs_to a particular person, in an object-oriented sense. And one can marginalise, in a statistical sense, over any of those variables — consider all the ages of the people surveyed, for example.
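Here’s a toy version of that marginalising move, with invented survey numbers (people and variables are mine, for illustration only):

```python
import numpy as np

# A made-up 2-array of survey data: rows = people,
# columns = (age, height_cm, weight_kg).
survey = np.array([
    [34, 170, 65],
    [29, 183, 80],
    [51, 160, 58],
])

# Marginalise down to one variable: all the ages of the people surveyed.
ages = survey[:, 0]
print(ages)          # [34 29 51]
print(ages.mean())   # 38.0, the average age across everyone
```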

One last takeaway:

• Normal, pedestrian, run-of-the-mill, everyday descriptions of things = high-dimensional arrays of varying data types.

Normal people speak about and conceive of information which fits high-D arrays all the time. “Attached” (in the fibre sense) to any person you know is a huge database of facts. Not to mention data-intensive visual information like parameterisations of the surface of their face, which we naturally process in an instant (an Augenblick).

(Source: slideshare.net)

### Linear Transformations will take you on a Trip Comparable to that of Magical Mushroom Sauce, And Perhaps cause More Lasting Damage

Long after I was supposed to “get it”, I finally came to understand matrices by looking at the above pictures. Staring and contemplating. I would come back to them week after week. This one is a stretch; this one is a shear; this one is a rotation. What’s the big F?

The thing is that mathematicians think about transforming an entire space at once. Any particular instance or experience must be of a point, but in order to conceive and prove statements about all varieties and possibilities, mathematicians think about “mappings of the entire possible space of objects”. (This is true in group theory as much as in linear algebra.)

So the change felt by individual ink-spots going from the original-F to the F-image would be the experience of an actual orbit in a dynamical system, of an actual feather blown by a bit of wind, an actual bullet passing through an actual heart, an actual droplet in the Mbezi River pulsing forward with the flow of time. But mathematicians consider the totality of possibilities all at once. That’s what “transforming the space” means.

$\large \begin{pmatrix} a \rightsquigarrow a & | & a \rightsquigarrow b & | & a \rightsquigarrow c \\ \hline b \rightsquigarrow a & | & b \rightsquigarrow b & | & b \rightsquigarrow c \\ \hline c \rightsquigarrow a & | & c \rightsquigarrow b & | & c \rightsquigarrow c \end{pmatrix}$

What do the slots in the matrix mean? Combing from left to right across the rows of numbers often means “from”. Going from top to bottom along the columns often means “to”. This is true in Markov transition matrices for example, and those combing motions correspond with basic matrix multiplication.

So there’s a hint of causation to this matrix business. Rows are the “causes” and columns are the “effects”. The second-row, fifth-column entry is the causal contribution of input B to the resulting output E, and so on. But that’s not 100% correct; it’s just a whiff of a hint of a suggestion of a truth.
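A toy Markov example of that rows-mean-“from”, columns-mean-“to” combing (the transition probabilities are invented):

```python
import numpy as np

# A 2-state Markov transition matrix: row = "from" state, column = "to" state.
P = np.array([
    [0.9, 0.1],   # from state A: stay with prob .9, hop to B with prob .1
    [0.5, 0.5],   # from state B: fifty-fifty
])

start = np.array([1.0, 0.0])   # certainly in state A
one_step = start @ P           # comb across the rows to redistribute the mass
print(one_step)                # [0.9 0.1]
```

Each row sums to 1: everything that leaves a “from” state has to land somewhere.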

The “domain and image” viewpoint in the pictures above (which come from Flanigan & Kazdan about halfway through) is a truer expression of the matrix concept.

• [ [1, 0], [0, 1] ] maps the Mona Lisa to itself,
• [ [.799, −.602], [.602, .799] ] has a determinant of 1 — does not change the amount of paint — and rotates the Mona Lisa by 37° counterclockwise,
• [ [1, 0], [0, 2] ] stretches the image northward;
• and so on.
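You can replay those Mona Lisa matrices on a single ink-spot in a few lines (the spot’s coordinates are mine; the matrices are the ones above):

```python
import numpy as np

identity = np.array([[1.0, 0.0], [0.0, 1.0]])
rotate37 = np.array([[0.799, -0.602], [0.602, 0.799]])   # ~cos 37°, ~sin 37°
stretch  = np.array([[1.0, 0.0], [0.0, 2.0]])

p = np.array([1.0, 1.0])             # one ink-spot of the Mona Lisa

print(identity @ p)                  # [1. 1.]  (unchanged)
print(rotate37 @ p)                  # swung 37° counterclockwise
print(stretch @ p)                   # [1. 2.]  (same easting, doubled northing)
print(np.linalg.det(rotate37))       # ≈ 1: the amount of paint is conserved
```

The whole image is just “do this to every ink-spot at once,” which is exactly the transform-the-entire-space idea.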

MATRICES IN WORDS

Matrices aren’t* just 2-D blocks of numbers — that’s a 2-array. Matrices are linear transformations. Because “matrix” comes with rules about how the numbers combine (inner product, outer product), a matrix is a verb whereas a 2-array, which can hold any kind of data with any or no rules attached to it, is a noun.

* (NB: Computer languages like R, Java, and SAGE/Python have their own definitions. They usually treat vector == list && matrix == 2-array.)

Linear transformations in 1-D are incredibly restricted. They’re just proportional relationships, like “Buy 1 more carton of eggs and it will cost an extra $2.17. Buy 2 more cartons of eggs and it will cost an extra $4.34. Buy 3 more cartons of eggs and it will cost an extra $6.51….” Bo-ring.

In scary mathematical runes one writes:

$\large \begin{matrix} y \propto x \\ \textit{---or---} \\ y = \mathrm{const} \cdot x \end{matrix}$
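In code rather than runes, with the egg-carton numbers from above:

```python
import math

# 1-D linear map: extra cost is proportional to extra cartons.
def extra_cost(cartons, price_per_carton=2.17):
    return price_per_carton * cartons

print(extra_cost(1))   # 2.17
print(extra_cost(2))   # 4.34

# Additivity: cost of 3 cartons = cost of 1 + cost of 2 (up to float rounding).
assert math.isclose(extra_cost(3), extra_cost(1) + extra_cost(2))
```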

And the property of linearity itself is written:

$\large \begin{matrix} a \cdot f(\cdots, \; \blacksquare , \; \cdots) + b \cdot f( \cdots, \; \blacksquare,\; \cdots) \\ = \\ f( \cdots, \; a \cdot \blacksquare + b \cdot \blacksquare, \; \cdots) \end{matrix}$

• $f$ is the linear mapping
• $a, b \in$ the underlying number corpus $\mathbb{K}$
• the above holds for any term $\blacksquare$

Or say: rescale first or add first; the order doesn’t matter.
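You can spot-check that property numerically. Here’s a sketch with a randomly chosen matrix standing in for $f$ (the seed and shapes are arbitrary):

```python
import numpy as np

# Any matrix gives a linear map f; check linearity on random inputs.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
f = lambda v: M @ v

u, v = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.0, -0.5

# rescale-then-add equals add-then-rescale (up to float rounding):
assert np.allclose(a * f(u) + b * f(v), f(a * u + b * v))
```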



The matrix revolution generalises this simple concept so much that it’s hard to believe you’re still talking about the same thing. First comes the insight that mathematically abstract vectors, including vectors of generalised numbers, can represent just about anything: anything that can be “added” together.

And I put the word “added” in quotes because, as long as you define an operation that is commutative and associative and distributes over multiplication-by-a-scalar, you get to call it “addition”! See the mathematical definition of a vector space (or, more generally, a module).

• The blues scale has a different notion of “addition” than the diatonic scale.
• Something different happens when you add a spiteful remark to a pleased emotional state than when you add it to an angry emotional state.
• Modular and noncommutative things can be “added”. Clock time, food recipes, chemicals in a reaction, and all kinds of freaky mathematical fauna fall under these categories.
• Polynomials, knots, braids, semigroup elements, lattices, dynamical systems, networks, can be “added”. Or was that “multiplied”? Like, whatever.
• Quantum states (in physics) can be “added”.
• So “adding” is perhaps too specific a word—all we mean is “a two-place input, one-place output satisfying X, Y, Z”, where X, Y, Z are the properties from your elementary school textbook like identity, associativity, commutativity.
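A toy version of one of these exotic “additions”, clock time mod 12 (function name mine):

```python
# Clock time: a two-place input, one-place output with an identity (0),
# associativity, and commutativity. It earns the name "addition".
def clock_add(h1, h2):
    return (h1 + h2) % 12

print(clock_add(9, 5))                       # 2: five hours after 9 o'clock
assert clock_add(7, 0) == 7                  # 0 is the identity
assert clock_add(3, 8) == clock_add(8, 3)    # commutative
```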

But that’s just vectors. Matrices also add dimensionality. Linear transformations can be from and to any number of dimensions:

• 1→7
• 4→3
• 1671 → 5
• 18 → 188
• and X→1 is a special case, the functional. Functionals comprise performance metrics, size measurements, your final grade in a class, statistical moments (kurtosis, skew, variance, mean) and other statistical metrics (Value-at-Risk, median), divergence (not gradient nor curl), risk metrics, the temperature at any point in the room, EBITDA, but not function(x) { c( count(x), mean(x), median(x) ) } (that outputs three numbers, not one), and … I’ll do another article on functionals.
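A quick sketch of the functional idea: many numbers in, one number out (the vector values are invented):

```python
import numpy as np

v = np.array([1.0, 4.0, 4.0])

print(v.mean())    # 3.0 — the mean is a functional, and a linear one
print(v.max())     # 4.0 — the max is a functional too, but not a linear one

# Every *linear* functional on R^n is secretly a dot product
# with some fixed vector of weights:
weights = np.array([1/3, 1/3, 1/3])
assert np.isclose(weights @ v, v.mean())
```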

In contemplating these maps from dimensionality to dimensionality, it’s a blessing that the underlying equation is so simple as linear (proportional). When thinking about information leakage, multi-parameter cause & effect, sources & sinks in a many-equation dynamical system, images and preimages and dual spaces; when the objects being linearly transformed are systems of partial differential equations, — being able to reduce the issue to mere multi-proportionalities is what makes the problems tractable at all.

So that’s why so much painstaking care is taken in abstract linear algebra to be absolutely precise — so that the applications which rely on compositions or repetitions or atlases or inversions of linear mappings will definitely go through.



Why would anyone care to learn matrices?

Understanding of matrices is the key difference between those who “get” higher maths and those who don’t. I’ve seen many grad students and professors reading up on linear algebra because they need it to understand some deep papers in their field.

• Linear transformations can be stitched together to create manifolds.
• If you add Fourier | harmonic | spectral techniques + linear algebra, you get really trippy — yet informative — views on things. Like spectral mesh compressions of ponies.
• The “linear basis” and “linear combination” metaphors extend far. For example, to eigenfaces or When Doves Cry Inside a Convex Hull.
• You can’t understand slack vectors or optimisation without matrices.
• JPEG, discrete wavelet transform, and video compression rely on linear algebra.
• A 2-matrix characterises graphs or flows on graphs. So that’s Facebook friends, water networks, internet traffic, ecosystems, Ising magnetism, Wassily Leontief’s vision of the economy, herd behaviour, network-effects in sales (“going viral”), and much, much more that you can understand — after you get over the matrix bar.
• The expectation operator of statistics (“average”) is linear.
• Dropping a variable from your statistical analysis is linear. Mathematicians call it “projection onto a lower-dimensional space” (second-to-last example at top).
• Taking-the-derivative is linear. (The differential, a linear approximation of a could-be-nonlinear function, is the noun that results from doing the take-the-derivative verb.)
• The composition of two linear functions is linear. The sum of two linear functions is linear. From these it follows that long differential equations—consisting of chains of “zoom-in-to-infinity” (via “take-the-derivative”) and “do-a-proportional-transformation-there” then “zoom-back-out” … long, long chains of this, can amount in total to no more than a linear transformation.

• If you line up several linear transformations with the proper homes and targets, you can make hard problems easy and impossible problems tractable. The more “advanced-mathematics” the space you’re considering, the more things become linear transformations.
• That’s why linear operators are used in both quantum mechanical theory and practical things like building helicopters.
• You can understand dynamical systems, attractors, and thereby understand love better through matrices.
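And since taking-the-derivative is linear, it really can be written as a matrix. Here’s a sketch on polynomials of degree ≤ 3 (the coefficient convention is mine):

```python
import numpy as np

# Store a cubic c0 + c1·x + c2·x² + c3·x³ as the vector [c0, c1, c2, c3].
# Then taking-the-derivative is just this matrix:
D = np.array([
    [0, 1, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
])

p = np.array([5, 0, 1, 2])    # 5 + x² + 2x³
print(D @ p)                  # [0 2 6 0], i.e. 2x + 6x², the derivative
```

Applying `D` twice gives the second derivative, which is the composition-of-linear-maps point in matrix form.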