
Posts tagged with matrix










In many showers there are two taps: Hot and Cold. You use them to control both the pressure and the temperature of the water, but you do so indirectly: the pressure is controlled by the sum of the (position of the) two taps, while the temperature is controlled by their difference. Thus, the basis you are given:

Hot = (1,0)
Cold = (0,1)


Isn’t the basis you want:

Pressure = (1,1)
Temperature = (1,−1)

Alon Amit

(image: the basis isomorphism)
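A minimal numpy sketch of that change of basis; the tap positions 0.8 and 0.3 are invented for illustration:

```python
import numpy as np

# Change of basis from (Hot, Cold) tap positions to (Pressure, Temperature).
# Pressure = Hot + Cold and Temperature = Hot - Cold, so the matrix is:
M = np.array([[1,  1],
              [1, -1]])

hot_cold = np.array([0.8, 0.3])           # positions in the basis you're given
pressure_temp = M @ hot_cold              # coordinates in the basis you want
print(pressure_temp)                      # [1.1 0.5]

# Inverting M goes the other way: from a desired (pressure, temperature)
# back to the tap positions that produce it.
print(np.linalg.inv(M) @ pressure_temp)   # [0.8 0.3]
```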

 

(images: other bases)

(Source: qr.ae)




Here’s a physically intuitive reason that rotations ↺ (which seem circular) are in fact linear maps.

http://image.shutterstock.com/display_pic_with_logo/581935/107480831/stock-photo-one-blue-suitcase-with-wheels-d-render-107480831.jpg

If you have two independent wheels that can only roll straight forward and straight back, it is possible to turn the luggage. By doing both linear maps at once (which is what a matrix
\begin{pmatrix} a \rightsquigarrow a  & | &  a \rightsquigarrow b  & | &  a \rightsquigarrow c \\ \hline b \rightsquigarrow a  & | &  b \rightsquigarrow b  & | &  b \rightsquigarrow c \\ \hline c \rightsquigarrow a  & | &  c \rightsquigarrow b  & | &  c \rightsquigarrow c   \end{pmatrix}
or Lie action does) and opposite each other, two straights ↓↑ make a twist ↺.

Or if you could get a car | luggage | segway with split (= independent = disconnected) axles

http://static.ddmcdn.com/gif/jeep-hurricane-layout.gif

to roll the right wheel(s) independently and opposite to the left wheel(s)

http://web.mit.edu/first/segway/comparison.jpg

, then you would spin around in place.
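A minimal sketch of that spin-in-place idea, assuming two wheels a track-width L apart; body_motion is a made-up helper, not from any robotics library:

```python
# Differential drive: two independent wheels with forward speeds
# v_left and v_right, mounted a distance L apart.
def body_motion(v_left, v_right, L=0.5):
    v = (v_right + v_left) / 2      # straight-line component: the sum
    omega = (v_right - v_left) / L  # rotation component: the difference
    return v, omega

print(body_motion(1.0, 1.0))    # equal speeds: pure forward motion, no turn
print(body_motion(-1.0, 1.0))   # opposite speeds: v = 0, spin in place
```

Equal speeds give pure translation; equal-and-opposite speeds give pure rotation. Two straights ↓↑ really do make a twist ↺.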




Once you’re comfortable with 2-arrays and 2-tensors (ordinary matrices), you can move up a dimension or two, to 4-arrays or 4-tensors.

You can move up to a 3-array / 3-tensor just by imagining a matrix which “extends back into the blackboard”. Like a 5 × 5 matrix. With another 5 × 5 matrix behind it. And another 5 × 5 matrix behind that with 25 more entries. Etc.

The other way is to imagine “Tables of tables of tables of tables … of tables of tables of tables.” This imagination technique is infinitely extensible.

\begin{bmatrix}  \begin{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} & \begin{bmatrix} e & f \\ g & h \end{bmatrix} \\ \\ \begin{bmatrix} j & k \\ l & m \end{bmatrix} & \begin{bmatrix} n & o \\ p & q \end{bmatrix} \end{bmatrix} & \begin{bmatrix} \begin{bmatrix} r & s \\ t & u \end{bmatrix} & \begin{bmatrix} v & w \\ x & y \end{bmatrix} \\ \\ \begin{bmatrix} z & a' \\ b' & c' \end{bmatrix} & \begin{bmatrix} d' & e' \\ f' & g' \end{bmatrix} \end{bmatrix} \\ \\ \begin{bmatrix} \begin{bmatrix} h' & j' \\ k' & l' \end{bmatrix} & \begin{bmatrix} m' & n' \\ o' & p' \end{bmatrix} \\ \\ \begin{bmatrix} q' & r' \\ s' & t' \end{bmatrix} & \begin{bmatrix} u' & v' \\ w' & x' \end{bmatrix} \end{bmatrix} & \begin{bmatrix} \begin{bmatrix} y' & z' \\ a'' & b'' \end{bmatrix} & \begin{bmatrix} c'' & d'' \\ e'' & f'' \end{bmatrix} \\ \\ \begin{bmatrix} g'' & h'' \\ j'' & k'' \end{bmatrix} & \begin{bmatrix} l'' & m'' \\ n'' & o'' \end{bmatrix} \end{bmatrix} \end{bmatrix}

If that looks complicated, it’s just because simple recursion can produce convoluted outputs. Reading the LaTeX (alt text) is definitely harder than writing it was. (I just cut & paste \begin{bmatrix} stuff \end{bmatrix} inside other \begin{bmatrix} … \end{bmatrix}.)

(The technical difference between an array and a tensor: an array is a block which holds data. A tensor is a block of numbers which (linearly) transform matrices / vectors / tensors. Array = noun. Tensor = verb.)

As the last picture — the most important one — demonstrates, a 4-array can be filled with completely plain, ordinary, pedestrian information like age, weight, height.

Inside each of the yellow or blue boxes in the earlier pictures is a datum. What calls for the high-dimensional array is the structure and inter-relationships of the information. Age, height, sex, and weight each belongs_to a particular person, in an object-oriented sense. And one can marginalise, in a statistical sense, over any of those variables: consider all the ages of the people surveyed, for example.

One last takeaway:

  • Normal, pedestrian, run-of-the-mill, everyday descriptions of things = high-dimensional arrays of varying data types.

Normal people speak about and conceive of information which fits high-D arrays all the time. “Attached” (in the fibre sense) to any person you know is a huge database of facts. Not to mention data-intensive visual information like parameterisations of the surface of their face, which we naturally process in the blink of an eye.
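A minimal numpy sketch of such a 4-array; the axes (city, household, person, attribute) and the random data are invented purely for illustration:

```python
import numpy as np

# A made-up 4-array of survey data: axes are
# (city, household, person, attribute), with attribute = (age, height, weight).
rng = np.random.default_rng(0)
survey = rng.uniform(0, 100, size=(3, 4, 2, 3))

print(survey[1, 2, 0])         # one person's (age, height, weight) record
print(survey[..., 0].mean())   # marginalise: the average age over everyone
```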

(Source: slideshare.net)










This is trippy, and profound.

The determinant — which tells you the change in size after a matrix transformation 𝓜 — is just an Instance of the Alternating Multilinear Map.

(Alternating meaning it goes + − + − + − + − …. Multilinear meaning linear in every term, ceteris paribus:

a \; f(\cdots \blacksquare \cdots) + b \; f(\cdots \blacksquare \cdots) = f(\cdots \, a \, \blacksquare + b \, \blacksquare \, \cdots) \\ \qquad \footnotesize{\bullet \; f \text{ is the multilinear mapping}} \\ \qquad \bullet \; a, b \in \text{the underlying field } \mathbb{K} \\ \qquad \bullet \; \text{the above holds for any term } \blacksquare \text{, if done one at a time} )
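A quick numpy check of both properties for the determinant; the 3×3 matrix, rows, and scalars are random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
u, v = rng.normal(size=3), rng.normal(size=3)
a, b = 2.0, -3.0

# Multilinear: det is linear in any single row, the others held fixed.
Au, Av, Auv = A.copy(), A.copy(), A.copy()
Au[0], Av[0], Auv[0] = u, v, a*u + b*v
print(np.isclose(np.linalg.det(Auv),
                 a*np.linalg.det(Au) + b*np.linalg.det(Av)))       # True

# Alternating: swapping two rows flips the sign (+ - + - ...).
print(np.isclose(np.linalg.det(A[[1, 0, 2]]), -np.linalg.det(A)))  # True
```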

 

Now we trip: the inner product, which tells you the “angle” between 2 things, in a super abstract sense, is also an instantiation of the Multilinear Map (symmetric, in its case, rather than alternating).

In conclusion, mathematics proves that Size is the same kind of thing as Angle.

Say whaaaaaat? I’m going to go get high now and watch Koyaanisqatsi.




In the world of linear approximations of multiple parameters and multiple outputs, the Jacobian is a matrix that tells you: if I twist this knob, how does that part of the output change?


(The Jacobian is defined at a point. If the space is not flat, but instead only approximated by flat pieces joined together, then you would stitch together different Jacobians as you stitch together the different flat patches.)

Pretend that a through z are parameters, or knobs you can twist. Let’s not say whether you have control over them (endogenous variables) or whether the environment / your customers / your competitors / nature / external factors have control over them (exogenous parameters).

And pretend that A through F are the separate kinds of output. You can think in terms of a real number or something else, but as far as I know the outputs cannot be linked in a lattice or anything other than a matrix rectangle.

In other words this matrix is just an organised list of “how parameter c affects output F”. 

Nota bene: the Jacobian is just a linear approximation. It doesn’t carry any of the info about mutual influence, connections between variables, curvature, wiggle, womp, kurtosis, cyclicity, or even interaction effects.

A Jacobian tensor would tell you how twisting knob a knocks on through parameters h, l, and p. Still linear but you could work out the outcome better in a difficult system — or figure out what happens if you twist two knobs at once.


In maths jargon: the Jacobian is a matrix filled with partial derivatives.
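A minimal finite-difference sketch of a Jacobian; the function f, its three knobs, and its two outputs are made up for illustration:

```python
import numpy as np

def f(x):
    # a made-up map from 3 knobs to 2 outputs
    return np.array([x[0] * x[1], np.sin(x[2]) + x[0]])

def jacobian(f, x, h=1e-6):
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for j in range(len(x)):           # twist knob j a tiny amount...
        xh = x.copy()
        xh[j] += h
        J[:, j] = (f(xh) - fx) / h    # ...and record how each output moves
    return J

x = np.array([1.0, 2.0, 0.5])
print(jacobian(f, x))   # entry (i, j) = d(output i) / d(knob j)
```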




You know what’s surprising?

  • Rotations are linear transformations.

I guess I knew the fact, but didn’t understand it. Like, I could write you the matrix formula for a rotation by an angle θ:

R(\theta) = \begin{bmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{bmatrix}

But why is that linear? Lines are straight and circles bend. When you rotate something you are moving it along a circle. So how can that be linear?

I guess 2-D linear mappings ℝ²→ℝ² surprise our natural 1-D way of thinking about “straightness”.
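A quick numpy check that a rotation really does obey the two linearity laws; the vectors, scalar, and angle are invented for illustration:

```python
import numpy as np

def R(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
c, theta = 4.0, 0.7

# Rotating a sum = sum of the rotations; scalars pass straight through.
print(np.allclose(R(theta) @ (u + v), R(theta) @ u + R(theta) @ v))  # True
print(np.allclose(R(theta) @ (c * u), c * (R(theta) @ u)))           # True
```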




Well I thought the outer product was more complicated than this.

a × bᵀ    instead of    aᵀ  ×  b

An inner product is constructed by multiplying vectors A and B like Aᵀ × B. (ᵀ is for turned.) In other words, timesing each a guy from A by his corresponding b guy from B.

After summing those products, the result is just one number.  In other words the total effect was to convert two length-n vectors into just one number. Thus mapping a large space onto a small space, ℝⁿ→ℝ.  Hence inner.

Outer product, you just do A × Bᵀ.  That has the effect of filling up a matrix with the contents of every possible multiplicative combination of a's and b's.  Which maps a large space onto a much larger one, roughly squared in size, for instance putting two ℝⁿ vectors together into an ℝⁿˣⁿ matrix.


No operation was done to consolidate those products; they were left as individual pieces.

So the inner product gives a “brief” answer (two vectors ↦ a number), and the outer product gives a “longwinded” answer (two vectors ↦ a matrix). Otherwise — procedurally — they are very similar.
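A minimal numpy illustration of the contrast; the vectors are made up:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(np.inner(a, b))   # a^T b: one number (32.0), R^n x R^n -> R
print(np.outer(a, b))   # a b^T: a 3x3 matrix of every product a_i * b_j
```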

(Source: Wikipedia)





Bilinear maps and dual spaces

Think of a function that takes two inputs and gives one output. The + operator is like that. 9+10=19 or, if you prefer to be computer-y about it, plus(9, 10) returns 19.

So is the relation “the degree to which X loves Y”. Takes as inputs two people and returns the degree to which the first loves the second. Not necessarily symmetrical! I.e. love(A→B) ≠ love(B→A). * It can get quite dramatic.

L(A, B) \neq L(B, A)

An operator could also take three or four inputs.  The vanilla Black-Scholes price of a call option asks for {the current price, desired exercise price, [European | American | Asian], date of expiry, volatility}.  That’s five inputs: three ℝ⁺ numbers, one option from a set isomorphic to {1,2,3} = ℕ₃, and one date.

(image: inputs and outputs of vanilla Black-Scholes)

A bilinear map takes two inputs, and it’s linear in both terms.  Meaning if you adjust one of the inputs, the final change to the output is only a linear difference.

Multiplication is a bilinear operation (think 3×17 versus 3×18). Vectorial dot multiplication is a bilinear operation. Vectorial cross multiplication is a bilinear operation but it returns a vector instead of a scalar. Matrix multiplication is a bilinear operation which returns another matrix. And tensor multiplication ⊗, too, is bilinear.
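A quick numpy check of that bilinearity for the dot and cross products; the vectors and scalars are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
a, b = 2.0, -1.5

# Dot product: linear in the first slot (and, by symmetry, the second).
print(np.isclose(np.dot(a*u + b*v, w),
                 a*np.dot(u, w) + b*np.dot(v, w)))       # True

# Cross product: bilinear too, but it returns a vector.
print(np.allclose(np.cross(a*u + b*v, w),
                  a*np.cross(u, w) + b*np.cross(v, w)))  # True
```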

Above, Juan Marquez shows the different bilinear operators and their duals. The point is that it’s just symbol chasing.

(image: Dual Cube-Octahedron.svg)

* The distinct usage “I love sandwiches” would be considered a separate mathematical operator since it takes a different kind of input.

