
One reason polynomials are interesting is that you can use them to encode sequences.


In fact some of the theory of abstract algebra (the theory of rings) deals specifically with how your perspective changes when you erase all of the x^297 and x^16 terms and think instead about a sequence of numbers, which actually doesn’t represent a sequence at all but one single function.


When you put that together with observations about polynomials

  • Every sequence is a function. (OK, can be made into a function / corresponds to a function)
  • So plain-old sequences like 2, 8, 197, 1780, … actually represent curvy, warped things.
  • Sequences of infinite length are just as admissible as sequences that finish.
    (After all, you see infinite series all the time in maths: Laurent series, Taylor series, Fourier series, convergent series for pi, and on and on.)
  • Any questions about analyticity, meromorphicity, convergence-of-series, etc, and any tools used to answer them, now apply to plain old sequences-of-numbers.
  • Remember Taylor polynomials? There’s a calculus connection here.
  • Derivatives and integrals can be performed on any sequence of plain-old-numbers. They correspond (modulo k!) to a left-shift and right-shift of the sequence.
  • You can take the Fourier transform of a sequence of numbers.
  • How about integer sequences from the OEIS? What do those functions look like? How about once they’re Taylored down? (each term divided by k!.)
  • Sequences are lists. Sequences are polynomials. Vectors are lists. Ergo—polynomials are vectors?!
  • Yes, they are, and due to Taylor’s theorem sequences-as-vectors constitute a basis for all analytic ℝ→ℝ functions. (Analytic, not merely smooth: a smooth function’s Taylor series needn’t converge back to the function.)
  • The first question of algebraic geometry arises from this viewpoint as well. A sequence of "numbers" instantiates a polynomial, which has “zeroes”. (The places where the weighted x^1192 terms sum to 0.)

    So middle-school algebra instantiates a natural mapping from one sequence to another. For example (1, −2, 1) ↦ (1, 1) and (1, −1, −1) ↦ (1−φ, φ). Look, I don’t make the rules. That correspondence just is there, because of logic.

    Instead of thinking sequence → polynomial → curve on a graph → places where the curve passes through a horizontal line, you can think sequence → sequence. How are sequences→ connected to →sequences? Here’s an example sequence (0.0, 1.1, 2.2, 3.3, 4.4, 0, 0, 7.7) to start playing with on WolframAlpha. Try to understand how the roots dance around when you change the sequence.
  • Looking at sequences as polynomials explains the partition function (how many ways can you split up 7?), as explained here.
  • Also, general combinatorics problems (http://en.wikipedia.org/wiki/Enumerative_combinatorics) besides the partition example are often answered by a polynomial-as-sequence.
  • Did I mention that combinatorics are the basis for Algorithms that make computers run faster?
  • Did I mention that Algorithms class is one of the two fundae that set hunky Computer Scientists above the rest of us dipsh_t programmers?
  • There is a connection to knots as well.
  • Which means that group theory / braid theory / knot theory can be used to simplify any problem that reduces to “some polynomial”.
  • Which means that, if complicated systems of particles, financial patterns, whatever, can be reduced to a polynomial, then I can use a much simpler (and more visual) way of reasoning about the complicated thing.
  • I think this stuff also relates to Gödel numbers, which encode mathematical proofs.
  • You can encode all of the outputs of a ℕ→ℕ function as a sequence. Which means you may be able to factor a sequence into the product of other sequences. In other words, maybe you can multiply simple sequences together to get the complicated sequence—or function—you’re looking for.
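Several of the bullets above (sequence-as-polynomial, derivative-as-shift, multiplication-as-convolution, sequence ↦ roots) can be poked at directly in a few lines of Python. A minimal sketch; the particular sequences are just examples:

```python
import numpy as np

# A sequence IS a polynomial:  (a0, a1, a2, ...)  <->  a0 + a1*x + a2*x^2 + ...
# (coefficients are lowest-degree-first in these homemade helpers)

def evaluate(seq, x):
    """Evaluate the polynomial encoded by the coefficient sequence at x."""
    return sum(a * x**k for k, a in enumerate(seq))

def derivative(seq):
    """d/dx is essentially a left-shift of the sequence: a_k -> (k+1) * a_{k+1}."""
    return [k * a for k, a in enumerate(seq)][1:]

def integral(seq):
    """Integration is the right-shift: prepend a 0 and divide a_k by k+1."""
    return [0] + [a / (k + 1) for k, a in enumerate(seq)]

def multiply(f, g):
    """Multiplying polynomials = convolving their coefficient sequences."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

print(evaluate([1, -1, -1], 2))   # 1 - 2 - 4 = -5
print(derivative([1, -1, -1]))    # [-1, -2], i.e. -1 - 2x
print(multiply([1, 1], [1, 1]))   # [1, 2, 1], i.e. (1+x)^2
print(np.roots([1, -1, -1]))      # np.roots reads highest degree first:
                                  # x^2 - x - 1 has roots phi and 1 - phi
```

One gotcha: the homemade helpers read coefficients lowest-degree-first, while numpy’s `roots` wants them highest-degree-first.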

This is an example of why the kind of language mathematics is, is quite nice. Every author’s sprawling thoughts, coming from here and going to there while taking a detour to la-la land, are condensed by uniformity of notation. Then by force of reasoning, analogies are held fast, concrete is poured over them, and eventually you can walk across the bridge to Terabithia. Try nailing down parallels between Marx & Engels; it’s much harder.

All of these connections give one an archaeological feeling, like … What exactly am I unearthing here? 

The chief triumph of differential calculus is this:

Any nonlinear function can be approximated by a linear function.

(OK…pretty much any nonlinear function.) That approximation is the differential, aka the tangent line, aka the best affine approximation.  It is valid only around a small area but that’s good enough. Because small areas can be put together to make big areas. And short lines can make nonlinear* curves.

In other words, zoom in on a function enough and it looks like a simple line. Even when the zoomed-out picture is shaky, wiggly, jumpy, scrawly, volatile, or intermittently-volatile-and-not-volatile:

[Figure: Fed funds rate history since 1990 -- data back to 1949 available at www.economagic.com]

Moreover, calculus says how far off those linear approximations are. So you know how tiny the straight, flat puzzle pieces should be to look like a curve when put together. That kind of advice is good enough to engineer with.
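Here is a quick numerical check of both claims: the tangent line is good nearby, and the Taylor remainder says how good. A sketch in Python; sin and the point a = 1 are arbitrary picks:

```python
import math

# Tangent-line (best affine) approximation of sin at a = 1:
#   sin(a + h)  ≈  sin(a) + cos(a) * h
a = 1.0
def approx(h):
    return math.sin(a) + math.cos(a) * h

# Taylor's remainder bounds the error by |f''| * h^2 / 2, and for sin
# that is at most h^2 / 2.  Shrink h and watch the error obey the bound:
for h in [0.1, 0.01, 0.001]:
    err = abs(math.sin(a + h) - approx(h))
    print(h, err, h ** 2 / 2)
```

Halving h quarters the error, which is exactly the “how far off” advice you engineer with.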


It’s surprising that you can break things down like that, because nonlinear functions can get really, really intricate. The world is, like, complicated.

So it’s reassuring to know that ideas that are built up from counting & grouping rocks on the ground, and drawing lines & circles in the sand, are in principle capable of describing ocean currents, architecture, finance, computers, mechanics, earthquakes, electronics, physics.


(OK, there are other reasons to be less optimistic.)



* What’s so terrible about nonlinear functions anyway? They’re not terrible, they’re terribly interesting. It’s just nearly impossible to generally, completely and totally solve nonlinear problems.

But lines are doable. You can project lines outward. You can solve systems of linear equations with the tap of a computer.  So if it’s possible to decompose nonlinear things into linear pieces, you’re money.
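The “tap of a computer” is literally one call; a sketch assuming numpy is on hand, with a made-up 2×2 system:

```python
import numpy as np

# Solve the linear system   2x + y = 5,   x - y = 1   in one call.
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])
x = np.linalg.solve(A, b)
print(x)   # [2. 1.], i.e. x = 2, y = 1
```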


Two more findings from calculus.

  1. One can get closer to the nonlinear truth even faster by using polynomials. Put another way, the simple operations of + and ×, taught in elementary school, are good enough to do pretty much anything, so long as you do + and × enough times. 

  2. One can also get arbitrarily truthy using trig functions. You may not remember sin & cos but they are dead simple. More later on the sexy things you can do with them (Fourier decomposition).
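Point 1 in action: here is sin built out of nothing but +, ×, and one ÷ for the factorials. A sketch; ten terms is an arbitrary cutoff:

```python
import math

def taylor_sin(x, terms=10):
    """sin(x) as a polynomial: x - x^3/3! + x^5/5! - ...  Just +, x, and /."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # build each term from the previous one with a couple of multiplications
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(taylor_sin(1.0))   # ≈ 0.8414709848
print(math.sin(1.0))     # ≈ 0.8414709848
```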

Branes, D-branes, M-theory, K-theory … news articles about theoretical physics often mention “manifolds”.  Manifolds are also good tools for theoretical psychology and economics. Thinking about manifolds is guaranteed to make you sexy and interesting.

Fortunately, these fancy surfaces are already familiar to anyone who has played the original Star Fox—Super NES version.


In Star Fox, all of the interactive shapes are built up from polygons.  Manifolds are built up the same way!  You don’t have to use polygons per se, just stick flats together and you build up any surface you want, in the mathematical limit.

The point of doing it this way, is that you can use all the power of linear algebra and calculus on each of those flats, or “charts”.  Then as long as you’re clear on how to transition from chart to chart (from polygon to polygon), you know the whole surface—to precise mathematical detail.
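As a toy version of charts-plus-transitions, here is the unit circle covered by two flat charts, with the chart-to-chart transition written out. The chart choices are mine, not canonical:

```python
import math

# The unit circle as a manifold: two overlapping one-dimensional charts.
# Chart U covers the upper semicircle (y > 0); its coordinate is the x-value.
# Chart R covers the right semicircle (x > 0); its coordinate is the y-value.

def chart_U(point):
    x, y = point          # requires y > 0
    return x

def chart_R(point):
    x, y = point          # requires x > 0
    return y

def transition_U_to_R(u):
    """On the overlap (x > 0 and y > 0): from the chart-U coordinate u, the
    point is (u, sqrt(1 - u^2)), whose chart-R coordinate is that y-value."""
    return math.sqrt(1 - u * u)

p = (0.6, 0.8)                          # a point on the circle, in both charts
print(chart_R(p))                       # ≈ 0.8, read off directly
print(transition_U_to_R(chart_U(p)))    # ≈ 0.8, computed chart-to-chart
```

Knowing each chart plus each transition map is knowing the whole circle.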



Regarding curvature: the metric laid down on the charts doesn’t need to be the Euclidean one.  As long as distance is measured in a consistent way, the manifold is all good.  So you could use hyperbolic, elliptical, or quasimetric distance, to name just a few options.

Manifolds are relevant because according to general relativity, spacetime itself is curved.  For example, a black hole or star or planet bends the “rigid rods” that Newton & Descartes supposed make up the fabric of space.

[image: bent spacetime]

[image: black hole photo]

In fact, the same “curved-space” idea describes racism. Psychological experiments demonstrate that people are able to distinguish fine detail among their own ethnic group, whereas those outside the group are quickly & coarsely categorized as “other”.

This means a hyperbolic or other “negatively curved” metric, where the distance from 0 to 1 is greater than the distance from 100 to 101.  Imagine longitude & latitude lines tightly packed together around “0”, one’s own perspective — and spread out where the “others” stand.  (I forget if this paradigm changes when kids are raised in multiracial environments.)
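As a toy sketch of that picture (my own choice of metric, not anything from the psychology literature): put the metric ds = dx/x on the positive half-line, which is what you get along a vertical geodesic of the hyperbolic half-plane. The distance from a to b is then ln(b/a), and one-unit steps near “self” count for far more than one-unit steps far away:

```python
import math

# Toy metric ds = dx / x on the positive half-line (a vertical geodesic of
# the hyperbolic half-plane).  Distance from a to b integrates to ln(b / a).
def perceptual_distance(a, b):
    return abs(math.log(b / a))

print(perceptual_distance(1, 2))      # ≈ 0.693  : "nearby" people look distinct
print(perceptual_distance(100, 101))  # ≈ 0.00995: "distant" people blur together
```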


If you stitch together such non-Euclidean flats, you’ve again constructed a manifold.

Think about this: the pixel concept re-presents brush-stroke or natural images by a wall of sequential colored squares.  You could extend it to 3-D, for example representing humans by little blocks—white for the bone, burgundy for the blood, pink for the fingernails, etc.

In a similar fashion, the manifold concept extends rectilinear reasoning familiar from grade-school math into the more exciting, less restrictive world of the squibbulous, the bubbulous, and the flipflopflegabbulous.

[image: Calabi–Yau manifold]

[image: cat detective]