Posted by: holdenlee | November 1, 2015

October Links and Activities

Links (*found especially interesting, [h] personal highlights)

Other media/activities

Posted by: holdenlee | October 17, 2015

Wymar, the city of wind

Wymar was the city of wind. It was nestled between hills. Each house had a weathervane, with some icon that glinted in the sun: a silver lark, a spinning solar system, a blue-and-white teacup that crawled with Sina-style dragons, a whale, a bionic dancer in heels. They swiveled and creaked. Laundry fluttered on lines that were weighed down with round, golden ornaments in the shape of fruit. Shirts and underpants were clipped on with steel teeth.

Keep reading (on googledocs)…

This is a chapter of the novel I was trying to write last November. The novel fell apart, but this is one part I really liked (and is seasonally appropriate!), so I thought I’d share it. This is pretty much the raw version, with minimal edits. (There’s background in the linked post, but it isn’t too helpful. All you really need to know is that Linus and Zeo are the main characters, their bots are Linx and Argon, and they’re touring the world.)

Posted by: holdenlee | October 9, 2015

The LLL lattice basis reduction algorithm

I gave a talk on Tuesday at the “Gems of Theoretical Computer Science” seminar on the LLL algorithm.


The LLL (Lenstra–Lenstra–Lovász) algorithm is a lattice reduction algorithm that can approximate the shortest vector in a lattice, and has numerous applications (which I’ll cover as time permits), including polynomial-time algorithms to

  • factor polynomials over the rationals
  • given an approximate root of a low-degree rational polynomial, recover the polynomial
  • find integer relations such as \pi = 176 \tan^{-1}(\frac{1}{57}) + 28 \tan^{-1}(\frac{1}{239}) - 48 \tan^{-1}(\frac{1}{682}) + 96 \tan^{-1}(\frac{1}{12943}).
  • solve integer programming in fixed dimension
  • break certain cryptosystems.
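
To make the "reduce"/"swap" structure concrete, here is a minimal, unoptimized Python sketch of the textbook algorithm. (This is an illustration, not the code from the talk: the function names and the choice of delta = 3/4 are my own, and Gram-Schmidt is recomputed from scratch after every change, which is wasteful but keeps the logic visible.)

```python
from fractions import Fraction


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization (no normalization), in exact arithmetic.

    Returns the orthogonal vectors b* and the coefficients
    mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    """
    n = len(basis)
    bstar = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(v, bstar[j]) / dot(bstar[j], bstar[j])
            v = [vk - mu[i][j] * wk for vk, wk in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu


def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of a list of linearly independent integer row vectors."""
    basis = [list(b) for b in basis]
    n = len(basis)
    k = 1
    while k < n:
        # Reduce step: make |mu[k][j]| <= 1/2 by subtracting integer
        # multiples of earlier basis vectors from b_k.
        for j in range(k - 1, -1, -1):
            _, mu = gram_schmidt(basis)
            q = round(mu[k][j])
            if q != 0:
                basis[k] = [x - q * y for x, y in zip(basis[k], basis[j])]
        bstar, mu = gram_schmidt(basis)
        # Swap step: if the Lovasz condition holds, move on; otherwise
        # swap b_{k-1} and b_k and step back.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            k = max(k - 1, 1)
    return basis


if __name__ == "__main__":
    # A small skewed basis; the reduced basis should consist of much shorter vectors.
    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```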

Notes from the talk are here. (Typo on pg. 3: the example at the bottom should be [[1, .3, 4.2], [0, 1, -.3], [0, 0, 1]] -> [[1, .3, .2], [0, 1, -.3], [0, 0, 1]]. I didn't get to cover all the topics.) Quick summary:

  • The goal is to find the shortest vector in a lattice. The LLL algorithm draws inspiration from two simple algorithms: the Euclidean algorithm (the 1-D case, which alternates a "reduce" and a "swap" step) and the Gram-Schmidt orthogonalization process (which generalizes to higher dimensions). The definition of a reduced basis falls out "naturally" from designing an algorithm around these "reduce" and "swap" steps. LLL only approximates the shortest vector up to a factor exponential in the dimension, but this is good enough for many purposes.
  • The algorithm terminates because smaller vectors filter to the front; the termination proof turns this into an invariant, a potential function that decreases by a constant factor at each swap.
  • A closely related problem is the Closest Vector Problem. It can be solved approximately by moving the target point into a fundamental parallelepiped of the reduced lattice.
  • The LLL algorithm can be used to find integer relations between numbers, given sufficiently good approximations (a toy example is sketched after this list). Applying this to the powers of an algebraic number, we can recover its minimal polynomial; applying it to the roots of a polynomial, we can factor the polynomial.
  • The algorithm for the Closest Vector Problem can solve integer programming in fixed dimension, by recursion on the dimension. Bounds on the sizes of the vectors in a reduced basis bound the number of hyperplanes needed to cut the convex set.
  • LLL can break RSA with a small public exponent by finding small roots of low-degree polynomials mod N (Coppersmith's method).
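
As a toy illustration of the integer-relation application, one can feed scaled approximations to the lll() sketch above: take the standard basis vectors, append an extra coordinate round(C x_i) to each for a large scale factor C, and reduce. (The numbers and names below are my own choices; with good enough approximations, the first reduced vector should encode Machin's formula \pi/4 = 4\tan^{-1}(\frac{1}{5}) - \tan^{-1}(\frac{1}{239}), i.e. the relation (1, -4, 1) up to sign.)

```python
import math

# Toy integer-relation search using the lll() sketch above: look for small
# integers (a1, a2, a3) with a1*arctan(1) + a2*arctan(1/5) + a3*arctan(1/239) ~ 0.
xs = [math.atan(1), math.atan(1 / 5), math.atan(1 / 239)]
C = 10**10  # scale factor; larger C demands a more precise relation
basis = [[1 if i == j else 0 for j in range(len(xs))] + [round(C * x)]
         for i, x in enumerate(xs)]
reduced = lll(basis)
print(reduced[0][:len(xs)])  # should be +/-(1, -4, 1), i.e. Machin's formula
```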

Some more resources are the following:

Posted by: holdenlee | October 4, 2015

Links and Activities, 9/2015

*Found particularly interesting.

Links and readings


Posted by: holdenlee | September 5, 2015

Media consumption, 8/23-9/5

* denotes something I found particularly interesting.

[h] is a highlighted page (I’m in the habit of using scrible to highlight web pages I read – these are for my own convenience and I don’t expect others to find them useful).

Posted by: holdenlee | August 23, 2015

Media consumption, 8/9-22

Posted by: holdenlee | August 8, 2015

Media consumption, 8/2-8

I’m presenting Barron’s Theorem in the Machine Learning reading group today.

Abstract: I will state and prove Barron’s Theorem, which shows that neural networks with one hidden layer can evade the curse of dimensionality. Barron’s Theorem bounds the error of the best neural net approximation to a function, in terms of the number of hidden nodes and the smoothness of the function, independently of the dimension.

Notes are available here (source).
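
For reference, the quantitative statement (roughly as in Barron's 1993 paper; treat the exact constants as a sketch from memory): if f has a Fourier representation with moment C_f = \int |\omega| \, |\hat{f}(\omega)| \, d\omega < \infty, then for every n there is a one-hidden-layer sigmoidal network f_n with n hidden units such that

\int_{|x| \le r} (f(x) - f_n(x))^2 \, \mu(dx) \le \frac{(2 r C_f)^2}{n}

for any probability measure \mu on the ball of radius r. The dimension does not appear explicitly on the right-hand side; it enters only through C_f, which can be very large for functions of practical interest (related to the caveat in the update below).
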
Update: Thanks to Alex Anderson for pointing out limitations of Barron’s Theorem. In his words:
In the context of deep learning, Barron’s Theorem has been a huge red-herring for the neural network community. Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters), and these approaches are never used in practice now. There are some things happening that are much more subtle than can be treated in this fashion.
Here are some recent papers he recommends:
Posted by: holdenlee | August 1, 2015

Media consumption, 7/26-8/1

* = found particularly interesting

  •  Podcasts
    • 99% Invisible
      • Automation paradox*: 2-part series on automation.
        • 170 Children of the magenta – How automation has changed airplane safety (no question it increases safety, but automation can lead to incompetence – pilots being too reliant)
        • 171 Johnnycab: A world with self-driving cars. How would cities change?
      • 172 On location: on the Bradbury building, a film icon in Los Angeles
      • 173 Awareness: Ribbons for causes – it started with AIDS. How to do it: get the ribbons in a lot of high-profile places, and then tell people it’s for AIDS awareness
      • 174 From the sea, freedom: Micronations
      • 169 Freud’s couch
    • Freakonomics:
    • TED radio hour
      • Finite* (innovation comes out of scarcity; rainforests contain a wealth of medical knowledge that we haven’t valued and have turned away from; antibiotics should be viewed as a scarce resource because they create resistance)
      • Transformation
  • Following Slate Star Codex*. (Things like the minimum wage, tenure, and medical school admission create a “duality”: a gap between those who get it and those who don’t, with similar consequences in each case.) (People misuse the word “debunk” to refer to literature that opposes a belief, rather than to something that has actually been publicly refuted; often both sides claim to have “debunked” each other.) Lots of links. It’s very draining to sift through lots of media to find interesting things, so a curated list of links is gold.
  • Glanced at the first two chapters of Experimental mathematics (website). There seem to be a lot of interesting math problems to work on here, though I mostly just read the notes on doing math:
    • Hardy and Littlewood’s 4 axioms for collaboration:

      The first [axiom] said that when one wrote to the other (they often preferred to exchange thoughts in writing instead of orally), it was completely indifferent whether what they said was right or wrong. As Hardy put it, otherwise they could not write completely as they pleased, but would have to feel a certain responsibility thereby. The second axiom was to the effect that, when one received a letter from the other, he was under no obligation whatsoever to read it, let alone answer it, – because, as they said, it might be that the recipient of the letter would prefer not to work at that particular time, or perhaps that he was just then interested in other problems…. The third axiom was to the effect that, although it did not really matter if they both thought about the same detail, still, it was preferable that they should not do so. And, finally, the fourth, and perhaps most important axiom, stated that it was quite indifferent if one of them had not contributed the least bit to the contents of a paper under their common name; otherwise there would constantly arise quarrels and difficulties in that now one, and now the other, would oppose being named co-author.

    • Observation vs. proof in math (p. 11)

      I have myself always thought of a mathematician as in the first instance an observer, a man who gazes at a distant range of mountains and notes down his observations… proofs are what Littlewood and I call gas, rhetorical flourishes designed to affect psychology, pictures on the board in the lecture, devices to stimulate the imagination of pupils. – G. H. Hardy

    • Site + article on Escher: What an Escher picture has to do with elliptic curves.
  • Curiosity – thought about this after listening to From Curiosity to Discovery (TED radio hour) a while back. Curiosity is an important ingredient of a fulfilling life, yet it is not much talked about or encouraged in society. Picked up Curious by Ian Leslie and A Curious Mind by Brian Grazer.
    Snippets from the intro:

    No one had thought to let him in on a secret: “I was suddenly seeing that the world is incredibly interesting… The closer you look at anything, the more interesting it gets. But nobody tells you this.”

    For most of Western history, it has been regarded as at best a distraction, at worst a poison, corrosive to the soul and to society… curiosity is deviant.
    [The Renaissance marked a shift where society realized it was good to ask questions.]

    [the] new challenge is to make people hungry to learn, question, and create…. how to instill a culture of inquiry and critical thinking into their educational systems.

    Curiosity is more state than trait: we can arrange our lives to stoke our curiosity or quash it.

  • An interesting talk by Nisheeth Vishnoi on thinking about evolution using computer science. The papers are quite readable.
  • LLL algorithm, notes and slides. (One application is finding algebraic dependencies; see Experimental mathematics above.)
  • Started reading Game of Thrones.
  • Watched up to episode 11 of My Little Pony
Posted by: holdenlee | July 25, 2015

Media consumption, 7/19-25

To keep track of some of the stuff I read/watch/listen to/play. Perhaps I’ll post one of these every week or two.
