Posted by: holdenlee | July 25, 2015

## Media consumption, 7/19-25

To keep track of some of the stuff I read/watch/listen to/play. Perhaps I’ll post one of these every week or two.

Posted by: holdenlee | July 21, 2015

## Fully Homomorphic Encryption

I’m giving a talk today on fully homomorphic encryption at the student-run “Gems of theoretical computer science” seminar.

Here are the notes (source).
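
The notes cover the real construction; as a warm-up, here is a toy Paillier cryptosystem in Python. It is only additively homomorphic (not fully homomorphic), and the tiny primes make it completely insecure; it is just a sketch of what "computing on ciphertexts without decrypting" means.

```python
import math, random

# Toy Paillier cryptosystem: additively homomorphic, NOT fully homomorphic,
# and NOT secure at these parameter sizes. Real use needs ~1024-bit primes.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael's function of n = p*q
g = n + 1
mu = pow(lam, -1, n)               # with g = n+1, L(g^lam mod n^2) = lam

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n  # the function L(u) = (u - 1) / n
    return (L * mu) % n

# multiplying ciphertexts adds the underlying plaintexts
c1, c2 = encrypt(5), encrypt(7)
print(decrypt((c1 * c2) % n2))     # -> 12, computed without decrypting c1 or c2
```

Fully homomorphic schemes additionally support multiplication of plaintexts under encryption, which is the hard part the talk is about.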

Posted by: holdenlee | July 21, 2015

## TCS talk: High-rate locally-correctable and locally-testable codes with sub-polynomial query complexity

Here are notes (source, original paper) from a talk by Noga Ron-Zewi.

Abstract:

Locally-correctable codes (LCCs) and locally-testable codes (LTCs) are special families of error-correcting codes that admit extremely efficient sub-linear time algorithms for error correction and detection, respectively, while making a few queries to the received word. These codes were originally studied in complexity theory because of their close relationship to program checking and probabilistically-checkable proofs (PCPs), but subsequently they were extensively studied for their own sake.

In this work, we construct the first locally-correctable codes and locally-testable codes with constant rate, constant relative distance, and sub-polynomial query complexity and running time. Specifically, we show that there exist binary LCCs and LTCs with block length n, constant rate (which can even be taken arbitrarily close to 1), constant relative distance, and query complexity and running time $\exp(\sqrt{\log n})$. Previously such codes were known to exist only with query complexity and running time $n^{\beta}$ (for constant $\beta > 0$), and there were several, quite different, constructions known.

Along the way, we also construct LCCs and LTCs over large (but constant) size alphabets, with the same query complexity and running time $\exp(\sqrt{\log n})$, which additionally have the property of approaching the Singleton bound: they have almost the best-possible relationship between their rate and distance. This has the surprising consequence that asking for a large alphabet error-correcting code to further be an LCC or LTC does not require any sacrifice in terms of rate and distance! Such a result was previously not known for any sub-linear query complexity.

If time permits I will also discuss a very recent work that substantially improves the query complexity of LTCs to a quasi-polylogarithmic function of the form $(\log n)^{O(\log \log n)}$.

Joint work with Swastik Kopparty, Or Meir, and Shubhangi Saraf.
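
For intuition, the simplest locally-correctable code is the Hadamard code: it can be corrected with just 2 queries, but its rate is exponentially small, which is exactly the trade-off the construction above avoids. A minimal sketch (my own illustration, not from the talk):

```python
import random

random.seed(0)

def to_bits(y, k):
    return [(y >> i) & 1 for i in range(k)]

def hadamard_encode(x):
    # Hadamard code: one codeword bit <x, y> mod 2 for every y in {0,1}^k,
    # so a k-bit message becomes a 2^k-bit codeword (exponentially small rate).
    k = len(x)
    return [sum(xi & yi for xi, yi in zip(x, to_bits(y, k))) % 2
            for y in range(2 ** k)]

def locally_correct(w, y, k):
    # 2-query local correction: c(r) xor c(r xor y) = <x, y> for every r,
    # and each of the two query points is uniformly distributed, so a small
    # fraction of corruptions is unlikely to be hit.
    r = random.randrange(2 ** k)
    return w[r] ^ w[r ^ y]

x = [1, 0, 1, 1]
k = len(x)
w = hadamard_encode(x)
w[3] ^= 1                       # corrupt one of the 16 codeword positions
y = 5
votes = [locally_correct(w, y, k) for _ in range(101)]
print(max(set(votes), key=votes.count))   # majority vote recovers bit <x, y>
```

Each trial is wrong only if one of its two random queries lands on the corrupted position, so the majority vote is correct with overwhelming probability.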

Posted by: holdenlee | July 18, 2015

## Inside Out

I saw Inside Out recently and I loved it!

Some ramblings.

I’m fascinated with the whole “society of mind” idea – the idea that what constitutes our consciousness or our intelligence isn’t just one central entity (oneness is an illusion) – but rather a large collection of agents – feelings, methods of reasoning, etc. – cooperating/competing in our minds. To turn this into a story – to portray all these agents in a person’s mind as characters – would be amazing but also very difficult – how could one get this across without being either overly pedantic or abstract, or too simplistic? Inside Out really strikes a sweet spot here.

Posted by: holdenlee | July 12, 2015

## Namba.elm: A fast-paced arithmetic game

Play me at http://bit.ly/playnamba.

Source code is available at http://www.github.com/holdenlee/Namba. This is the minimal playable version (v0.1) – leave me some comments so I can make the next version better.

Detailed instructions (plus thoughts on making the game, and directions to go from here) follow. But see if you can figure it out by yourself first.

If you get a score >10, let me know in the comments. :)

Posted by: holdenlee | July 10, 2015

## FCRC talks

I was at FCRC (the CS conference conglomerate that happens once every 4 years), June 13-19. Here are some of the talks I found particularly memorable.

Personal notes on FCRC talks are at https://workflowy.com/s/wkI79JfN0N and on STOC/CCC/EC talks (very rough) are at https://dl.dropboxusercontent.com/u/27883775/wiki/math/pdfs/stoc.pdf. Note that neither has been edited.

FCRC award/plenary talks

• Turing award lecture (The Land sharks are on the squawk box), Michael Stonebraker (https://www.youtube.com/watch?v=BbGeKi6T6QI): Stonebraker draws parallels between his work building Postgres, a relational database system, and his cross-country bike trip. He describes numerous challenges and how they were overcome in both situations, concluding that both were about “making it happen”.
• Interdisciplinary research from the view of theory, Andrew Yao: Yao comes from the viewpoint of theoretical computer science but has since worked on connections to physics (quantum computation), economics (auction theory), and cryptography (certifiable randomness). Theoretical computer science started with computability and complexity theory, but has since grown to involve algorithms and data structures, information theory, combinatorics, cryptography, quantum computing, bio-informatics, game theory and mechanism design… Yao argues that interdisciplinary research not only impacts the development of other disciplines but also has a transformational effect on the nature of CS research. He distinguishes between surface-level interdisciplinarity – e.g., outsourcing a computational problem to CS researchers – and core-to-core interaction – where researchers cooperate not just to solve a problem at the computational level but also at the conceptual level. The trend is increasingly that computer scientists are becoming specialists in different fields on a part-time basis; he is optimistic about a return to the “Renaissance man.”
• Hardware neural nets: from inflated expectations to a plateau of productivity, Olivier Temam: Temam refers to the Gartner “hype cycle” describing reactions to innovation, consisting of
• peak of inflated expectations
• trough of disillusionment
• slope of enlightenment (when people understand what it’s useful for)
• plateau of productivity

This happened with neural nets, which were initially hyped as “emulating the brain in hardware”; now the focus is on the modest goals of doing well in concrete tasks like classifying images, which we have made a lot of progress in. (There is a “bimodal” response to the idea of “emulating the brain in hardware”: bullshit vs. awesome. People should be able to engage in more level-headed conversation without fearing for their careers.) Three communities need to work together on neural nets: neuroscience – to understand what the brain is doing, computer architecture – to implement hardware, and machine learning/algorithms – to translate brain functionality into algorithms to pass on to the architects, the “middleman” in this game.

• The F# path to relaxation, Donald Syme: Syme gives his perspective on designing and implementing a programming language (F#). F# is notable in that it is a functional language that incorporates features from object-oriented programming (OOP). People can often be polarized toward different programming language paradigms (e.g., functional vs. OOP). But rather than create opposing camps, it’s good to look at the advantages of each and see if there’s a way to include all those advantages (which Syme calls “relaxation” or “synthesis from contradiction”) in as simple a way as possible.

STOC (Symposium on the Theory of Computing)

• Forrelation, Scott Aaronson, Andris Ambainis: Prove that a certain problem, forrelation (Fourier correlation), gives (essentially) the largest possible separation between quantum and classical query complexities (how many queries it would take to solve the problem if the queries are classical vs. quantum). http://dl.acm.org/authorize.cfm?key=N98526
• Super-resolution, extremal functions and the condition number of Vandermonde matrices, Ankur Moitra: Investigates the very general question: when can we recover fine-grained structure from coarse-grained measurements (called “super-resolution”)? He finds there is a sharp threshold for the cutoff frequency (we’re allowed to make measurements below this frequency): above it, the structure can be recovered even with $\frac1{\text{poly}}$ noise, and below it, the structure cannot be recovered even with exponentially small noise. The analysis involves calculating the condition number of a Vandermonde matrix (and surprisingly, involves the Beurling-Selberg majorant, which has traditionally been used in analytic number theory, e.g., to count primes in intervals). More generally, many problems about well-conditioning turn out to be much nicer when studied over the complex numbers. For example, the Discrete Fourier Transform is a particularly well-conditioned Vandermonde matrix. Then tools from complex analysis naturally come into play. There may be more opportunities to apply complex analysis to problems in machine learning/signal processing/statistics. http://dl.acm.org/authorize.cfm?key=N98679
• Knuth Prize lecture, Laszlo Babai: Among other things, Babai talks about the many surprising applications of group theory to theoretical computer science, including the following:
• “Width-5 branching programs can count” because $A_5$ is not solvable (https://people.cs.umass.edu/~barring/publications/bwbp.pdf).
• The idea of a semidirect product inspires the zig-zag product of graphs used to construct explicit expanders for derandomization (SL=L). (http://arxiv.org/pdf/math/0406038.pdf)
• The automorphism group of a graph has consequences for its structure: vertex transitivity implies facts about perfect matchings, cycle lengths… possibly Hamiltonicity.
• A nontrivial monotone boolean function invariant under a transitive permutation group conjecturally has the maximum decision tree complexity (is “evasive”).
• What is the complexity of questions about groups (specific classes of group (non)membership questions, etc.)? They have appeared as candidate separations for complexity classes.
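
Moitra’s point that well-conditioning is nicer over the complex numbers can be checked numerically: a Vandermonde matrix on equispaced real nodes is badly conditioned, while the one on the roots of unity (the DFT matrix) is perfectly conditioned. A quick illustration (my own choice of nodes and sizes, not from the talk):

```python
import numpy as np

def vandermonde(nodes):
    # rows are [1, x, x^2, ..., x^(n-1)] for each node x
    n = len(nodes)
    return np.vander(nodes, n, increasing=True)

n = 12
real_nodes = np.linspace(0.0, 1.0, n)              # equispaced real nodes
dft_nodes = np.exp(2j * np.pi * np.arange(n) / n)  # n-th roots of unity

print(np.linalg.cond(vandermonde(real_nodes)))  # astronomically large
print(np.linalg.cond(vandermonde(dft_nodes)))   # ~1: the DFT matrix is
                                                # a scalar times a unitary
```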

EC (Economics and computation)

I went to just the award lectures for EC.

• Algorithms for strategic agents (Matthew Weinberg): One big difference between computer science and economics is that:
• in CS (algorithm design), we assume that the inputs to the algorithm are reported correctly.
• in economics (mechanism design), we have to deal with the possibilities that people will lie to take advantage of the system. More precisely, the inputs – ex. people’s bids in an auction, or preferences in a lottery – are given by agents acting in their self-interest, i.e., to maximize their own utility.

Not only do you have to design an algorithm that gives the correct answer, you also have to design it in such a way that people can’t take advantage of it by lying about their values or rankings. A natural question from a TCS perspective, which Weinberg explores, is whether there is a black-box reduction from certain classes of algorithms to corresponding mechanisms.
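
The textbook example of this gap is the second-price (Vickrey) auction, where the winner pays the second-highest bid: reporting your true value is a weakly dominant strategy, so the mechanism can trust its inputs. A quick check (my own illustration, not from Weinberg’s talk):

```python
import random

def utility(my_bid, my_value, other_bid):
    # the higher bid wins the item and pays the second-highest bid
    return my_value - other_bid if my_bid > other_bid else 0.0

rng = random.Random(0)
my_value = 10.0
for _ in range(10_000):
    other = rng.uniform(0, 20)
    deviation = rng.uniform(0, 20)   # any alternative (untruthful) bid
    # bidding your true value never does worse than any deviation
    assert utility(my_value, my_value, other) >= utility(deviation, my_value, other)
print("truthful bidding never did worse than any deviation")
```

Intuitively, your bid only decides *whether* you win, never *what you pay*, so misreporting can only cost you a profitable win or buy you an unprofitable one.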

• Test of Time talk (influential paper from 10-25 years ago): The social cost of cheap pseudonyms, Eric Friedman, Paul Resnick: The Internet allows cheap pseudonymity. At the dawn of the Internet age, Friedman and Resnick established that there is a definite economic price for pseudonymity. They first model pseudonymity through a repeated prisoner’s dilemma game where players are allowed to leave and come back in with new identities. This model, however, gives rise to strange equilibria, and little can be said. The authors then add the realistic assumption of “trembling,” i.e., accidentally defecting when you attempt to cooperate (e.g. a package you send through eBay gets lost in the mail). One can then prove quite a few things about this simple model. For example, it’s natural for old-timers with good reputations to be occasionally hostile toward newcomers (which they call “paying your dues”).
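
The effect of trembling is easy to see in simulation: under a grim-trigger-style strategy, even a small tremble probability eventually triggers permanent punishment and destroys the cooperative payoff. (The payoffs and the grim-trigger strategy here are my own illustrative choices, not taken from the Friedman-Resnick paper.)

```python
import random

# Repeated prisoner's dilemma with "trembles": an intended Cooperate
# flips to Defect with probability eps.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def avg_payoff(eps, rounds=200, trials=500, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        punishing = False               # grim trigger: defect forever after any D
        for _ in range(rounds):
            moves = []
            for _ in range(2):
                m = 'D' if punishing else 'C'
                if m == 'C' and rng.random() < eps:
                    m = 'D'             # tremble
                moves.append(m)
            total += PAYOFF[tuple(moves)]   # player 1's payoff (setup is symmetric)
            if 'D' in moves:
                punishing = True
    return total / (trials * rounds)

print(avg_payoff(0.0))   # exactly 3.0: stable cooperation without trembles
print(avg_payoff(0.05))  # far lower: one tremble triggers permanent punishment
```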
Posted by: holdenlee | May 1, 2015

## Efficiently learning Ising models

In his paper, Guy Bresler gives a new algorithm for learning Ising models. I presented this paper in Sanjeev Arora’s class on intractability in machine learning. Here are the notes from the talk.

As always, the source is available on github.
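
For readers unfamiliar with the setup: an Ising model puts ±1 spins on the vertices of a graph, with neighboring spins correlated through a coupling parameter. Below is a minimal Gibbs sampler for a ferromagnetic chain (parameters are mine, for illustration); Bresler’s algorithm tackles the harder inverse problem of learning the graph from samples like these.

```python
import math, random

def gibbs_sample(n=10, beta=0.5, sweeps=500, seed=0):
    # Gibbs sampling for the Ising model on a path of n spins
    # with uniform ferromagnetic coupling beta > 0.
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            # sum of the neighboring spins on the chain
            h = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0)
            # conditional law: P(s_i = +1 | rest) = e^{beta h} / (e^{beta h} + e^{-beta h})
            p_up = 1.0 / (1.0 + math.exp(-2 * beta * h))
            s[i] = 1 if rng.random() < p_up else -1
    return s

# neighboring spins are positively correlated under ferromagnetic coupling
samples = [gibbs_sample(seed=t) for t in range(300)]
corr = sum(s[0] * s[1] for s in samples) / len(samples)
print(corr)   # empirical estimate of E[s_0 s_1], positive for beta > 0
```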

Posted by: holdenlee | March 23, 2015

## The Grasshopper King, Jordan Ellenberg

Few books capture academia in a way that feels true – it’s easy to caricature professors into one or the other end of the spectrum: mad scientists or senile old men. The great majority of people in academia spend much of their lives adding to a small branch of knowledge. It’s hard to make them fit into any kind of dramatic story, so it’s natural to either exaggerate their contribution or make them bemoan their dusty lives wasting away.

The Grasshopper King does none of these things. It is a satire, but one that really understands academia – its author, after all, is the well-known mathematician Jordan Ellenberg. Maybe one way to describe it is as a coming-of-age in the academic world. The story takes place at Chandler State University, a no-name university that is catapulted to fame by a professor in an obscure department.

Posted by: holdenlee | March 22, 2015

## Notes on writing about books

To come soon: notes on The Grasshopper King (Jordan Ellenberg), A Passage to India (E. M. Forster).

But first, a little note, on what I’m doing with these posts on books. For most of my life, I’ve read lots of books, many of which I’ve really enjoyed but remembered very little about, and am hardly able to talk about them at all. Much in the spirit of Socrates’s quote “the unexamined life is not worth living,” I’m deciding to take notes on books I read and write a little about each one that I feel I’ve enjoyed. This means that each book may take me about twice as long to read, but it feels worth it to me. I may not be so detailed in the future, but I do want to capture my thoughts for each book. (I listened to Moonwalking with Einstein recently, which raises these questions in a much deeper way – but that’s another post.)

There are two halves to this note-taking and writing. One part is (a) simply recording summaries and quotes (perhaps with brief notes) indexed into the book, and the other is (b) a post on my thoughts.

Firstly, making chapter/page notes on workflowy (a) will hopefully make everything that was significant to me in the book easily indexable. My secret hope is that other people will do this, and there will be a whole library of these “indexed summaries/quotes/impressions” online, so that we can much more easily draw upon ideas, feelings, and quotes from books (or movies, etc.) in our conversations, as well as compare different people’s responses side by side. (This is my “backchannel” idea, adapted to books – I would love for such a site to exist. It would be like opening up a book and finding that the previous reader had left a bunch of post-it notes inside on their thoughts – but this for everyone who logs their thoughts on the site.) Notes aren’t meant to be a replacement for the book itself, more like a way to remember it without rereading it. I don’t think this is a prevalent idea at all – there are terse summaries of books, and there is the book itself, and there are literary essays which are a bit detached, but nothing in the middle.

(b) My posts aren’t so much reviews – they’ll pretty much spoil the most important parts of the plot – but rather me trying to get to the essence of what the book is saying to me, and how I think about it. There’ll be lots of quotes, because nothing captures the essence of a book better than quotes from the actual books. They’re not going to be like literary essays. Books, I think, are meant to be taken in the context of life, so I’ll write about what it suggests about love, faith, death, whatever.

How to read these posts? Two ways, I think. One is to read the book yourself, and then read my post to see what someone else thought and remembered about it, to compare with your own thoughts. Another is to say, I don’t have time to read this book, so I’m just going to read this post. There are many more books than I have time to read, and it seems a winning proposition for people to write down these essences of books to communicate them.

Importantly, the posts aren’t a factual summary (which by itself would be hard to remember anyway). To me, the more important thing is to try and capture what the book meant to me, the thoughts and questions that it brought up in my head, the kind of things I would want to talk about in a discussion on the book. The filler material between the quotes from the book will be my own interpretation of things, which should be taken to be “in the world of the book as it exists in impressions in my head.” One thing that keeps people from talking about books more, I think, is that English class encourages a certain kind of literary analysis, giving the impression that intelligent discussion has to take that form, when it can, and should, I think, simply start with one’s feelings about the book. Books make us all feel, therefore we can talk about them.

Anyway, if all works out I’ll keep writing these. I’d love to have discussions about this process and how these posts/notes might work better! It’s working for me so far, but I don’t know if anyone else finds them useful in the way I describe.

Posted by: holdenlee | March 21, 2015

## Justine, Lawrence Durrell

It’s a book about the contradictions of love, but unlike any I’ve seen before. The prose is exquisite, wounding, some of the most powerful I’ve seen. It feels impossible to put any sort of frame around it. The story shifts forwards and back in time (in the narrator’s words, the events are “not in the order in which they took place — for that is history — but in the order in which they first became significant for me”) so the plot can be hard to follow, but read it for the prose. It speaks for itself, so I’ve compiled a long list of quotes/notes. I won’t reach any kind of conclusion, but rather try to order my thoughts about a very chaotic novel, try to fold the events in the story together, to see the beginnings and the ends brought together through the intervening pages. Nah, actually, this post is more like throwing a bunch of quotes onto the page. They speak too strongly for themselves.

A small group of people, perambulating, interact with each other in Alexandria. The city is a character; it influences their actions and weighs heavily on their minds. The narrator and his lover Justine try to explain what they desire, explain the contours of love, justify their own actions, find some pattern in the way their feelings shift. In these attempts they form hundreds of absurd but striking hypotheses that form the patchwork of the novel – because the most powerful quotes about love are not those that reach a sensible conclusion, but those utterances born out of passion, despair, confusion, and false enlightenment, a desire to make sense of things.