Sometimes I tell people that I'm a theoretical computer scientist. If they haven't yet found some sort of excuse to go to the bathroom before I take my next breath, the next question is often an exasperated, "What does that even mean?"

I tell them that it's sort of like math except that instead of thinking about numbers, I think about computers. When that doesn't work, I usually just say that I spend most of my time staring at a mostly-empty piece of paper.

Still, for unknown reasons, this answer leaves some unsatisfied. To help you understand the theoretical computer scientist, I have pigeon-holed and stereotyped them so that you too can take part in the fun of misrepresenting huge classes of people.

There are many flavors of theoretical computer scientist...

**Cryptographers**

One particularly coveted subset of theorists is the cryptographers. I've always been jealous of the cryptographers. These are the people who study encryption, codes, and security. You, dear reader, probably don't need me to explain why such people are useful. Not a big fan of your credit card information getting stolen on the internet? Boom. Let's just put some more cryptographers on the case. Their relevance is immediate. This is when the jealousy sets in.

There is a drawback to being a cryptographer, though. I imagine it's the same sort of drawback the Hulk would experience in an anger management class. Much like how the Hulk can't control his impulse for aggression, a cryptographer can't help but deal in secrets. Wanna know what some cryptographers are currently working on? My guess is that you won't find out until they publish their results. The consequence of course is that you never really know if somebody is working on the same project that you are. Anybody out there up for some research roulette?

**Algorithmists**

Then, of course, you have the theorists that work on finding new cool algorithms for computational problems. Describing what an algorithms theorist does is still pretty straightforward. An algorithm is just a sequence of instructions for a computer that will perform some (hopefully useful) task. Furthermore, because algorithmists want you to know that they "don't have all day here," they stipulate that all algorithms must finish after some finite amount of time. For instance, you can have an algorithm that takes in a list of numbers and outputs them in sorted order. Turns out there are clever ways to do this quickly. In fact, one of these fast algorithms is called "quick sort"---algorithmists, you now see, are often cited for their wordplay.
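To give you a taste, here's a minimal sketch of quick sort in Python (a simplified textbook version, not a tuned library implementation): pick a "pivot" element, split the rest of the list into the smaller and larger halves, and sort each half the same way.

```python
def quick_sort(nums):
    """Sort a list of numbers with the classic divide-and-conquer trick."""
    if len(nums) <= 1:
        return nums  # a list of 0 or 1 elements is already sorted
    pivot = nums[0]
    # Split the remaining numbers around the pivot...
    smaller = [n for n in nums[1:] if n < pivot]
    larger = [n for n in nums[1:] if n >= pivot]
    # ...then recursively sort each side and glue everything back together.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```

On a typical list, each round of splitting roughly halves the problem, which is what makes this "quick" in the technical sense.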

Sometimes I feel sorry for the algorithmists. For one, I sympathize with them because, like me, they are also not cryptographers. I also feel sorry for them, though, because in their pursuit of algorithmic nirvana, they have inadvertently forgotten the English language. In particular, the word "efficient" seems to give the algorithmist much difficulty. Unfortunately, just because an algorithm is "efficient" in the technical sense doesn't mean that when you try to execute it for realz it'll actually accomplish its task quickly.

As much as I hate to admit it, there are good reasons for this, and the algorithmists are actually working on bridging this theory-vs-practice divide in many cases. Still, if an algorithmist comes up to you in the street trying to sell you their mixtape of efficient algorithms, I would advise you to be skeptical.

**Complexity Theorists**

Finally, I bring you the crown jewel of esoterica, the field in which I work: complexity theory. Complexity theorists have it rough. Complexity theorists are like your drunk uncle telling you that your life will never amount to anything.

Woah. Okay. Before I can explain that analogy, we have to discuss a few fundamental things about computers. (Remember how I told you I'm jealous of the cryptographers?) Er... anyways.

There are two main "resources" that a computer needs, at least in the abstract: time and space. Time is pretty straightforward. Some problems are really hard to solve. In order to solve them, a computer might need to take a really long time. Similarly, it might not be obvious at first, but some computers also need a lot of space (aka "memory") in which to store their intermediate calculations. Indeed, there is often an inherent tradeoff. If I give you more space, you can solve the problem in less time. If I give you less space, it might take more time or you might not even be able to solve it at all! Complexity theorists spend their time classifying various computational problems by the amount of time and space you need in order to solve them.
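To make that tradeoff concrete, here's a toy illustration (my own example, not anything a complexity theorist would dignify as research): computing Fibonacci numbers using almost no extra space takes exponential time, while spending memory on a table of intermediate results makes it fast.

```python
def fib_slow(n):
    # Uses barely any space, but redoes the same subproblems over and over,
    # so the running time blows up exponentially in n.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n, table=None):
    # Spends extra space on a table of intermediate results, so each
    # value is computed only once: time grows only linearly in n.
    if table is None:
        table = {}
    if n < 2:
        return n
    if n not in table:
        table[n] = fib_fast(n - 1, table) + fib_fast(n - 2, table)
    return table[n]

print(fib_fast(50))  # → 12586269025 (don't try fib_slow(50) before lunch)
```

Same problem, same answers; the only difference is how you trade memory for time.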

How might this be useful? Suppose you're trying to develop some algorithm to solve some computational problem, but each time you design a new algorithm you find that it's still really slow and terrible. You might go up to a complexity theorist and say "Heyo, there! I'm trying to find a cool algorithm that'll solve my problem really quickly. I can't find a good algorithm yet, but I'm full of hope and optimism." The complexity theorist may then respond "Give up now. There is no hope. Your problem falls into this really hard class of problems. You'll never solve it quickly." So now you know where the "your life will never amount to anything" part of the analogy comes from.

Erm... but what about the "drunk uncle" part? This is where I have to make a confession, dear reader. In general, complexity theorists can't actually prove that a lot of the problems we claim are difficult to solve are, in fact, difficult to solve at all.

*Shhhh. You are now part of a secret academic cabal. Welcome to the great conspiracy.*

Okay. So we don't just sit around twiddling our thumbs all day either (though that is a significant fraction of it). Indeed, complexity theorists still classify problems based on how difficult they are to solve, but sometimes we have to settle for "guesses" (or if you'd like to sound pretentious, then you could call them "conjectures") about the true difficulty of those classes.

But don't despair: there are still interesting things we can prove about these classes of problems. Even if we often can't show that your favorite problem necessarily requires a lot of time to solve, we can show that it is very tightly linked to many other problems. For instance, it turns out that if you found a fast algorithm to determine the winner of a game of chess from an arbitrary starting position, then you could also find a fast algorithm that would tell you how the amino acids in your body combine to make proteins. Flipping this around, if you could prove that the amino acid problem was hard, then you would instantly know that the chess problem was hard, too. Cool.

Anyways... as you would hope from a piece that gives sweeping generalizations about a diverse group of people, I've left out some other important groups of theorists that are not cleanly described as cryptographers, algorithmists, or complexity theorists. Sorry?

So what, at least, do all these theorists have in common? Well, every year the theory department at MIT takes a trip to some remote area where we can hang out, see nature, and be nerdy together. One night my first year, we were hanging out around the fire. Do you know how long you can melt glass in a fire so that you can still extract it in one piece?

Er... perhaps this story is best left for another blog.

Let's just say that as theorists we are always pretty convinced that we should perform "just one more" test to make sure our conjecture still holds true.