This is the story that I tell when people ask me how I got into Internet law. Like all stories of its kind, it’s a polished and refracted version of what actually happened. Still, for all the misremembered details and conscious omissions, it bears a resemblance to the truth.
Late in the summer of 1999, armed with a bachelor’s degree in computer science, a working knowledge of some programming languages, and an exceptional degree of self-confidence, I set out to conquer the world. I took a job as a software engineer at Microsoft Research, in a compiler-tools group I had interned with during college. My plan was to stay for a few years, identify some problems I wanted to solve, and then either stay in the software industry or go back to get a PhD in computer science.
To quote Colson Whitehead:
I could do it. It was going to be a great year. I was sure of it. Isn’t it funny? The way the mind works?
The group I joined was reorganized out of existence that fall; my new manager was a literal and metaphorical pointy-haired boss. I spent the winter discovering what being miserable at work felt like, and I spent the spring fighting corporate bureaucracy to be allowed to transfer. I had to go up three levels and tell the vice president of Microsoft Research that I’d quit if they tried to make me stay in a group I hadn’t signed up for.
I wound up moving in mid-2000 to a product group, doing XML internals for a new web-based office suite called NetDocs. The working conditions were much better: I had an exemplary manager and some wonderful colleagues. But again, entropy won. In late spring of 2001, NetDocs became the most expensive cancelled project in Microsoft history. (If you know much about the history of the web stack, you’ll understand why it was an idea fatally ahead of its time.)
At any other time in the last few decades, I might have stayed in the tech industry. But this was the low point following the dot-com crash, and things seemed equally bleak everywhere else. It wasn’t just that there weren’t many jobs; it was that no one seemed to be doing anything interesting, anything worthwhile. (If you know some Internet history, you’ll understand how hilariously wrong I was.)
With no obvious lifeboat to jump ship for, I was thrown back on myself, and I came to realize that software development wasn’t for me. Programming had been a stimulating part-time vocation in college, but it was a tedious slog as a full-time job. I liked everyone on my team, but I hated working as part of one. One of my good friends has described her ideal job as sitting in a room by herself, being passed puzzles through a slot in the door, solving them, and passing them back out through the slot. I had thought that was what programming was—and for me, it wasn’t.
I resigned in the summer of 2001. I wasn’t pushed; I jumped. I knew that I could stay employed as long as I wanted to. I could write code that worked well enough, get good-enough performance reviews, and feel happy enough about coming in to work. The problem was that I could see how my motivation was already draining away. I didn’t hate my job yet, but I knew that I’d hate it in ten years, and it was better to get out now than to wait.
Now we need to rewind a few years, because even as my career as a technologist was undergoing an uncontrolled descent, something else was rising to take its place. My grandmother died in December of 1999, and thus, very close to the darkest day of the year, I took a red-eye to be home for the funeral. If it was not quite the lowest point of my life, it was a local minimum. But when we hit our lowest point, we are open to the greatest change.
On the flight home, I read Lawrence Lessig’s Code: And Other Laws of Cyberspace. It was, and remains, the most revelatory book I have ever read, and my entire academic career consists of a series of footnotes to Lessig. But even putting aside how it changed my life by teaching me about Internet policy, it changed my life by showing me that a law professor understood computers better than anyone I knew did. Even from his outsider’s perspective, Lessig cut to the heart of how the Internet worked, and how it could change under legal pressure. Lawyers had access to an entirely different source of knowledge about the technologies I’d devoted myself to studying.
In fact, I soon discovered, the lawyerly way of seeing the world was congenial to the way my brain worked. I had thought that law and legalese were brain-numbingly boring, equal parts superficial rhetoric and bad theatrics. But every time I read an actual legal text (and there were plenty in those heady days of U.S. v. Microsoft, Napster, and DeCSS), I found it logical and persuasive. There was something surprisingly familiar to me about how judges approached a problem. As I read more, I started to wonder whether “law professor” was shorthand for a job where you got paid to write about interesting things.
I also did my due diligence about the road to get there. I asked my friends in law school about the experience, and I snuck peeks at their books when they weren’t looking. Then I found a cheap copy of the Dukeminier property casebook at a used bookstore, and I was hooked: here was an intricate and sometimes elegant system of rules that structured the entire world around me. (It stuck: Property is my core first-year subject, and I’ve written extensively in the field.) By the time I took the LSAT and applied to law school, I had a pretty good sense of what to expect.
I also started blogging in May 2000, and I found that writing for the blog—even just for the small group of friends and random Internet acquaintances who read it—filled a part of me I hadn’t realized was hollow. On evenings and weekends, and even some days in hours stolen at work, I wrote and wrote and wrote. Publishing a post gave me a sense of satisfaction that checking in code notably didn’t. My early law blogging was terrible (I keep those posts online out of honesty, not out of pride), but it felt meaningful.
I didn’t go immediately to law school after quitting Microsoft. I spent a year back in Boston, working part-time at Harvard (I wrote reports on why study abroad was broken and why moving the sciences to Allston would be a logistical disaster) and trying my hand at being a freelance writer (I failed miserably). But even if I was supposedly keeping my options open, I had a strong sense that law was where I was meant to be. When I was accepted to Yale—infamous for training future law professors—it was an easy decision to enroll.
When I arrived in law school, I told myself that if I didn’t become a legal academic, I’d go back to programming rather than practice law. Fortunately, that was a choice I didn’t have to make. This is my nineteenth year as a law professor studying and teaching Internet law. My job as a Cornell Law School faculty member working at Cornell Tech is exactly halfway in between technology and law. It’s a kind of position that didn’t exist when I started teaching law, and it feels as though it was created specifically for me. I cannot imagine anything better.
I’ve been extraordinarily fortunate in my career. But it started with a terrible mistake. I’m able to do what I do today only because I was able to accept that I wasn’t who I thought I was twenty-five years ago.