Author Q&A
Q: What inspired you to write a memoir about your relationship with technology?
A: After I stopped programming full-time, I wrote for several years about computer issues across the spectrum: artificial intelligence, big data, computer science, data privacy, even my own responsibility for the “Somebody is typing” messages in chat programs. I wanted to discuss computers more generally, to try to explain what they are doing to us.
I was part of the first generation that grew up with home computers. I fell in love with computers when I was a little kid and learned how to program when I was 6 or 7 years old. The web started to explode while I was in college studying computer science. After graduation, I worked at Microsoft during its middle age in the late 1990s, then joined Google during its massive growth in the 2000s.
In a sense, the memoir is more about my experiencing than my doing. Being in these incubators for the technology that now surrounds us 24 hours a day forced me to grapple with their implications and consequences a bit earlier than people who hadn’t spent a third of their lives staring first into a cathode ray tube and then into a flat-panel screen. There are some crazy stories from my time in software engineering, like the instant messaging war between Microsoft and AOL, but I hope that I’ve also fit these stories into a bigger picture about how our lives have become computationalized.
And if you read the book, it’s obvious I feel ambivalent about how computers now condition and guide our lives. I kept returning to literature and the humanities as a counterweight to the rigid data structures of computers, and eventually I saw my way through to some reconciliation between those two seemingly divergent impulses.
That’s what I hoped to achieve with the book. By explaining how I fit life and computation together, I hoped to illuminate how the world at large is fitting life and computation together—for better and for worse.
There’s an entire world from my childhood in the 1980s, an entire feeling of experience, that is just gone now, a way in which one could be apart from the world. Peter Laslett wrote a book about pre-industrial life in England, The World We Have Lost. We are currently in a process of losing and gaining ways of life because of computers. So a memoir seemed appropriate to mourn what is lost, memorialize what I hope can be preserved, and assess what we are gaining, both good and bad.
Q: Why were early text-based computer games like Trinity from the 1980s so important to you as you were growing up? How were those early games different from popular role-playing games today?
A: Games like Trinity and Zork and Fish! were the closest thing to literature that computers had back then. The prose could be creaky sometimes—I’m not going to put them up against Virginia Woolf or Ralph Ellison—but they possessed a genuinely new way of engaging with a fictional story. You played a character, you typed in an English sentence describing what you wanted the character to do, and the game attempted to understand it. It was very different from a Choose Your Own Adventure book because of the far greater degrees of freedom. Trinity and many of the other best games presented imagined worlds that you could investigate and piece together.
Role-playing games, both then and now, also provided digital worlds to explore, but they were far more focused on quantitative metrics of achievement: creating characters with particular skills, finding the best loot in order to bash monsters harder, and leveling up to make your characters more powerful for the next, harder challenges.
I was never into that side of things as much. I liked the world itself, and too many of the RPGs offered carbon-copy fantasy worlds derived from Tolkien. Text adventures were far more varied in setting and tone. I’m being a little unfair to RPGs. Especially in the last 20 years, there have been some ingenious and febrile RPGs (Planescape: Torment is probably the most praised, deservedly so) that created great worlds, but they’re fewer and farther between than I’d hope. They tend not to make money, though they garner intense cult followings.
Q: Despite the advancement of technology and the ways we use it to translate our human experiences, you note that we’ve actually become “standardized” on the Internet: through emojis, monosyllables, or Facebook’s six reactions. Do you see this as a growing trend as we progress technologically, and should we be cautious of it moving forward? Are algorithms coarsening our lives?
A: Algorithms do not create meaningful classifications. We provide the classifications, and computers divide life up into them. Computers do not cope with ambiguity or nuance, nor can they make exceptions. If the census lists five options for racial classification, a computer can’t add a sixth: everyone must be slotted into one or another. The same goes for emotional reactions.
The simplicity of these schemes is great for computers, but also for companies, because it makes analyzing the data much easier. If I post an article, a comment thread of three hundred comments is difficult for an algorithm to understand. On the other hand, the count of Likes, Wows, and Hahas it receives gives computers tractable data that can be used to promote and suggest content. All this is deadly for complexity and diversity. Because the strength of online sentiment is now measured in numbers, broad and universal indicators tend to win out over diverse and subtle ones.
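To make the asymmetry concrete, here is a minimal sketch in Python with invented reactions, comments, and word lists: tallying discrete reactions is trivial aggregation, while a comment thread has to be crushed into numbers before a computer can use it at all.

```python
from collections import Counter

# Invented reaction data for a single post: discrete labels are trivially
# countable, comparable, and rankable across millions of posts.
reactions = ["Like", "Like", "Wow", "Haha", "Like", "Wow"]
print(Counter(reactions))  # Counter({'Like': 3, 'Wow': 2, 'Haha': 1})

# A comment thread is free text. Before an algorithm can use it, the text
# must be reduced to numbers, and every reduction discards nuance.
comments = [
    "This moved me, though I disagree with the conclusion.",
    "Funny, but also kind of sad?",
]
POSITIVE = {"moved", "funny"}   # toy word lists invented for the example,
NEGATIVE = {"disagree", "sad"}  # not a real sentiment lexicon
for comment in comments:
    words = {w.strip(".,?!").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    print(score, "<-", comment)  # both ambivalent comments collapse to 0
```

Both ambivalent comments score zero here, indistinguishable from neutral: exactly the kind of flattening the count-based metrics perform at scale.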
There is a great quote from the writer Ottessa Moshfegh: “I feel so bad for the millennials. God, they just had their universe handed to them in hashtags.” I think that’s exactly right. The way I see it, it’s not the specific hashtags that bind your existence. It’s that we’re quantifying our lives in hashtags to begin with. Once you’re looking for these overarching labels to describe your existence—personality types, political affiliations, mental illnesses, whatever—you gear yourself toward thinking in reductive terms, and you’re likely to join up with labels to which you feel sympathy. It’s your universe. There’s no option for being in between two discrete labels—at least, not if you want to be counted by a computer.
Q: How is it that computer data can reflect our biases and prejudices? Is this something you had to address when working at Google and Microsoft?
A: Facebook currently thinks I’m African-American, as far as advertising targeting is concerned. I’m not sure why, but I suspect there is some element of stereotypical bias in there. That is, I display some trait or taste that Facebook stereotypically associates with being African-American. Last year, Facebook thought I was Asian-American. I point this out because even if these classifications were accurate, they would still be based in de facto stereotyping.
There’s a fallacy that humans are fallible while algorithms are unbiased. This couldn’t be further from the truth. Computers adopt the classifications we give them. If you give them loaded classifications, they produce loaded results. If Facebook presents ads to consumers along a strict gender divide, as much online advertising does today, those categories will tend to reinforce themselves because you are reifying the split. If political consultants market campaign ads or propaganda to people most likely to believe that material, they will exacerbate political divisions, something we’re also seeing today. Because these systems function as large-scale feedback loops, the polarization and bias feed off themselves. People like to see their biases reflected back at them, computers are happy to oblige, and companies are happy to profit.
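A toy model of that feedback loop, with made-up numbers and no resemblance to any real ad system, shows how a mild preference gets reified into near-certainty once serving and measurement chase each other:

```python
# A sketch of a serve-and-measure feedback loop. The platform shows
# category A in proportion to its current estimate that the user prefers
# A; engagement only mildly favors A (a true 55/45 preference); each
# click nudges the estimate toward the category clicked.
#
# Iterating the loop's *expected* update keeps the sketch deterministic:
#   P(show A) = e,  P(click | A) = 0.55,  P(click | B) = 0.45
#   E[change in e] = lr * (e*0.55*(1-e) - (1-e)*0.45*e)
#                  = lr * 0.10 * e * (1-e)
estimate = 0.55
LEARNING_RATE = 0.05
for step in range(1, 1001):
    estimate += LEARNING_RATE * 0.10 * estimate * (1 - estimate)
    if step in (1, 100, 500, 1000):
        print(f"round {step:4d}: estimate = {estimate:.3f}")
# A mild 55/45 taste drifts toward certainty: the loop mostly shows A,
# so it mostly collects evidence for A.
```

The numbers are arbitrary; the shape of the curve, a self-reinforcing drift toward an extreme, is the point.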
Computers can reflect our biases even in the absence of explicit classifications. When an algorithm analyzing recidivism among convicts in Wisconsin returned disproportionately high risk scores for black convicts and disproportionately low ones for whites, that was not the fault of the algorithm but of the analysts, who regimented the data in such a way that racial bias emerged from the analysis even though none was intended.
I was, I think, fortunate in not dealing with this problem directly during my time as a programmer. I saw it a bit at Google, where I worked on the infrastructure for the search engine, but in the 2000s, the Internet was different in that there was a comparative lack of guiding classifications. You weren’t selecting from a drop-down list or a set of hashtags on social media. Instead, you searched for whatever phrase you were looking for and got back a set of ranked results. It was a softer form of quantification than what we experience today, where everything is tagged and classified in order to best shunt people to profitable content.
Q: In the chapter entitled “Programming My Child,” you discuss how you see a crossover between parenting and coding. How did you come to discover this analogy, and has it helped you understand your role as a father?
A: My wife and I are both programmers, and sooner or later programmers will analogize every single thing in their lives to programming. So there’s an element of inevitability there.
Yet the lesson in that chapter is also that you don’t program a child like you do a computer. I joke about how utterly ridiculous it is that my daughter kept getting these “upgrades” out of nowhere: “I mean, really: you pour food in one end and suddenly she gains the ability to crawl and stand and babble? Sure, and you can grow a beanstalk up to the clouds with magic seeds. It doesn’t work for computers, and it doesn’t work for babies.” The joke is that it does, and we still don’t really understand how.
With algorithms, you need to specify what those new abilities are going to be. There aren’t any surprises—except for bugs. But I did see some similarities not with algorithms, but with artificial intelligence, and specifically with machine learning (or “deep learning”). These sorts of learning networks, which are increasingly used to make stock trades, filter content, determine relevance, make content recommendations, and more—are produced by algorithms, but they aren’t algorithms themselves. Rather, they’re feedback systems that adjust themselves based on how humans interact with them. Yet the maddening thing about machine learning is that you lack fine-grained control over what it does, and often you don’t fully understand why it produced a particular result. And that, to my mind, felt a lot more like raising a child.
The comparison helped me understand that parenting is a compromise between gentle guidance and respect for autonomy. You can’t and shouldn’t control a child the way you control an algorithm, but I do strive to give my daughters certain values that I hope will guide them productively. Children are far, far more mysterious and advanced than even the most complicated machine learning algorithm—and I explain exactly how they are in the book—but machine learning is definitely one step toward closing the gap between humans and computers.
Q: What was the most surprising thing you learned about yourself from writing this book? Did the process of writing a memoir come easily?
A: No, it’s probably the hardest thing I’ve ever done. I had to simultaneously write about myself yet maintain enough critical distance not to get mired in solipsism. Sometimes it was like looking into an infinity mirror and not knowing which was the real me.
A memoir isn’t an excuse for talking about yourself however you like. The goal, as with any form of writing, is to communicate something meaningful to your audience. It’s just that the form of the communication is through telling a story about yourself. I tried to tell the most important story about myself that I could find, and also to explain why I think it’s important.
The most surprising thing I learned is something esoteric. While revising, I found I had placed many connecting motifs and structuring themes across the chapters without realizing it at the time. The book was more unified than it felt as I was writing it. So what I learned was that my subconscious is a lot smarter than my conscious mind, and I shouldn’t hamstring it too much.
Q: What do you think are the implications of the web moving from ‘pages’ to ‘people’? What should we be wary of?
A: Since the 1990s, the Internet and the web have been mined for commodities and intelligence, so we should be careful that we don’t become the commodities and intelligence ourselves. We are now constantly trailed by dozens, if not hundreds, of overlapping shadow selves maintained by computers, and they often constrain our experiences before we even have them.
At the same time, computational tools offer unparalleled possibilities for coordinating human understanding and action, possibilities which we desperately need in facing the global-scale problems confronting us today: economic, political, ecological. Right now, the sentiment is mostly pessimistic. I hold out some hope that things will sort themselves out and humanity will find a way to regulate itself in a healthier, more coordinated, and more sustainable way. But we may have to endure a trial by fire before getting there.
Q: What’s next for you?
A: I’m terminally eclectic. I am currently applying quantitative methods of authorial attribution to Renaissance literature, in particular the question of whether William Shakespeare had a hand in the anonymous (but unusually good) Elizabethan play Arden of Faversham. I continue to write about and research computers and artificial intelligence, currently focusing on the risks, real and imagined, of AI and how to regulate them. And I am working on a young adult novel called A Tale of Six Cities, which is something of a cross between Harry Potter and The Matrix.
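For readers curious what quantitative attribution looks like mechanically, here is a toy illustration of function-word stylometry, one family of methods used in such studies. The samples are a few famous lines plus an invented stand-in for a disputed passage, far too small to prove anything; real work (e.g., Burrows’ Delta) uses hundreds of words, z-scored frequencies, and whole plays.

```python
from collections import Counter
import math

# Authors differ measurably in how often they use common grammatical
# words, which are hard to imitate; that is the premise of the method.
FUNCTION_WORDS = ["the", "and", "of", "to", "that", "it", "is", "a"]

def profile(text):
    """Relative frequency of each tracked function word."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two frequency profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

shakespeare = ("to be or not to be that is the question whether tis nobler "
               "in the mind to suffer the slings and arrows of outrageous fortune")
marlowe = ("was this the face that launched a thousand ships and burnt "
           "the topless towers of ilium")
# An invented stand-in for a disputed passage, not a real quotation.
disputed = "it is a deed of that dark night and the thought of it is heavy"

d_s = distance(profile(shakespeare), profile(disputed))
d_m = distance(profile(marlowe), profile(disputed))
print("nearer to the Shakespeare sample" if d_s < d_m else
      "nearer to the Marlowe sample")
```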
Bitwise itself is the first part of a triptych about technology in the world today. Each book will stand on its own but together they’ll form an arc from past to future and from personal to philosophical. The second book will deal with artificial intelligence past, present, and future, and what it means for humanity. The third will explore the limits of the human: just how far we can expect to go with technological aid—and how much we can expect to understand—and what the future could reasonably hold for us.
With my wife, I’m also trying to raise two wonderful daughters in this mad world, so that maybe they’ll be able to figure things out better than I have.