



  Technically Wrong

  Sexist Apps, Biased

  Algorithms, and Other

  Threats of Toxic Tech

  Sara Wachter-Boettcher

  W. W. NORTON & COMPANY

  INDEPENDENT PUBLISHERS SINCE 1923

  NEW YORK LONDON

  For Elena and Audrey, who remind me

  that wrongs are always worth righting.

  Contents

  1. Welcome to the Machine

  2. Culture Misfit

  3. Normal People

  4. Select One

  5. Delighted to Death

  6. Tracked, Tagged, and Targeted

  7. Algorithmic Inequity

  8. Built to Break

  9. Meritocracy Now, Meritocracy Forever

  10. Technically Dangerous

  Acknowledgments

  Notes

  Index

  Chapter 1

  Welcome to the Machine

  Open your phone’s app store or spend a day trawling the tech press, and you’ll probably end up a little excited, a little bewildered, and a lot overwhelmed. You can download an app to track your diet, adjust your thermostat, and find a dog walker. You can order dinner—and tip the delivery person—without saying a word or touching your wallet. You can subscribe to weekly meal kits, monthly stylist-selected clothing deliveries, or even quarterly “maker boxes” curated by none other than Bill Nye (yeah, the science guy). The list goes on and on; whatever you want to do, odds are good you can now do it online.

And we are. In late 2015, the Pew Research Center reported that most American adults go online every day, and 20 percent say they’re online “almost constantly.”1

  I’m one of them. My days usually start with bouncing between checking the news on Twitter and skimming my inbox for important stuff. While walking to the gym, I’ll listen to the news on the NPR One app. While waiting for the train, I’ll answer a question from a teammate via Slack, the private-chat-room service. At some point, I’ll pop over to Facebook to catch up on babies and cats. In the course of a day, I might make travel plans, check local events, look up historical information, get directions, and do a zillion other things online.

I’m also part of the tech industry. In 2006, I was a twenty-three-year-old aspiring writer from Oregon who had found herself stuck in the Arizona suburbs for a while. I had stumbled my way into a job as a copywriter at a small ad agency. We marketed luxury real estate in planned communities: 5,000-square-foot stucco homes with Sub-Zero appliances and infinity-edge pools. Within six months, I’d exhausted all the ways one could possibly describe a house on a golf course in the desert. Then I saw the ad for a web writer. I wasn’t sure what a web writer did, to be honest. But I figured, why not? I’d been using the web since AltaVista was the search engine of choice. Besides, whatever the job was, it sounded a hell of a lot more interesting than hawking granite countertops.

  It turns out 2007 was a good time to enter tech (and, uh, get as far away from real estate as possible). Facebook was just starting to transform from a college-centric site to the behemoth it’s since become. Fledgling messaging service twttr had just renamed itself Twitter. Google had just bought YouTube. The iPhone was about to launch. Pretty soon we’d be watching viral videos, rickrolling our friends, laughing at lolcats—and, of course, managing all that mundane stuff like banking and shopping from our screens. And that meant pretty much every business was aiming not just to have a website, but to figure out how technology might change the way it served its customers.

  So here we are, a decade later, and technology is so pervasive that a version of psychologist Abraham Maslow’s hierarchy of needs with “WiFi” added to the base of the pyramid has become one of the most enduring internet memes around.

  Maslow’s hierarchy, updated.

Even if you’re not someone who checks Instagram five times during dinner, you probably rely on connected technology for all kinds of things you used to do in person or over the phone. Applying for a job? Many companies only take applications submitted online. Contacting a business? How would you even find its phone number without visiting its site or asking Google? Booking tickets, researching a candidate, making a restaurant reservation, figuring out bus routes in a new place—sure, you can do all these tasks offline, but it’s getting harder to manage. (Just try to find a good printed map of public transit in the next city you visit.)

  As technology shifted, so did my role—morphing from “write something better than ‘click here’” to “help us figure out why we have stuff on the web in the first place, and how to communicate there more effectively.” I’ve advised startups, universities, nonprofits, and even massive corporations that are still struggling to move to the web. I’ve designed large-scale websites, devised publishing strategies, and used many more than my fair share of sticky notes and whiteboards to map out user flows or sketch product features.

  Eventually, though, something started to feel off. Despite all the improvements in technology, my peers and I weren’t getting better at serving people.

  It really hit me at the end of 2014, when my friend Eric Meyer—one of the web’s early programmers and bloggers—logged onto Facebook. It was Christmas Eve, and he expected the usual holiday photos and well-wishes from friends and families. Instead, Facebook showed him an ad for its new Year In Review feature.

  Year In Review allowed Facebook users to create albums of their highlights from the year—top posts, photos from vacations, that sort of thing—and share them with their friends. But Eric wasn’t keen on reliving 2014, the year his daughter Rebecca died of aggressive brain cancer. She was six.

  Facebook didn’t give him a choice. Instead, it created a sample Year In Review album for him, and posted it to his page to encourage him to share it. “Here’s what your year looked like!” the copy read. Below it was a picture of Rebecca—the most popular photo Eric had posted all year. And surrounding Rebecca’s smiling face and curly hair were illustrations, made by Facebook, of partiers dancing amid balloons and streamers.

  The Year In Review promotion that Facebook created for Eric Meyer. In the center is his daughter Rebecca, who died of cancer that year. (Eric Meyer)

  He was gutted.

  “Yes, my year looked like that,” he wrote in Slate as his story went viral. “True enough. My year looked like the now-absent face of my Little Spark. It was still unkind to remind me so tactlessly, and without any consent on my part.”

  Facebook had designed an experience that worked well for people who’d had a good year, people who had vacations or weddings or parties to remember. But because the design team focused only on positive experiences, it hadn’t thought enough about what would happen for everyone else—for people whose years were marred by grief, illness, heartbreak, or disaster.

  People like Eric paid the price.

  . . .

  It’s not just Facebook, and it’s not just grief or trauma. The more I started paying attention to how tech products are designed, the more I started noticing how often they’re full of blind spots, biases, and outright ethical blunders—and how often those oversights can exacerbate unfairness and leave vulnerable people out.

  Like in the spring of 2015, when Louise Selby, a pediatrician in Cambridge, England, joined PureGym, a British chain. But every time she tried to swipe her membership card to access the women’s locker room, she was denied: the system simply wouldn’t authorize her. Finally, PureGym got to the bottom of things: the third-party software it used to manage its membership data—software used at all ninety locations across England—was relying on members’ titles to determine which locker room they could access. And the title “Doctor” was coded as male.2

Or in March of 2016, when JAMA Internal Medicine released a study showing that the artificial intelligence built into smartphones from Apple, Samsung, Google, and Microsoft isn’t programmed to help during a crisis. The phones’ personal assistants didn’t understand words like “rape” or statements like “my husband is hitting me.” In fact, instead of doing even a simple web search, Siri—Apple’s product—cracked jokes and mocked users.3

  It wasn’t the first time. Back in 2011, if you told Siri you were thinking about shooting yourself, it would give you directions to a gun store. After getting bad press, Apple partnered with the National Suicide Prevention Lifeline to offer users help when they said something that Siri identified as suicidal. But five years later, no one had looked beyond that one fix. Apple had no problem investing in building jokes and clever comebacks into the interface from the start. But investing in crisis or safety? Just not a priority.

  Siri’s responses to a series of queries in March 2016. (Sara Wachter-Boettcher)

  Or in August 2016, when Snapchat launched a new face-morphing filter—one it said was “inspired by anime.” In reality, the effect had a lot more in common with Mickey Rooney playing I. Y. Yunioshi in Breakfast at Tiffany’s than a character from Akira. The filter morphed users’ selfies into bucktoothed, squinty-eyed caricatures—the hallmarks of “yellowface,” the term for white people donning makeup and masquerading as Asian stereotypes. Snapchat said that this particular filter wouldn’t be coming back, but insisted it hadn’t done anything wrong—even as Asian users mounted a campaign to delete the app.

  Mickey Rooney’s depiction of Mr. Yunioshi (left) offers a typical example of yellowface. Snapchat’s filter produced the selfie of Grace Sparapani shown on the right. (Everett Collection; Grace Sparapani)

  Individually, it’s easy to write each of these off as a simple slipup—a misstep, an oversight, a shame. We all make mistakes, right? But when we start looking at them together, a clear pattern emerges: an industry that is willing to invest plenty of resources in chasing “delight” and “disruption,” but that hasn’t stopped to think about who’s being served by its products, and who’s being left behind, alienated, or insulted.

  . . .

  There’s a running joke in the HBO comedy Silicon Valley: every would-be entrepreneur, almost always a twentysomething man, at some point announces that his product will “make the world a better place”—and then describes something either absurdly useless or technically trivial (“constructing elegant hierarchies for maximum code reuse and extensibility,” for example).

  I’m sure it’s funny, but I don’t actually watch the show regularly. It’s too real. It brings me back to too many terrible conversations at tech conferences: some guy who’s never held a job in his life backing me into a corner at cocktail hour and droning on and on about his idea to “disrupt” some industry or other, while I desperately scan the room for a way out.

  What Silicon Valley gets right is that tech is an insular industry: a world of mostly white guys who’ve been told they’re special—the best and brightest. It’s a story that tech loves to tell about itself, and for good reason: the more everyone on the outside sees technology as magic and programmers as geniuses, the more the industry can keep doing whatever it wants. And with gobs of money and little public scrutiny, far too many people in tech have started to believe that they’re truly saving the world. Even when they’re just making another ride-hailing app or restaurant algorithm. Even when their products actually harm more people than they help.

We can’t afford that anymore. Ten years ago, tech was still, in many ways, a discrete industry—easy to count and quantify. Today, it’s more accurate to call it a core underpinning of every industry. As tech entrepreneur and activist Anil Dash writes, “Every industry and every sector of society is powered by technology today, and being transformed by the choices made by technologists.”4

  Including, it’s now clear, democracy.

I’m writing this in the wake of the 2016 presidential election—an election that gave us an American president who is infamous for allegations of sexual assault, racism, conflicts of interest, and collusion, as well as for angry tweetstorms, and who rode to power on a wave of misinformation. That misinformation was, at least in part, stoked by a proliferation of fake-news stories and propaganda pieces that were picked up by social media algorithms and promoted as “trending,” without any verification.

  We can’t know for sure how much those stories, and sites like Facebook that put them in front of millions of readers, influenced the election. But too many of us don’t even know this is happening in the first place—because we simply don’t know enough about the design and technology choices that shape our world, or the people who are making them.

  If you’re like most people, you probably don’t read Terms of Service agreements when you install new software (but you might grumble about how unintelligible they are). You probably don’t know how Facebook decides which ads you should see (but you might find it creepy when they start trying to sell you a product you looked at last week). You probably don’t spend your days deconstructing how your digital products were designed, and why.

But we can—and, as I’ll show in this book, we all must. Because tech is only going to become more fundamental to the way we understand and interact with our communities and governments. Courts are using software algorithms to influence criminal sentencing. Detailed medical records are being stored in databases. And, as information studies scholar Safiya Noble puts it, “People are using search engines rather than libraries or teachers to make sense of the world we’re inhabiting.”5

  It’s not that digitizing the world is inherently bad. But the more technology becomes embedded in all aspects of life, the more it matters whether that technology is biased, alienating, or harmful. The more it matters whether it works for real people facing real-life stress.

  The great news is that understanding tech culture’s excesses, and the effect they have on our digital lives, is easier than you might think. You don’t need a computer science degree or a venture capital fund. You don’t need to be able to program an algorithm. All you need to do is slough away the layers of self-aggrandizement and jargon, and get at the heart of how people in technology work—and why their decisions so often don’t serve you.

  In this book we’ll take a closer look at how the tech industry operates, and see how its hiring practices and work culture create teams that don’t represent most of us—no matter how many “diversity” events these companies put on.

  Then we’ll walk through ways these design and development teams create shallow perceptions of audiences and their needs, and how those perceptions lead to products that, at best, leave out huge percentages of users—and at worst, take advantage of our personal data and encode bias into systems that hold tremendous power over real people’s lives and livelihoods.

Along the way we’ll also meet people and companies who are trying to change things: The neighborhood-based community site that’s reducing racist posts by changing the forms users fill out. The news organization bucking journalism’s trend toward shock and designing its app to provide clarity, not anxiety. The email-marketing platform that’s focused on empathetic communication rather than endless peppy pitches.

  By the time we’re done, I hope you’ll see tech more like I do: not magical, but fallible—and ripe for change. Even more, I hope you’ll feel comfortable asking hard questions of the digital products you use, and the people who make them. Because tech has spent too long making too many people feel like they’re not important enough to design for. But, as we’ll see, there’s nothing wrong with you. There’s something wrong with tech.

  Chapter 2

  Culture Misfit

Designers and technologists don’t head into the office planning to launch a racist photo filter or build a sexist assumption into a database. So how do these alienating, unethical, and downright offensive decisions unfold—over and over again? We can see a common example in the story of Fatima, a Middle Eastern American design strategist based in the Bay Area.

  A couple years ago, Fatima was working at the Silicon Valley innovation center for a large corporation—the sort of startup-like “lab” you’ll see associated with all kinds of big companies these days (Walmart, Volkswagen, and Capital One all have them). These centers are meant to generate new product ideas, experiment with emerging technologies, and ultimately build prototypes and products.

  It was 2015. The first Apple Watch had just arrived. Smartwatch sales were climbing. And Fatima’s company wanted to get in on the action. It partnered with a major fashion brand, with the goal of designing a women’s smartwatch—something fashion-forward, to serve as an alternative to all the “ugly” wearables that tech companies were launching.

  As the project kicked off, Fatima sat down with the teams from both companies—and was literally the only woman at the table. Pretty soon, someone started a video meant to show the product’s positioning. It was all flash: yacht parties, private jets, $2,000 shoes. Fatima cringed. The smartwatch they were designing was meant to target the midrange market—think Macy’s, not Neiman Marcus.

  “Let’s wait until we get some research to make decisions,” she said, trying to pull the kickoff session back on track. No one paid attention. Instead, she spent the next hour listening to older men tell her about the “female market,” using tales of their wives’ shopping habits as proof.

But Fatima didn’t give up. She worked with a colleague from another office to develop an aggressive research program, gleaning insights from more than a thousand people across the United States and in a couple of other prominent markets. Fatima then brought their findings back to the group.

The team wanted to target women who were fashionable, tech-savvy, or both. About two-thirds of respondents identified as the former, and half as the latter. Except the men refused to believe Fatima. As soon as she started presenting her data, they wrote her off: “Oh, 51 percent of the women can’t be tech-savvy,” they said.