Where Technology & Ethics Collide
SELECT MEDIA APPEARANCES
“Digital Age Etiquette,” TEDx talk
Guest on Huffington Post Live, “Tech at a Funeral,” Nov. 15, 2012.
Guest on Huffington Post Live, “End of Privacy,” January 18, 2013.
Guest on “All Eyes on Privacy: Transparency in the New Economy,” sponsored by The National Journal and The Atlantic, June 13, 2013.
Guest on Huffington Post Live, “Tech Qs: College Edition,” August 7, 2013.
Interviewed on Wall Street Journal Live, “Think Before You Tweet, If You’re Sitting In Class,” September 6, 2013.
Interviewed by David Berreby at Big Think, “Why Your Devices Shouldn’t Do The Work Of Being You,” September 2, 2014.
Guest on The Current, “Apple’s iOS 8 QuickType Keyboard Could Turn Us Into Predictable Versions of Ourselves,” October 1, 2014.
Guest on The Current, “Jian Ghomeshi and the Rush to Judgment,” October 28, 2014.
Guest on The Current, “Reporting on Sony hack has some blaming media as enabling crime,” December 16, 2014.
SELECT ARTICLES QUOTING ME OR DISCUSSING MY IDEAS
Anna North, NY Times November 4, 2014: “Your OkCupid Self”
Anna Altman, NY Times September 7, 2014: “A College Education Should Include Rooming With A Stranger”
Natasha Singer, NY Times September 6, 2014: “OkCupid’s Unblushing Analyst of Attraction”
Anna North, NY Times September 5, 2014: “Will the New Autocorrect Steal Your Soul?”
Mathew Ingram, Gigaom July 29, 2014: “It’s Complicated: why we need a new etiquette for handling what’s private and what’s public”
Will Oremus, Slate July 28, 2014: “Zillow is Becoming the Facebook of Homes”
Harry Bruinius, Christian Science Monitor July 1, 2014: “Facebook’s Secret Experiment On Users Had a Touch of Inception.”
Steve Rosenbush, Wall Street Journal June 30, 2014: “Facebook Furor is Overblown.”
Bianca Bosker, Huffington Post June 6, 2014: “Should Online Ads Really Offer Binge Drinkers a Booze Discount?”
Aviva Rutkin, New Scientist May 28, 2014: “App takes the strain out of tricky moral dilemmas.”
Dave Baxter, Business Reporter May 12, 2014: “Are you being served? Outsourcing your life.”
Harry Bruinius, Christian Science Monitor April 27, 2014: “The many rewards, and the hidden risks, of high-frequency trading.”
Zoe Corbyn, The Guardian April 6, 2014: “Google Glass – wearable tech, but would you wear it?”
Yannick LeJacq, Kotaku April 3, 2014: “Here’s the Next-Gen Gaming Toothbrush.”
Margaret Meyers, PBS News Hour, March 14, 2014: “Turns out Google’s not so hot at predicting flu cases.”
Hal Hodson, New Scientist, March 13, 2014: “Google Flu Trends gets it wrong three years running.”
Alex Goldman, NPR’s “On the Media,” March 13, 2014: “Google Flu Trends is Wrong. A Lot.”
Abby Ellin, NY Times March 10, 2014: “After Online Dating, Online Making Up.”
Michelle Quinn, San Jose Mercury News February 28, 2014: “Companies Struggle With How We Live On In The Digital Afterlife.”
Tom Risen, U.S. News and World Report February 27, 2014: “Is the Internet Bad for Society and Relationships?”
Maggie Lane, NY Magazine February 26, 2014: “BroApp Will Automatically Send Sweet Texts to Your Girlfriend, You Lazy, Thoughtless Swine.”
Hal Hodson, New Scientist January 9, 2014: “Lifelogging: Digital Locker Looks After Your Stuff.”
Quentin Hardy, NY Times January 7, 2014: “Webcams See All (Tortoise, Watch Your Back)”
Yannick LeJacq, NBC News December 18, 2013: “‘The Walk’: Video game challenges you to get off the couch”
Harry Bruinius, Christian Science Monitor December 5, 2013: “Newtown 911 calls: Should Media Have Released Them?”
Hal Hodson, New Scientist November 7, 2013: “Data trackers monitor your life so they can nudge you”
Rebecca Rosen, The Atlantic, August 16, 2013: “What Does it Really Matter if Companies Are Tracking Us Online?”
Jathan Sadowski, Slate June 28, 2013: “The Injustices of Open Data”
Stephen Hutcheon, The Sydney Morning Herald June 17, 2013: “Coming Soon to Facebook’s Graph Search, More Users, More Data”
Jasmine Aguilera, Democrat and Chronicle June 13, 2013: “RIT’s Selinger: Poll shows trust level in e-surveillance”
D.E. Wittkower, Wall Street Journal June 8, 2013: “NSA Scandal: Does America Need a Digital Bill of Rights?”
Yannick LeJacq, NBC News May 25, 2013: “Xbox? More like Xbody: Future game consoles will get under your skin.”
Megan Garber, The Atlantic May 23, 2013: “Do You and Google Need a Relationship Counselor?”
Bianca Bosker, Huffington Post May 15, 2013: “The Truth Behind Google’s Bizarre Mission to Make Tech ‘Go Away’”
Gabriel Beltrone, AdWeek May 14, 2013: “The Ad Smackdown: Facebook Vs. Google”
A.C. Lee, The New York Times April 24, 2013: “Zuckerberg Unbound”
Dan Mitchell, San Francisco Weekly, April 23, 2013: “Facebook is Helping Us Disconnect from the World”
Xavier de la Porte, Le Monde April 12, 2013: “Politesse 2.0” (“Politeness 2.0”)
Leon Neyfakh, Boston Globe April 7, 2013: “Taxes: A Love Story”
Eric Zorn, Chicago Tribune April 4, 2013: “Thx”
Clive Thompson, Wired March 25, 2013: “Relying on Algorithms Can Be Really Dangerous”
Jathan Sadowski, The Atlantic February 26, 2013: “Why Does Privacy Matter? One Scholar’s Answer”
Ari Melber, The Nation January 27, 2013: “Why Graph Search Could Be Facebook’s Largest Privacy Violation Ever”
Mathew Ingram, Business Week January 24, 2013: “You Can’t Hide From Facebook Graph Search”
Bianca Bosker, Huffington Post January 22, 2013: “Siri Rising: The Inside Story of Siri’s Origins”
Gracie Bonds Staples, Atlanta Journal Constitution January 16, 2013: “Armstrong Faces Big Climb to Redemption”
Yannick LeJacq, Wall Street Journal December 21, 2012: “After Sandy Hook, Should Violent Video Games Call For a Cease-Fire?”
Ned Resnikoff, MSNBC December 19, 2012: “What Can Philosophy of Technology Tell Us About the Gun Debate?”
Yannick LeJacq, International Business Times December 1, 2012: “How to Make An AR-15 Or Another Gun From Home: Start With a 3-D Printer”
Rebecca Rosen, The Atlantic November 20, 2012: “The Mannequins Will Be Watching You”
PASSCODE at CHRISTIAN SCIENCE MONITOR
“What is intellectual privacy and how yours is being violated”
An interview with Neil Richards
Christian Science Monitor February 25, 2015
Privacy may be one of the biggest casualties of the Digital Age. If it’s not the government storing records of cellphone calls, it’s advertisers tracking our every online click. While limiting government and corporate snooping is now a matter of heated debate, the notion of guarding intellectual privacy has yet to generate much fuss.
“Frank Pasquale unravels the new machine age of algorithms and bots”
An interview with Frank Pasquale
Christian Science Monitor January 28, 2015
Slate recently said Frank Pasquale’s new book, “The Black Box Society: The Secret Algorithms That Control Money and Information,” attempts to “come to grips with the dangers of ‘runaway data’ and ‘black box algorithms’ more comprehensively than any other book to date.”
I recently spoke with Pasquale about his new book and about how algorithms play a major role in our everyday lives — from what we see and don’t see on the Web, to how companies and banks classify consumers, to influencing the risky deals made by investors.
Read Full Article
“With big data invading campus, universities risk unfairly profiling their students”
An interview with Jeffrey Alan Johnson
Christian Science Monitor January 13, 2015
Privacy advocates have long been pushing for laws governing how schools and companies treat data gathered from students using technology in the classroom. Most now applaud President Obama’s newly announced Student Digital Privacy Act to ensure “data collected in the educational context is used only for educational purposes.”
But while young students are vulnerable to privacy harms, things are tricky for college students, too. This is especially true as many universities and colleges gather and analyze more data about students’ academic — and personal — lives than ever before.
“The case against publishing hacked Sony e-mails“
Christian Science Monitor December 12, 2014
By now you’ve probably read something about the hacked Sony Pictures Entertainment e-mails containing confidential information, including unflattering remarks made by Sony executives. Stories with titillating gossip about celebrities are everywhere. Even Time peddled an “outrageous” schadenfreude listicle.
The voyeuristic coverage of dirty laundry not meant to be publicly aired is morally revolting. It’s a predictable, yet deplorable way to bring eyeballs to pages and screens. So long as ill-gotten, petty gossip stories run in high-profile places, there’s incentive for future hackers to get their kicks and settle beefs illegally.
“Robot Servants Are Going to Make Your Life Easy. Then They’ll Ruin It.”
Wired September 5, 2014
Jibo, the “world’s first family robot,” hit the media hype machine like a bomb. From a Katie Couric profile to coverage in just about every outlet, folks couldn’t get enough of this little robot with a big personality, poised to bring us a step closer to the world depicted in “The Jetsons,” where average families have maids like Rosie. In the blink of an eye, pre-orders climbed past $1.8 million and blew away the initial fundraising goal of $100,000.
But, should we let robot servants into our lives?
“How to Stop Facebook From Making Us Pawns in Its Corporate Agenda”
(with Woodrow Hartzog)
Wired July 1, 2014
You didn’t know it, but Facebook used some of you to manipulate your friends.
Even though you can’t anticipate how a company will integrate your data into its undisclosed activities, you’re still unintentionally providing grist for the manipulation mill. In the case of Facebook’s most recently published study, the company used the words of some of you—and we can’t know who—in ways you certainly did not intend, to tweak News Feed based on emotional indicators to measure the effect it would have on mood. But this study is not unique. Social media regularly manipulates how user posts appear; the abuse of socially shared information has become a collective problem that requires a collective response.
This is a call to action. We should work together to demand that companies promise not to make us involuntary accomplices in corporate activities that compromise other people’s autonomy and trust.
“You’ve Been Obsessing Over Your Likes and Retweets Way Too Much”
Wired June 9, 2014
The digital age version of the proverbial tree falling in the woods question is: Does something exist if it hasn’t been liked, favorited, linked to, or re-tweeted? According to many tech critics, the tragic answer is no. Like Lady Gaga, we live for the applause. But if constantly chasing other people’s approval is a shallow way to live that leads to time and energy being wasted over pleasing others and recurring feelings of insecurity and emptiness, how can we course correct?
“Google Can’t Forget You, But It Should Make You Hard to Find”
With Woodrow Hartzog
Wired May 20, 2014
As soon as news spread that the European Court of Justice now requires search engines like Google to allow people to “be ‘forgotten’ after a certain time by erasing links to web pages,” critics in the U.S. worried that the decision would break the internet.
While there’s good reason to disagree with the European ruling, we should avoid being too self-congratulatory. As the recent Snapchat debacle illustrates, the language that’s driving key privacy discussions worldwide is fostering false expectations and diverting attention away from what should be the focal point: the proper way to enhance or preserve obscurity.
“Colleges Need to Act Like Startups–Or Risk Becoming Obsolete”
With Andrew Phelps
Wired March 5, 2014
The Golden Age of universities may be dead. And while much of the commentary around the online disruption of education ranges from cost-benefit analyses to assessments of the ideology driving MOOCs (massive open online courses), the real question becomes: what is the point of the university in this landscape?
It’s clear that universities will have to figure out the balance between commercial relevance and basic research, as well as how to prove their value beyond being vehicles for delivering content. But lost in the shuffle of commentary here is something arguably more important than, and yet containing, all of these factors: culture.
“Today’s Apps are Turning us into Sociopaths”
Wired February 26, 2014
While I am far from a Luddite who fetishizes a life without tech, we need to consider the consequences of this latest batch of apps and tools that remind us to contact significant others, boost our willpower, provide us with moral guidance, and encourage us to be civil. Taken together, we’re observing the emergence of tech that doesn’t just augment our intellect and lives — but is now beginning to automate and outsource our humanity.
But let’s take a concrete example. Instead of doing the professorial pontification thing we tech philosophers are sometimes wont to do, I talked to the makers of BroApp, a “clever relationship wingman” (their words) that sends “automated daily text messages” to your significant other. It offers the promise of “maximizing” romantic connection through “seamless relationship outsourcing.”
“The ‘Mood Graph’: How Our Emotions Are Taking Over the Web”
Wired August 19, 2013
Recently, URL shortener Bitly announced a beta version of its tool for “Feelings,” a “fun bookmarklet to express how you feel about the content you’re sharing.” Its tagline, however, is even more telling: “Because you don’t ‘like’ everything.” This is a subtle jab at Facebook’s Like button, even though Facebook, too, provided a way earlier this year for its users to more broadly express how they feel by selecting from a dropdown menu of options (happy, sad, tired, etc.) and emoji.
All of this is especially interesting when you consider the most recent research finding, released this past week, that Facebook may “provide an invaluable resource for fulfilling the basic human need for social connection,” but “rather than enhancing well-being … [it] may undermine it.”
Oh, the irony: Facebook keeps expanding the emotional bandwidth of its interface, yet its users are still depressed.
“Facebook Home Propaganda Makes Selfishness Contagious”
Wired April 22, 2013
The new ads for Facebook Home are propaganda clips. Transforming vice into virtue, they’re social engineering spectacles that use aesthetic tricks to disguise the profound ethical issues at stake. This isn’t an academic concern: Zuckerberg’s vision (as portrayed by the ads) is being widely embraced — if the very recent milestone of half a million installations is anything to go by.
“How We’re Turning Digital Natives Into Etiquette Sociopaths“
Wired March 26, 2013
Let’s face it: Technology and etiquette have been colliding for some time now, and things have finally boiled over if the recent spate of media criticisms is anything to go by. There’s the voicemail, not to be left unless you’re “dying.” There’s the e-mail signoff that we need to “kill.” And then there’s the observation that what was once normal — like asking someone for directions — is now considered “uncivilized.”
Cyber-savvy folks are arguing for such new etiquette rules because in an information-overloaded world, time-wasting communication is not just outdated — it’s rude. But while living according to the gospel of technological efficiency and frictionless sharing is fine as a Silicon Valley innovation ethos, it makes for a downright depressing social ethic.
“We Grip the Gun and the Gun Grips Us”
Wired Dec. 21, 2012
David Dobbs introduces and re-posts my Atlantic article on philosophy and guns.
“Will autocomplete make you too predictable?”
BBC Future January 15, 2015
Do you know what you really want? Right now, there are computers all over the world busily trying to tell you the answer – often before you know yourself.
If you’ve bought books or music on Amazon, watched a film on Netflix or even typed a text message, then these mind-reading machines may have steered you to that choice by making recommendations. These predictive algorithms work by finding patterns in our previous behaviour and making inferences about our future desires – and they are everywhere.
“Google vs. our humanity: How the emerging ‘Internet of Things’ is turning us into robots”
Salon May 22, 2014
According to a new Pew Research Center report, by the time 2025 rolls around, the Internet of Things will dramatically improve our lives. Janna Anderson, co-author of the document, says experts expect “positive change in health, transportation, shopping, industrial production and the environment.” While these are genuine possibilities, I’m worried that insufficient attention is being paid to a troubling issue that goes beyond potential privacy problems: the moral cost of outsourcing our decisions to increasingly interconnected smart devices.
“Satire’s Corporate Takeover: ‘Community,’ ‘Silicon Valley,’ and the Entertainment Industrial Complex”
Salon May 11, 2014
More than 20 years ago, David Foster Wallace lamented that television had co-opted irony, using the medium to flatter viewers into believing they were smarter than the rest of the naïve public – all the while lulling them into consuming more and more of the products advertised on television, just like everyone else. While irony perhaps has gotten an unduly bad rap, Wallace was absolutely right to worry about the manner in which the entertainment-industrial complex has been doling out winks to the viewer. Today the very tools that appear to dilute the power of advertising only reinforce its authority — an issue that’s especially troubling, given the fuzzy line protecting independent content in the digital age.
“Don’t Outsource Your Dating Life”
CNN May 1, 2014
Critics haven’t been kind to Personal Dating Assistants, a new service that allows men to up their online dating game by outsourcing tasks to paid, clandestine wingmen who pimp profiles, locate prospects and ghostwrite correspondences. GQ calls it “creepy.” CNET says customers eventually will have to admit they are big fakes. And over at Jezebel, dudes who take advantage of the deception are called “human trash.”
Unfortunately, Personal Dating Assistants is a sign of things to come. Thanks to technology, we’ll be seeing more opportunities to degrade ourselves and others through outsourcing activities that are basic to our humanity.
“Fighting Facebook, A Campaign for a People’s Terms of Service”
with Ari Melber and Woodrow Hartzog
The Nation May 22, 2013
Facebook is on the defensive again. Members of the social networking site sued the company for co-opting their identities in online ads, and Facebook agreed to revise its “Statement of Rights and Responsibilities” and offer a $20 million settlement. The case has drawn less attention than the dorm disputes portrayed in “The Social Network,” but the impact is far wider. An underpublicized aspect of the dispute concerns the power of online contracts, and ultimately, whether users or corporations have more control over life online.
“Why Is Facebook Putting Teens at Risk?”
with Woodrow Hartzog
Bloomberg Opinion October 24, 2013
When Facebook Inc. recently lifted its restriction on public posts by teenagers, some privacy scholars applauded the move as a win for parents — offering them a chance to teach their children about digital accountability. They may be overstating the case, however. If information and communication technologies aren’t designed to help users — especially younger ones — guard their information, appeals to good judgment and discipline won’t go very far.
“Don’t Let Nudges Become Shoves“
New Scientist June 22, 2013, p. 37
Online version titled “Nudge: When does persuasion become coercion?”
NUDGES are born of good intentions and clever ideas. Alas, that’s not enough.
I once proposed a nudge to promote online civility. I suggested that magazines and newspapers should moderate comments using a variation of ToneCheck, an “emotional spell-checker” for email that prompts users to tone down angry messages.
Richard Thaler, one of the chief architects of nudge, loved it, tweeting: “A Nudge dream come true.” But my students saw a problem: legitimate opinions getting censored or watered down. The lesson I learned is that nudge designers must always consider the possibility of unintended consequences. In fact, that is only one of many concerns about nudging.
As I found, creating effective nudges is difficult. Thaler and Cass Sunstein’s influential book Nudge creates the impression that nearly anyone can do it. All you need is a basic understanding of how …
THE WALL STREET JOURNAL
“E-Etiquette in the Classroom”
Wall Street Journal Review Section, C-4, September 7, 2013
“There’s a widely shared image on the Internet of a teacher’s note that says: ‘Dear students, I know when you’re texting in class. Seriously, no one just looks down at their crotch and smiles.’
College students returning to class this month would be wise to heed such warnings. You’re not as clever as you think—your professors are on to you. The best way to stay in their good graces is to learn what behavior they expect with technology in and around the classroom.”
“The Online Funeral”
Wall Street Journal Nov. 6, 2012
My grandfather died on Halloween. Thanks to Hurricane Sandy, none of the New York family members could attend the funeral in Massachusetts. Fortunately, another option became available: The ceremony was streamed online, and so my wife, daughter and I gathered around a laptop in our living room to watch the live webcast.
The rabbi began by giving technology center stage, poignantly acknowledging that the virtual participants played an important role in honoring the deceased’s memory. After that, technology receded into the background for the Massachusetts crowd. My grandmother looked like a bereaved widow. Online coverage didn’t affect her demeanor—or anyone else’s.
“The Outsourced Lover”
The Atlantic February 14, 2014.
If you’re looking to add a digital spark to your relationship this Valentine’s Day, you can download the new app Romantimatic.
Romantimatic will send you scheduled reminders to contact your significant other and give you pre-set messages to fire off. The pre-set messages include simple, straightforward classics like “I love you” and “I miss you.”
Or maybe that doesn’t sound appealing. It sure doesn’t to me. In that case, I recommend you follow my lead: Take a solemn oath before the Greek god Eros and vow to never, ever go this far down the outsourced sentiment rabbit hole.
“I See You: The Databases that Facial Recognition Apps Need to Survive”
with Woodrow Hartzog
The Atlantic January 23, 2014
Privacy concerns have been ignited by “NameTag,” a facial-recognition app designed to reveal personal information after analyzing photos taken on mobile devices. Many are concerned that Google Glass will abandon its prohibition on facial recognition apps. And, there are open questions about the proper protocols for opting customers in and out of services that identify people through facial comparisons in real time. These kinds of services are technically “face matching” services, though they are colloquially referred to here as “facial-recognition technologies.”
“How Not to Be a Jerk With Your Stupid Smart Phone”
The Atlantic Nov. 4, 2013
As technology expands our communicative reach, new opportunities to be rude inevitably arise. Some people overreact to this incivility by turning to uniform and mechanical etiquette rules, hoping to make things better by constraining choices and limiting situational judgment. But for societies that value diversity and autonomy, general mandates—like expecting everyone to turn off their cell phones in theaters—only work in exceptional cases.
“Quitters Never Win: The Costs of Leaving Social Media”
with Woodrow Hartzog
The Atlantic February 15, 2013
Simple solutions have been proposed to help users cope with the vulnerability of disclosing information on the social web. These remedies are clear and decisive, but they demand significant trade-offs — perhaps greater sacrifice than typically is acknowledged.
One such option, which Farhad Manjoo, the technology columnist at Slate, bluntly spelled out in a two-word article, “How to Stay Private on Facebook,” is “Quit Facebook.” Manjoo offers this security-centric path for folks who are anxious about the service being “one of the most intrusive technologies ever built,” and believe that “the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club.” Bottom line: stop tuning in and start dropping out if you suspect that the culture of oversharing, digital narcissism, and, above all, big-data-hungry corporate profiteering will trump privacy settings.
“Obscurity: A Better Way to Think About Your Data Than ‘Privacy’”
with Woodrow Hartzog
The Atlantic January 17, 2013
Facebook’s announcement of its new Graph search tool on Tuesday set off yet another round of rapid-fire analysis about whether Facebook is properly handling its users’ privacy. Unfortunately, most of the rapid-fire analysts haven’t framed the story properly. Yes, Zuckerberg appears to be respecting our current privacy settings. And, yes, there just might be more stalking ahead. Neither framing device, however, is adequate. If we rely too much on them, we’ll miss the core problem: the more accessible our Facebook information becomes, the less obscurity protects our interests.
While many debates over technology and privacy concern obscurity, the term rarely gets used. This is unfortunate, as “privacy” is an over-extended concept. It grabs our attention easily, but is hard to pin down. Sometimes, people talk about privacy when they are worried about confidentiality. Other times they evoke privacy to discuss issues associated with corporate access to personal information. Fortunately, obscurity has a narrower purview.
Huffington Post Live had me on as a guest for a follow-up show, “The End of Privacy,” on January 18, 2013.
The Atlantic December 16, 2012
Racism is ugly to confront, and, like most people, I’ve got plenty of personal stories. My grandmother, bless her heart, was a wonderful grandmother, but like many Jewish people of her generation, she was incredibly racist, afraid of black people she didn’t know. This fear caused her anxiety when she got the urge to go to a favorite restaurant. She loved the food, but, as she would derisively say, so did the schvartze (Yiddish slur for a black person).
What if she didn’t have to see the black people at all? This possibility is what worries me about our augmented-reality future, which is (mostly) anticipated with optimism. If grandma had lived to see ubiquitous augmented reality, I suspect she’d put it to dehumanizing use, leaving for the restaurant with her goggles on (a less obtrusive artifact than the Coke bottle glasses she actually wore), programming them to make all dark-skinned people look like variations of Larry David and Rhea Perlman. As Brian Wassom – who regularly writes on augmented reality – notes, if apps can “recognize a particular shade of melanin, and replace it with another,” racists could one day “live in their own version of…utopia.”
“Can a Robot Learn to Cook?”
Co-authored with Evelyn Kim
The Atlantic October 9, 2012
Everyone’s coming over to watch the big game. You’ve got beer, a giant high-definition television, and a well-deserved reputation for serving wings hotter than Dante’s eighth circle of hell. Unfortunately, you are pressed for time. Wouldn’t it be great if a machine like Rosey from The Jetsons could quickly prepare them? Maybe you could even pass off the dish as your own!
Then again, maybe not. Would Rosey’s version taste like yours, or would her rendition expose your duplicity? Could she cut the chicken into the right size parts and ensure your friends don’t choke on bone chips? Would Rosey know when the chicken pieces hit the ideal state of crispiness without being raw inside? Most importantly, could she discern when the spice Rubicon was crossed? These questions all revolve around one issue: Can Rosey acquire tacit knowledge?
“‘But Everybody’s Doing it!’: Lance Armstrong and the Philosophy of Making Bad Decisions“
The Atlantic August 28, 2012
Lance Armstrong’s decision not to fight the U.S. Anti-Doping Agency has drawn a mixed response: supporters and detractors wasted no time before airing their views. While some supporters maintain that the lack of incriminating evidence is key, others have stated that Armstrong still deserves our sympathy even if he is guilty of using banned substances. It is crucial to understand why this might be the case, as the implications of the judgment extend well beyond feelings directed at a high-profile athlete.
The sympathy-for-a-possible-cheater argument is expressed clearly in “Pillorying Armstrong: Complete Nonsense,” a piece co-written by Arthur Caplan, one of the most famous bioethicists in the U.S., and two other NYU professors. The authors write: “Shouldn’t Armstrong, especially because of the inspiration he is to cancer survivors or anyone on the short end of the advantage stick, get a pass for being no more dirty, but a whole lot better than everyone else in his sport? Armstrong isn’t being investigated as the only cheater. He is in all likelihood just the best, most talented one.” In other words, we should feel bad for Armstrong because LiveStrong promotes so much social good that it blunts part of the cheating stain, and because professional cycling is rotten to the core, filled with so many cheaters that breaking the rules is the only viable way to compete.
“Nudge, Nudge: Can Software Prompt Us Into Being More Civil?”
The Atlantic July 30, 2012
The closer we get to the presidential election, the more concern gets raised about how divided the country is and how acrimonious our discussions are over fundamental issues. Attack ads aren’t the only problem. The comments sections on web pages and blogs are overflowing with bitterness. The mood expressed there shows such heightened signs of technological influence, it seems ripped from the pages of the Marshall McLuhan playbook: the medium of communication is influencing the messages people send and receive. The best solution, then, might be for magazines, newspapers, and blogs to address the root problem by hacking the source: re-designing the structure of the forum to encourage civility. Before considering whether we want to go there, let’s quickly review why the medium matters.
At Scientific American, the hyperbolically titled “Why Is Everyone on the Internet So Angry?” asked why so many readers post hostile and rude comments on controversial Web stories. The answer? A “perfect storm of factors”: anonymity lessens personal accountability; distance from our conversation partners makes us treat them as abstractions, not human beings; it’s easier to be mean to someone when addressing them through writing rather than through speech; armchair commentary provides a false sense of accomplishment; and, a lack of real-time flow in the conversation encourages monologues.
“The Philosophy of the Technology of the Gun“
The Atlantic July 23, 2012
The tragic Colorado Batman shooting has prompted a wave of soul-searching. How do things like this happen? Over at Wired, David Dobbs gave a provocative answer in “Batman Movies Don’t Kill. But They’re Friendly to the Concept.” I suspect Dobbs’s nuanced analysis about causality and responsibility won’t sit well with everyone.
Dobbs questions the role of gun culture in steering “certain unhinged or deeply a-moral people toward the sort of violence that has now become so routine that the entire thing seems scripted.” But what about “normal” people? Yes, plenty of people carry guns without incident. Yes, proper gun training can go a long way. And, yes, there are significant cultural differences about how guns are used. But, perhaps overly simplistic assumptions about what technology is and who we are when we use it get in the way of us seeing how, to use Dobbs’s theatrical metaphor, guns can give “stage directions.”
“What Happens When We Turn The World’s Most Famous Robot Test on Ourselves?“
The Atlantic June 20, 2012
This weekend marks the centenary of Alan Turing’s birth. Turing was one of the greatest computer scientists of all time. In a 1950 paper that outlined what has come to be known as the Turing Test, he offered a way out of endless philosophical speculation about whether computers could ever be classed as ‘intelligent.’ He said that if human judges ask interview questions of a hidden computer and a hidden person and cannot tell the difference after five minutes, the computer should be considered intelligent. Nowadays, programmers compete yearly for the Loebner Prize, which is won by the computer that is most often mistaken for a human.
But the Turing Test’s application is no longer limited to questions of artificial intelligence: Social scientists too are getting in on the action and using the test in a completely new way — to compare different human subjects and their ability to pass as members of groups to which they do not belong, such as religious and ethnic minorities or particular professional classes. With the Turing Test, sociologists can compare the extent to which subjects can understand people who are different from them in some way.
“Why It’s OK to Let Apps Make You a Better Person”
The Atlantic March 9, 2012
In article after article, one theme emerges from media coverage of our relationships with current technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification–nudging, the quantified self, and gamification–and good old-fashioned financial incentives are tackling weakness of will. They’re harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.
Skeptics might believe that while this trend will grow as significant gains occur in developing wearable sensors and ambient intelligence, it doesn’t point to anything new. After all, humans have always found creative ways to manipulate behavior through technology–whips, chastity belts, speed bumps, and alarm clocks all spring to mind. So, whether we’re living in unprecedented times is a matter of debate, but the trend still has multiple interesting dimensions.
“Why Occupy Wall Street is So Hard to Understand”
The Atlantic December 01, 2012
Our view is that the protesters should not be expected to offer clear public-policy guidance. In this essay, we’ll lay out why. The Occupy protesters have succeeded in creating emotionally impacting, ethically guided collective action. But they’re constrained by the forms of expressing political dissent that Americans are familiar with. Most people are habituated to express their political dissent by way of a limited number of options: individualism, token gestures of solidarity, joining an existing campaign, and partaking in a standard form of political participation. None of those forms are on the scale of action needed to deal with the problems to which OWS has addressed itself.
In other words, Occupy protesters have to create not just a set of demands, but a set of new ways of demanding. That sort of social experiment requires breaking from the status quo to find new leverage points on existing power structures. That’s what Occupy has attempted to do. This new type of emotionally impacting, ethically guided collective action is not incoherent, but it may be illegible.
“Google Files Creepy Patent to Automate Your Social Media Voice”
Slate Dec. 3, 2013
Who has time anymore to manage their social media feeds? All the status updating, replying, and posting of smart takes on the day’s news is exhausting. Well, Google wants to help you out with that: The company recently submitted a patent for software that learns how users respond to social media posts and then automatically recommends updates and replies they can make for future ones. Consider it outsourcing for your social life: an amped-up, next-gen blend of automated birthday reminders and computer-generated, personalized remarks (more successful Turing Test than random word salad).
“Humans are Already More ‘Enhanced’ by Technology than We Realize“
Slate October 3, 2013
Time recently ran a cover story titled, “Can Google Solve Death?” The wording was a bit much, as the subject of the piece, Google’s new firm Calico, has more modest ambitions, like using “tools like big data to determine what really extends lives.” But even if there won’t be an app for immortality any time soon, we’re increasingly going to have to make difficult decisions about when human limits should be pushed and how to ensure ethics keeps pace with innovation.
“When Nudge Comes to Shove” (re-print of New Scientist article)
Slate July 7, 2013
“Why We Need New Rights to Privacy”
Slate Nov. 2, 2012
Thanks to the real estate website Zillow, it’s now super easy to profit from your neighbor’s suffering. With a few easy clicks, you can find out “if a homeowner has defaulted on the mortgage and by how much, whether a house has been taken back by the lender, and what a house might sell for in foreclosure,” as the Los Angeles Times recently reported. After using the service, you can stop by the Johnsons’ to make them a low-ball offer, perhaps sweetening the exploitation with a plate of cookies.
Maybe that’s not fair. Zillow doesn’t let people opt out, but the company omits borrowers’ names, has a process for correcting mistakes, and uploads only legal information that was previously—albeit inconveniently—available.
“How to Make a Spy Exhibit Boring”
Co-authored with John Mix
Slate October 10, 2012
In the status update age, it may be hard to believe, but not every aspect of technology should cater to you and your experience. Museums are especially vulnerable to the dangers of user-centrism, and pressure is increasing for them to embrace the experience economy by offering interactive exhibits that “come alive.” Sure, visitors are learning and having fun, but in the long run, this attitude may threaten the durability of collections—you know, the reason why you go to the museum in the first place.
Today’s museums give the entertainment industry a run for its money. According to the Horizon Report: 2011 Museum Edition—a joint effort between the Marcus Institute for Digital Education in the Arts and the New Media Consortium—there’s an all-out digital love fest going on. In the near-term, its authors expect extensive efforts to focus on mobile apps. In the next two to three years, they predict “wide-spread adoptions” of augmented reality. Four to five years down the road, emphasis should shift to digital preservation (looking for ways to “future-proof” digital objects) and smart artifacts that “blur the line” between digital and physical things.
“Future of Privacy Forum Director: Browser Settings Should Be As Easy to Navigate as a Car“
Slate August 23, 2012
We’re all concerned about privacy, but have a hard time separating hype from fact, hysteria from reasonable concerns, and peripheral from main issues. For insight into what’s really going on, I spoke with Jules Polonetsky, director and co-chair of the Future of Privacy Forum, a Washington, D.C.-based think tank that seeks to advance responsible data practices. His résumé includes a period of citizen advocacy, with Jules serving as legislative aide to Rep. Charles Schumer and as NYC consumer affairs commissioner under Mayor Rudolph Giuliani. Polonetsky has also worked in consumer advocacy for AOL.
“Why Do We Love to Call New Technologies ‘Creepy’?”
Slate August 22, 2012
What if you walked into a bar and everybody knew your name—except you’d never been there before?
A couple of weeks ago, we were introduced to Facedeals, which integrates Facebook’s APIs with facial recognition technology. When you enter a store, restaurant, or bar that uses Facedeals, your mug will be scanned so that you can be offered special deals and get automatically checked in to the location. “Creepy,” tech sites RedOrbit and TechCrunch both labeled it. That’s not surprising.
Creepy is the go-to term for broadcasting how technology unsettles us. Time and time again we’re asked to think in binary terms and identify a device or app either as good or its polar opposite, creepy. Although we’re often led to believe that creepy is an emotional response to things going horribly awry, our creepy radar isn’t nearly as reliable as Peter Parker’s danger-determining spider-sense.
“Digital Jiminy Crickets”
Slate July 13, 2012
co-authored with Thomas Seager
As if we didn’t already have enough reasons to distrust Wall Street, a new study finds that a troubling number of financial services professionals would rather bury a moral compass than use one. Twenty-four percent of participants attested that “unethical or illegal behavior could help people in their industry be successful.” Would Main Street be better off if this greed were curtailed by behavioral-steering technology—digital Jiminy Crickets?
In the classic story Le avventure di Pinocchio, Pinocchio learns that the essential difference between machines (he is, after all, an animated puppet) and real people is moral conscience. Though insignificant in Collodi’s novel, Jiminy Cricket serves as an external moral compass for Disney’s Pinocchio, following our hero through his adventures to tell him right from wrong. Pinocchio only develops moral maturity when he frees himself from the cricket’s advice and grasps how to make ethical decisions on his own.
“Was Hitler a Bully? Teaching the Holocaust to Kids“
Slate April 20, 2012
Should I allow my 5-year-old daughter to embrace the world of Disney, or break Prince Charming’s spell by pointing out that royalty got awesome castles by exploiting poor serfs? Answers to questions like this define a parent’s outlook on what childhood should be like. Despite my exposure to critical gender studies, I generally encourage my daughter to get her politically incorrect princess on. So, imagine my dismay at discovering that her kindergarten class planned to commemorate Yom Hashoah (Holocaust Remembrance Day) by discussing a person called “Bully Hitler.”
To be fair, the teachers did their best when comparing the worst criminal in history to a playground tormentor. By combining Chrysanthemum, a story about a young girl bullied because of her unusual name, with the forest-animal tale Terrible Things: An Allegory About the Holocaust, no traumatic detail was ever uttered. Nobody mentioned concentration camps filled with emaciated prisoners and flesh incinerating ovens. And that’s a good thing, because 5- and 6-year-olds just can’t grasp the complexity of the Holocaust.
“The Technologically Enhanced Memory”
Slate February 13, 2012
According to a recent study, memory’s sharpness deteriorates earlier than we presumed: Forty-five is the new mental 60. Fortunately, there are practical ways to enhance mental agility: exercise, healthy diet, sufficient rest, learning new things. Increasingly, technology will play an important role in preserving cognitive function. From the sanctioned war on Alzheimer’s to widespread off-label use of Ritalin, Adderall, and Modafinil, one thing is clear: We’re intent on getting our memory enhancement on.
Ubiquitous information and communication technology is a major player in the memory enhancement game. I’m not alluding to products that target impairments, like the iPhone app for combating dementia. Rather, I mean commonplace software that people use to make recall less taxing, more extensive, or easier to visualize.
“How’d My Avatar Get Into That Sneaker Ad?”
Slate January 4, 2012
Co-authored with Shaun Foster
Let’s play a game—thought experiment. Imagine it’s the near future. You’re walking along a city street crowded with storefronts. As you walk past boutiques, cafes, and the Apple Store, your visage follows you. Thanks to advances in facial recognition and other technologies, behavioral marketers have developed the capacity to take your Facebook profile, transform it into a 3-D image, and insert it into ads. That sweater you’re eyeing? In the display, the mannequin wearing it takes on your face and shape. The screen showing a car commercial depicts you behind the wheel. At a travel agency (let’s pretend they still exist—after all, this is a thought experiment!), you see yourself sunning on a beach, while the real you is bundled up against the cold. The ads might show you with an attractive stranger or a lost love (after all, Facebook knows whom you used to date). Or they could contain scenes of you and your happy family. No longer do you have to picture yourself in the ad—technology has that covered.
Although the technology in our thought experiment doesn’t yet exist, many of the necessary components already do. There is Autodesk 123D Catch, a program that uses computer vision technology to transform simple photographs into 3-D objects. Facebook has its own recognition tools to help users identify and tag photos. Video games generate avatars using sophisticated motion capture techniques.
Slate November 8, 2011
Co-authored with Thomas Seager
For the most part, media coverage of the Occupy Wall Street protest has been predictable. Stories are narrated according to the pro/con structure typical of—depending on whom you ask—balanced reporting or sensationalism. On the one hand, positive focus sympathetically explains why protesters have been demonstrating en masse since Sept. 17. These accounts place the activist mantra of “We are the 99%” in a historical and economic context that connects significant inequalities in wealth to violations of justice that should prompt people of conscience to demand rectification. On the other hand, negative reports argue against interpreting the protest as legitimate civil disobedience. Detractors’ opinions range from indictments of individual work ethic—contending that the problem at issue is poor individual decisions, not dysfunctional systems—to indignation over an unclear protest agenda that allows Dionysian energy to manifest in this millennium’s Woodstock.
“Talking Privacy With The ACLU’s Jay Stanley”
October 19, 2014
In a previous post, I mentioned that exciting speakers are making guest appearances in my current “Technology, Privacy, and the Law” course. Jay Stanley, Senior Policy Analyst at the American Civil Liberties Union, just dropped by via Skype. The conversation was so interesting that I wanted to share some of the highlights with you here.
“Can Predictive Technology Make Us Less Predictable?”
September 27, 2014
Over at the NY Times, Anna North asks if we can become more creative by using an unusual search engine called Yossarian that purports to help us see things in new ways—ways that go beyond the predictable associations we’re inclined to make when thinking about people, things, ideas, events, etc. What fascinates me about this possibility is that in order for it to be true, prediction needs to be the antidote to predictability. Without inferring where your mind is prone to wandering, neither a person nor an algorithm stands a chance of presenting something to you in a new light.
“Why It’s Too Easy To Dismiss Technology Critics: Or, The Fallacies Leading A Reviewer To Call Nicholas Carr Paranoid”
September 19, 2014
Over at the LA Times, Maria Bustillos has a harsh review of Nicholas Carr’s new book, The Glass Cage: Automation and Us. Referring to Carr as one “of the Information Age’s chief scaredy-cats,” Bustillos characterizes his latest endeavor, an explanation of problems with automation, as expanding “the field of his paranoia to computers in general.”
Sure, Carr’s last book, The Shallows: What the Internet Is Doing to Our Brains, stirred up lots of debate about whether Google is making us stupid. Some said no and decided he’s too pessimistic—a negative judgment that’s absolutely appropriate to reach. You certainly can have good reasons for believing that Carr’s conclusions aren’t supported by all of the research he musters. But ‘paranoia’ has connotations of irrationality and delusion. It’s an unfair association when applied to Carr. It’s particularly troubling because versions of the rhetoric are routinely applied to technology critics to unduly strip their skepticism of legitimacy.
“Why Smart Phones Should Help Us Avoid Selfie Sabotage”
With Woodrow Hartzog
Forbes September 10, 2014
Over at the New York Times, Farhad Manjoo argued that smart phones should be designed to better protect people from the harms that can arise when their nude selfies end up in the wrong hands. Manjoo’s proposal entails nudging, and consequently has greater moral complexity than meets the eye. We think it’s a good and important idea, and will explain why to help make the case more persuasive.
“Two Reasons Why Extreme Social Surveillance Doesn’t Replace Privacy”
With Woodrow Hartzog
Forbes September 1, 2014
More than a few people maintain that if we all knew everything about each other, the world would be a better place. The total transparency argument takes many forms, and shades of it can be seen in surveillance policy and discourse holding that “more information is always better than less information” and that information asymmetries should always be remedied by more disclosure and surveillance, not less.
“Why Predictive Shopping Might Be Bad For The Future”
Forbes August 21, 2014
Harvard law professor Cass Sunstein presents some disturbing statistics about “predictive shopping” in his NY Times op-ed, “Shopping Made Psychic.” Unfortunately, Sunstein doesn’t emphasize the downside to his findings. Without sufficient critical commentary, it’s too easy to be too optimistic about the wrong way to build the future.
“Why A Philosopher Teaches Privacy”
Forbes August 19, 2014
Next week, the new term begins and I’ll be teaching an undergraduate philosophy course called “Technology, Privacy, and the Law.” The first order of business will be to explain why thinking critically about privacy—determining what it is, deciding when it should be protected, and pinpointing how it ought to be safeguarded—means doing philosophy. Given the practical stakes of these issues, you might not realize that getting into them involves philosophical thinking. But if you’ve got a principled bone to pick with corporate, peer, or governmental surveillance, or if you have good reasons for being displeased with the activists who are taking stands against it, you’ve got your philosopher’s cap on.
“The Trifecta of Roommate Selection Technology: Privacy, Prejudice, and Diversity”
Forbes July 20, 2014
Over at The New York Times, Natasha Singer discusses the pros and cons of universities providing incoming students with online technology that helps them select roommates. She does a great job of identifying salient points. But I think it’s important to augment the story by adding some remarks on privacy and prejudice.
“Why We Should Be Careful About Adopting Social Robots”
Forbes July 17, 2014
Although Jibo, designed by MIT professor Cynthia Breazeal to be the “world’s first family robot,” isn’t set to ship until 2015, folks are already excited about this little bot with a “big personality.” While there’s much to be said for Breazeal’s vision of “humanizing technology” so that the smart home of the future doesn’t “feel cold and computerized,” we might want to pause a bit before rushing to build the type of world depicted in the movie Her. Although it is easy to imagine we’ll be better off when we’ve got less to do, we don’t actually know the existential and social implications of outsourcing ever-more intimate tasks to technology.
“Too Titillating for Twitter: Why Outsourcing Social Media Participation is Disconcerting”
Forbes May 07, 2014
The Los Angeles Times just updated the design of its online edition. One of the new features is called “sharelines,” and it’s basically summaries appearing at the top of articles that readers can click on to instantly tweet out. Even the editor’s super-succinct note introducing the changes begins with three of these talking points!
While this exercise in concise craftsmanship is informative and user-friendly, it’s also got disconcerting overtones. Seen in the larger context of technological development, it’s a wakeup call to examine how often we’re being asked to outsource labor at the expense of living up to our potential.
“Why Goal Tracking Apps Are Existentially Provocative”
Forbes April 9, 2014
Normally, if you asked me what comes to mind when I hear words like “productivity app” and “life hack,” you’d be treated to an all-out vent session: a combination of skepticism and cynicism directed at overly hyped products, overesteem for efficiency, and overblown attempts to delegate responsibility and willpower. But then I read a gushing review of Full, an app for tracking and measuring “what’s important to you.” I actually think it’s a good product and an excellent prompt for thinking about why goal-tracking apps are so existentially provocative.
“Coping With Unsafe Campuses: Maybe Phones, Not Guns”
Forbes March 9, 2014
College can be a wonderful experience. But no environment is absolutely safe. Tragically, shootings, date rape, stalking, alcohol-induced fights, and other predatory and violent incidents occur on campuses. Some see guns as the solution—letting students carry firearms to protect themselves. Just look at what’s happening in Minnesota, Idaho, and Oklahoma. Maybe a better way forward, however, is to arm students with a different technology: smartphones loaded with safety apps.
My university, Rochester Institute of Technology, uses TigerSafe (available on Android and iOS) developed by alumnus Eric Irish, currently Founder and CEO of CampusSafe, LLC. The app provides three functions: inform, report, and assist.
“Watching You Play: Can A Dystopian Video Game Help Us Better Appreciate Privacy?”
Forbes March 4, 2014
The biggest challenge for privacy advocates is getting people to appreciate why privacy matters even if they don’t have anything to hide. Those of us who feel strongly about the topic tend to lean on arguments Daniel Solove made in a seminal article back in 2011. But there are other ways to explore the thesis that take us beyond privacy theory. Dystopian fiction is a powerful vehicle for considering the consequences of a society that places too much value on transparency and over-sharing. So are dystopian video games, as evidenced by the demo of Nicky Case’s “Nothing to Hide: An Anti-Stealth Game Where You Are Your Own Watchdog”—a crowd-funded and open-sourced endeavor (both code and art).
“Why App Developers May Be Selling Their Souls To Apple And Google”
Forbes February 8, 2014
The app economy is booming. Back in May, Apple noted customers are downloading “more than 800 apps per second at a rate of over two billion apps per month on the App Store.” While this massive market reflects consumer taste at a time when smartphones and tablets are ubiquitous, a dark side also clouds consumer consequences. With respect to games alone, we hear recurring stories of exploited kids, adults being tricked into “doing something against their will,” and questionable privacy practices.
“Inside Google’s Mysterious Ethics Board“
with Patrick Lin
Forbes February 3, 2014
The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
Is the deal linked to Google’s buying spree of seven robotics companies in December alone, including Boston Dynamics, “a company holding contracts with the US military”? Is Google building an unstoppable robot army powered by AI? Does Google want to create something like Skynet? Or, is this just busybody gossip that naturally happens in an information-vacuum? The deal could simply be to improve search engine functionality.
“5 Ways to Avoid Being Suckered by Unreliable Information“
Forbes January 25, 2014
Without “noise makers”—folks spreading rumors, false information, hoaxes, and hearsay—markets and the blogosphere might grind to a halt. But as Vincent Hendricks argues in “When Twitter Storms Cause Financial Panic,” information bubbles can be immensely destructive. They can hurt the economy and damage society.
There’s no surefire way to use new media and only consume “correct information and convincing arguments.” Any consultant who tells you otherwise is, at best, exaggerating. Fortunately, there are simple things we all can do that can make a big difference. I reached out to Hendricks, Professor of Formal Philosophy at the University of Copenhagen and co-author of the new book Infostorms: How to Take Information Punches and Save Democracy. He offered the following basic recipe for determining if you’re stuck in an information bubble.
“Why Grandma Shouldn’t Have Posted Instagram Pics on Facebook”
Forbes January 7, 2014
Co-authored with Woodrow Hartzog
A well-intentioned grandmother accidentally hurt her grandkids’ feelings. She took screenshots of their delightful Instagram photos and proudly uploaded them to Facebook for all of her social network friends to see. If the younger generation didn’t set their accounts to private, could Grandma possibly have committed a faux pas? All she did was lovingly pass along publicly available information!
“Keep on Tweeting, There’s No Techno-Fix for Incivility or Injustice”
Forbes January 2, 2014
It would be nice to believe that the road to civility could be paved by following simple formulae, like Frank Bruni’s New Year’s exhortation, “Tweet less, read more”. Unfortunately, uncomplicated Op-Ed advice doesn’t translate into effective results in the messy real world.
“Why Debating Apple’s “Misunderstood” Ad is an Amazing Holiday Gift”
Forbes December 23, 2013
Apple’s latest television ad, “Misunderstood,” is leaving viewers with impassioned and conflicting interpretations. Giving Talmudic treatment to a short commercial might seem like overkill, especially given the Christmas theme. But I think we’re lucky the narrative has become a Rorschach test for discussing the social and ethical impact of technology.
“What You Don’t Say About Data Can Hurt You”
Forbes November 21, 2013
Co-authored with Woodrow Hartzog
Big data generates big myths. To help society set realistic expectations, the right kind of skepticism is needed.
Kate Crawford, Principal Researcher at Microsoft Research and Visiting Professor at MIT’s Center for Civic Media, does a fantastic job of explaining why folks are too optimistic about the promise of what big data can offer. She rightly argues that too much faith in it inclines us to misunderstand what data reflects, overestimate the political efficacy of information, and become insensitive to civil rights concerns.
“The Chilling Implications of Democratizing Big Data: Facebook Graph Search is Only the Beginning”
Forbes October 16, 2013
Co-authored with Woodrow Hartzog
While privacy advocates have expressed concern about the phenomenon of massive data collection and analytics colloquially known as “big data,” most people are more familiar with social media anxiety, like inappropriate Facebook posts leading to embarrassing and reputation-ruining incidents. This situation is likely to change, and in the near future society will have to confront a profound question.
What happens when everyone can get their curious, envious, and outraged hands on increasingly powerful surveillance tools and correlation-creating algorithms that have high predictive value, powerful aggregation potential, and can be put to discriminatory, manipulative, and exploitative use?
“What is the Right Balance for Protecting Privacy and Promoting Accountability on the Internet?”
Forbes September 27, 2013
Co-authored with Woodrow Hartzog
According to NPR, 300-plus teenagers broke into former NFL player Brian Holloway’s vacation home, causing massive damage and showcasing their exploits on social media. In response, Holloway created a website, helpmesave300.com, that collects the alleged culprits’ social media posts. He claims this repository has enabled teens to be identified, and that the growing list of names is “being turned over to the sheriffs (sic) department to assist them to verify and identify the facts.”
“Does My Daughter Need to Grow Up Because Selena Gomez Did?“
Huffington Post, March 15, 2013
For the past few weeks, my six-year-old daughter has been obsessed with Selena Gomez reprising her role as Alex Russo on the Disney show Wizards of Waverly Place. Like many of her friends, Rory has seen every episode of Wizards and religiously listens to Selena’s music. While Alex–like so many of the current Disney lineup–is a snarky character, we haven’t had to worry much about the consequences of Selena fandom until now, when the complications of online information are smacking us in the face.
“What Sci-Fi Can Teach Us About The Present and Future of Information”
Huffington Post, January 24, 2013
Combine growing attachment to smartphones with advances in cutting-edge goggles (think Google Glass), and what do you get? Acceptance of augmented reality (AR), which supposedly became ready for “prime time” last year. With the technology out of the incubator and in our living rooms, Silicon Valley’s mouthpieces are becoming increasingly comfortable generating hype about the exciting new world it will create. Get ready, they say, for a “more information-rich, more navigable, more interesting, more fun” existence.
Equating more with better is an old advertising trick. The message is so deeply burrowed in our psyches that it sounds less like Madison Avenue and more like an ancestral call. Is it shallow? Yes. Is it easy to pick apart in academic discussions and stern parental lectures? Sure. Does it reek of the idealistic Internet coverage that we’ve been long bombarded with? Absolutely! But, let’s face it. The ideal wouldn’t persist if it didn’t work. We’re suckers for the supersized.
“Would Outsourcing Our Morality Diminish Our Humanity?”
Huffington Post, September 19, 2012
My colleague Thomas Seager and I recently co-wrote “Digital Jiminy Crickets,” an article that proposed a provocative thought experiment. Imagine an app existed that could give you perfect moral advice on demand. Should you use it? Or, would outsourcing morality diminish our humanity? Our think piece merely raised the question, leaving the answer up to the reader. However, Noûs—a prestigious philosophy journal—published an article by Robert J. Howell that advances a strong position on the topic, “Google Morals, Virtue, and the Asymmetry of Deference”. To save you the trouble of getting a Ph.D. to read this fantastic, but highly technical piece, I’ll summarize the main points here.
“Impatience as a Digital Virtue“
Huffington Post September 6, 2012
Apple’s Siri commercials promise a perfectly anthropomorphized digital assistant; a virtual, voice recognition secretary programmed to serve every scheduling and questioning whim by celebrity and average citizen alike.
But what — beyond a willingness to endure gentle caricature — does Siri ask of us in return? The superficial answer is little but consumption: purchasing iPhones and data plans. But Michael Schrage, Research Fellow with the MIT Sloan School’s Center for Digital Business, argues the superficial answer misses something important. Siri, like other increasingly popular and pervasive technologies, asks us to participate in a fundamental re-design of our social sensibilities.
“Lab Rats in the Social Experiment of Personalized Advertising“
Huffington Post August 29, 2012
Advances in biotechnology, nanotechnology, and nuclear energy have turned society into what Dutch ethicist Ibo van de Poel calls a large-scale laboratory for experimenting with the unforeseen consequences of new technologies. In comparison, personalized advertising — also called targeted and behavioral ads — doesn’t seem nearly so dangerous. It is easy to believe that the worst that can happen is we’ll buy a few unnecessary things, lose some privacy, or find some content off-limits (as in the case of a new London billboard that uses facial recognition technology to send male and female viewers different information). A more sober look suggests we should be worried about participating in a social experiment that gambles with our human agency and freedom.
“Nietzsche’s Transformative Typewriter”
Cyborgology July 26, 2012
My recent article in The Atlantic, “The Philosophy of the Technology of the Gun,” is provocative in part because it suggests tools like guns might have more power over us than meets the eye. Given widely held views about autonomy (e.g., the notion that “guns don’t kill people, people kill people”), this alternative way of looking at things can cause anxiety, especially when misunderstood and translated into terms like those offered by the first commenter: “Guns are magic mind control machines.” The article presented an account of how humans relate to technology, and to further illuminate those relations, I’ll briefly revisit media theorist Friedrich Kittler’s take on Friedrich Nietzsche’s use of the typewriter. Like my gun essay, this analysis challenges the “instrumentalist” conception of technology.
THREE QUARKS DAILY
“Ultrasound Technology Can Impede Informed Consent”
3 Quarks Daily July 23, 2012
Earlier this year, controversy surrounded ultrasound legislation in Texas, Virginia, North Carolina, and Idaho. Lost in the critical commentaries on abuses of patients’ and physicians’ rights was concern over a fundamental violation of liberty. This issue hasn’t gone away, even though sonogram coverage isn’t currently grabbing headlines.
Medical experts routinely use ultrasound technology in ways that favor the Right to Life agenda, even in states that don’t have mandatory ultrasound laws. This problem goes unnoticed because the potential harm caused by the medical community is not the result of political ideology. Rather, it arises from inadvertent exploitation of patients’ natural human weaknesses and cognitive tendencies. To understand why, we need to grasp how typical conversations about ultrasound images can impede rather than foster informed consent.
“Are Millennials Less Green Than Their Parents?”
3 Quarks Daily May 28, 2012
Co-authored with Thomas Seager and Jathan Sadowski
A highly publicized Journal of Personality and Social Psychology study depicts Millennials as more egoistic than Baby Boomers and Generation Xers. The research is flawed. The psychologists fail to see that kids today face new problems that weren’t previously imaginable, and are responding to them in ways that older generations misunderstand.
The psychological study seems persuasive largely because the conclusions are supported by massive data. Investigators examined two nationally representative databases (Monitoring the Future and American Freshman surveys) containing information provided by 9.2 million high school and college students between 1966 and 2009. Such far-reaching longitudinal analysis seems to offer a perfect snapshot of generational attitudes on core civic issues.
“Peace Prize for Homeless Hotspots”
3 Quarks Daily April 02, 2012
When the media discovered the Homeless Hotspots “charitable experiment,” it responded with a torrent of moral condemnation. Critics wasted no time denouncing the initiative as a publicity stunt that cruelly objectified homeless people as technological infrastructure. Instead of equating the initiative with exploitation, perhaps we should start a movement advocating that Saneel Radia, head of innovation at BBH Labs, be given a Nobel Peace Prize. After all, in 2006 Muhammad Yunus and the Grameen Bank received one largely for helping create the Village Phone program—an initiative praised for hiring impoverished and marginalized women as “Phone Ladies.”
On the surface, Homeless Hotspots looks like a typical conscientious enterprise. BBH Labs, a private company, partnered with Front Steps, a non-profit shelter located in Austin, Texas, to create a new business niche. Building off the model of employing homeless people to sell newspapers on the street, the Homeless Hotspots participants (sometimes reported as totaling 13, other times tallied at 20) offered 4G Internet to South by Southwest Interactive Festival attendees in exchange for $20 per day plus customer donations. The suggested donation—which anyone could refuse to pay—was $2 for every 15 minutes of Internet use. BBH Labs, however, claims to have guaranteed the participants would earn a minimum of $50 per day.
CORRIERE DELLA SERA
“L’anima delle cose”
Corriere della Sera July 1, 2012
The aliens are here. So says Ian Bogost — internationally renowned media scholar, game designer, and professor at the Georgia Institute of Technology — in his new book, Alien Phenomenology, or What It’s Like to Be a Thing. But in speaking of aliens, Bogost does not mean Martians. His essay concerns the secret life of things — large and small, high and low tech, common and rare, old and new.
Bogost worries that our mastery of technology has come at a steep price. We rarely look at objects, devices, and mechanisms with a sense of admiring wonder. Emboldened by powerful scientific instruments and technical knowledge, we arrogantly overestimate the governing reach of human activity.
“E i cacciatori di dati personali costruiscono spot mirati“
Corriere della Sera March 06, 2012
Advances in biotechnology, nanotechnology, and nuclear energy have transformed society into what Dutch ethicist Ibo van de Poel calls a vast laboratory for experimenting with the unforeseen consequences of new technologies. By comparison, personalized advertising — also known as targeted or behavioral advertising — seems far less dangerous. It is easy to believe that, at worst, we will buy a few useless items, lose a bit of our privacy, or face limits on the information we can access (as in the case of a new London billboard that uses facial recognition technology to send different messages to male and female viewers). A more sober analysis, however, counsels greater caution about letting ourselves be drawn into an experiment that puts our choices and our freedom at stake.
We lower our guard around advertising because it is familiar. It appears as far back as the papyri of ancient Egypt, and it exists in nature too, where peacocks, for example, display their feathers to attract mates. We are further reassured by the fact that regulators and consumer-rights associations work tirelessly to protect privacy and guarantee the ability to opt out of unwanted solicitations. Unfortunately, however, more is at stake than meets the eye.