
Privacy … LOL

Aleph-Bits is Tablet’s new feature about the ethics of digital life. In today’s column: Privacy is over, Facebook already controls all of your personal data, and the only real question is whether you’ll get anything in return.

By David Auerbach
December 20, 2018
Facebook CEO Mark Zuckerberg leaves after the ‘Tech for Good’ summit at the Elysee Palace in Paris, on May 23, 2018. Photo by Ludovic MARIN/AFP/Getty Images

Data privacy is over. That is the lesson of Cambridge Analytica and the ongoing privacy scandals plaguing Facebook, scandals that would plague many other companies if people paid them as much attention. In the wake of reports that Facebook has been letting other tech giants harvest large chunks of its users’ data, and of the lawsuit filed against Facebook on Wednesday by Washington, D.C., over the Cambridge Analytica breach, the bad PR and potential fines still stand little chance of changing the fundamental invasiveness of internet companies. The real question now is whether we, whose data is routinely pillaged, stored, and harvested, are going to receive anything in return for what we are giving up.

When I chronicled the burgeoning world of data collection, consumer profiling, and microtargeting in 2013, the systems were already in place to track, classify, agglomerate, analyze, and exploit our online and offline activities, in both commercial and governmental domains. Nearly six years on, little regulation has materialized, and the loss of privacy has become a fait accompli. Hundreds of companies today hold profiles, of varying depth, on hundreds of millions of people, with little transparency. Due to the monotonic nature of privacy (you can’t gain it, you can only lose it), our current dilemma is not how to keep people’s personal information private but how to cope with the ongoing fact that it is not.

People do not own much of their own data, to begin with. Consider your own personal data: Every day, random people inadvertently collect data on you, though they rarely do anything with it. The person behind me in line at the drugstore may see that I’m buying cold medicine, and therefore know that I’m sick. We are accustomed to little data leaks like that because, in isolation, they don’t make much difference. Who cares if a random person knows that I have a cold, if they remain a stranger and that’s all they know about me?

A private eye who follows me around everywhere and tracks my habits is considerably nosier, because he or she can put that data together to profile me and infer more about my life. The private eye may not even see much more than the random people I run into every day do. Just by watching which businesses I visit, they might figure out, after trips to the therapist and the accountant, that I’m going broke and going through a divorce (I’m not, but since Facebook currently classifies me as “African-American,” accuracy is not the highest priority here), and thus at the kind of moment of unique vulnerability that unscrupulous actors might prey on.

Facebook, Google, and other tracking companies like ComScore function as mechanical private eyes that collect everyone’s data all the time. By centralizing vast quantities of data about me, these companies turn incidental data collection into a more comprehensive invasion of privacy, and I have few options to escape their observation even if I avoid using their services.

Facebook tracks people who do not use Facebook, partly in order to learn more about people who do use Facebook, partly to track advertising responses among non-Facebook users, and partly to encourage more people to create an account. The Economist reports that “Google and Facebook have squashed all competitors in the digital-advertising marketplace. The duopoly commands the majority of the $100bn internet advertising market in America; Vice, BuzzFeed, and Vox combined get less than 1% of that market.”

While Facebook and Google dominate display advertising, the data world is bigger than just ads. More shadowy “data brokers” like Acxiom, BlueKai, and Exelate make it their business to accumulate, analyze, and sell any and all consumer data, helping advertisers target consumers. Your data flies between these companies without your having given much, if any, consent.

Consequently, when more nefarious entities try to take advantage of this ubiquitous consumer surveillance, as with the Cambridge Analytica scandal earlier this year or Russia’s attempt to use Facebook to target African-American communities in the 2016 election, there’s no magic fix to put in place to stop such abuse. Most of the people targeted by Cambridge Analytica or Russia’s IRA had no idea their data was even being collected or used, and while these entities may have uniquely nefarious intentions, their actual transactions with Facebook were absolutely ordinary.

Facebook’s executives often seem oblivious to the sheer sensitivity of their position, as when former head of communications and policy Elliott Schrage wrote of hiring a firm to do opposition research on George Soros, “I believe it would be irresponsible and unprofessional for us not to understand the backgrounds and potential conflicts of interest of our critics.” Coming from a company that has the largest store of personal backgrounds in human history, Schrage’s words read as a veiled threat.

Yet attempts to regulate this data free-for-all have been slow in coming, and the proposals that have surfaced are either toothless (like the Do-Not-Track browser flag) or unenforceably idealistic (as with the EU’s General Data Protection Regulation). Much of the legal and technological theorizing around data privacy fails conceptually because it does not grasp the economic nature of data.

For all the cheerleading around the “information economy” and the blunt fact that more data is now produced every day than was produced in the history of humanity prior to the year 2000, there is scant understanding of the economic properties of information.

The key difficulty lies in data’s nonlinear utility. Unlike most commodities, data vastly increases in value when it can be accumulated in sufficiently huge quantities. Three Google researchers wrote in 2009, “Simple models and a lot of data trump more elaborate models based on less data,” by way of explaining how Google had made massive leaps in search, voice recognition, and translation not through algorithmic innovation but simply by having orders of magnitude more data than anyone up to that point had ever analyzed.

The value of an individual piece of data is highly relative. Facebook can put partial demographic profiles of a thousand people to great use by integrating them into its existing stores; a new company cannot get anywhere near as much utility out of the same data. Because data is more valuable to companies that already hold a great deal of it, those companies will pay more for a particular data set than it would be worth to a smaller or niche player in the same field. Moreover, your data is far less valuable to you than it is to Facebook or Google, so they are in a position to dictate the terms of how you are rewarded for the data accumulated on you. Currently, those terms mostly amount to free, advertising-driven services like Facebook and Gmail.

As a result, there is a natural tendency toward data monopoly or oligopoly. Google and Facebook dominate not only because of their ubiquitous platforms and economies of scale, but because they are hoarders of personal data as much as they are brokers of it. What you provide to Facebook, Google, Amazon, Mastercard, or any other large company is only one facet of the data that comes to be collected and used to profile you. Traditional antitrust remedies, such as breakups, are unlikely to fix the fundamental problem when data is liberally shuffled back and forth between the major players to everyone’s benefit save the consumers’. Two Facebooks will not be any more privacy-friendly or competition-friendly than one Facebook.

Any prospective solution to the unethical promiscuity of data must account for the discrepancy between what data is worth to its nominal owners (the consumers) and what it is worth to the companies collecting and exploiting it, as well as the fundamental and ubiquitous lack of consent in its collection. The collection is unlikely to stop, so regulatory efforts could focus instead on providing more meaningful compensation to those whose information is collected.

One principle to start with should be the radical idea that the use of information about you, provided you are not a public figure, constitutes a use of your actual labor, that is, of the life you have been living. If your personal data is valuable to companies, then the generators of that data should be compensated as well as its collectors, on negotiated terms. The exact mechanism of compensation, whether a tax or a dividend or something else, may vary, but it will be more than merely free internet services that snoop even further into your life. Since the data genie will not be put back in the bottle, the most realistic privacy backstop now available is to require that the original sources of so much of that data, people and the lives they live, be recognized as sources of economic value in and of themselves.

David Auerbach is the author of Bitwise: A Life in Code (Pantheon). He is a writer and software engineer who has worked for Google and Microsoft. His writing has appeared in The Times Literary Supplement, MIT Technology Review, The Nation, Slate, The Daily Beast, n+1, and Bookforum, among many other publications. He has lectured around the world on technology, literature, philosophy, and stupidity. He lives in New York City.