
To solve the Facebook problem, think big (data)


When Congress grilled Mark Zuckerberg about Facebook’s disclosure of the personal data of 87 million of its users without their specific consent, it called for more individual control over personal data. Two senators asked whether Facebook would support their proposed Consent Act, which would require companies to provide such control.

But the disclosure of Facebook users’ personal information is only half the problem. The other half is what Cambridge Analytica did with this data. It used big data analytics to infer the personality type of each of these users and then sent them psychologically manipulative political ads.

More consent rights will not solve this second problem. Most people do not understand how big data analytics works, which prevents them from making informed choices about whether to subject their personal information to it. To protect people in the age of big data, we need more than individual control. We need substantive rules that spell out which big data practices are permissible, and which are not.


To appreciate this distinction it helps to review exactly what happened. Researcher Aleksandr Kogan convinced 270,000 Facebook users to take a quiz that revealed their psychological type and, in addition, to share their Facebook likes and interests. Facebook’s default settings at the time also gave him access to the data of the users’ 87 million friends. He sold this data goldmine to Cambridge Analytica.

Cambridge Analytica took the initial 270,000 quiz takers’ personality types and Facebook data and looked for patterns. What do neurotic people tend to “like” on Facebook? What do conscientious people “like”? Once it had the patterns, it was able to apply them to the 87 million friends in order to infer their personality types and send them manipulative political ads.
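In machine-learning terms, this is a standard supervised-learning pattern: train a model on the labeled quiz takers (their likes as inputs, their quiz-derived personality type as the label), then apply that model to people who never took the quiz but whose likes are visible. The sketch below, in Python with scikit-learn, is a minimal illustration of that pattern using invented toy data; the likes, labels, and model choice are assumptions for illustration, not Cambridge Analytica’s actual pipeline.

```python
# Minimal sketch of the inference pattern described above (toy data, not
# Cambridge Analytica's actual code): learn personality type from page
# "likes" on a small labeled set, then predict it for users who never
# took the quiz.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled data: quiz takers' likes and their quiz-derived type.
quiz_taker_likes = [
    "philosophy meditation indie_music",
    "gym competition entrepreneurship",
    "true_crime late_night_forums conspiracy_pages",
]
quiz_taker_types = ["open", "conscientious", "neurotic"]

# Turn each user's set of likes into a simple bag-of-likes feature vector.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(quiz_taker_likes)

# Fit a classifier that maps like-patterns to personality labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, quiz_taker_types)

# "Friends" who never consented to the quiz, but whose likes are visible.
friend_likes = ["late_night_forums conspiracy_pages gym"]
X_friends = vectorizer.transform(friend_likes)

# The model infers a hidden attribute the friends never disclosed.
print(model.predict(X_friends))
```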

Cambridge Analytica was using big data analytics, a technology that analyzes massive data sets in order to construct profiles, and then employs these profiles to infer additional, hidden data. Big data analytics can be used for good. For example, it can tell us which medicines will work for particular patients, or which educational strategies are best for specific students.

But it can also infer people’s hidden vulnerabilities so as to take advantage of them. Cambridge Analytica’s hard-to-resist political ads demonstrate this. In another example, a for-profit college identified “isolated,” “impatient,” “low self-esteem” individuals who felt “stuck,” and sent them ads for highly expensive, but often fruitless, courses.

Big data also presents another threat. Data analysts have been able to infer from Facebook “likes” not only psychological type but also a person’s skin color, sexual orientation, political party, religious affiliation, and alcohol use. Companies that go beyond “likes” and integrate more data points into the analysis can predict an even larger number of such attributes. This creates the risk that companies will deny loans, jobs or other opportunities based on these inferred qualities. Big data analytics can lead to unfair discrimination.

More individual control will not protect people against these threats. Most users do not understand big data analytics and so would not know that by clicking “yes” they were disclosing not just their visible data (e.g., Facebook “likes”), but also the hidden information that the company can infer from it (e.g., personality type, political party, alcohol use).

We do need consent requirements. But in the big data economy we also need something more. We need substantive rules that define which data practices are appropriate, and which are not. We need not just privacy law, but consumer protection law for the age of big data. Congress largely missed this key point in its April 10-11 hearings.

While substantive rules are a stronger form of regulation than consent requirements, they may be easier to achieve. Both the Federal Trade Commission and the Consumer Financial Protection Bureau already have the power to declare business acts and practices to be “unfair.” They could start, tomorrow, to do so with respect to big data analytics.

The FTC and CFPB have been reluctant to use their unfairness authority in this way. That may stem from Congress’s hostile reaction to the FTC’s expansive use of unfairness in the 1970s.

But times have changed. Indeed, the congressional hearings made clear that the real question is not whether we need regulation to sustain trust in the digital economy, but rather what form that regulation should take.

I have my suggestion: Think big. Don’t just legislate consent rights. Go beyond them and tackle the deeper problem. Congress should instruct the FTC and CFPB to use their existing unfairness authority to prevent manipulative and biased big data business practices. That will build trust in this vital and transformative technology, and so allow us better to enjoy its many benefits. That is the deeper lesson of the Facebook-Cambridge Analytica episode.

Dennis Hirsch is professor of law at The Ohio State University Moritz College of Law, where he directs the Program on Data and Governance, and at Capital University Law School. Follow him on Twitter @DennisDHirsch.
