The US government is stepping up efforts to counter China's ability to obtain private data on American citizens through acquisitions of information technology companies. In January 2018, for example, the Chinese technology company Kunlun completed its acquisition of Grindr, a California-based gay dating app. In March 2019, however, the deal was unwound by the Committee on Foreign Investment in the United States (CFIUS), an interagency panel with the power to block foreign transactions that adversely affect US national security. According to some experts, CFIUS aimed to prevent the Chinese government from exploiting sensitive data on sexual behavior to blackmail US citizens. A few weeks later, CFIUS forced a different Chinese company to divest its stake in an app that collects health information, confirming that concerns about personal data are gaining weight in the review process.
These steps are welcome, but they risk missing a larger point: American thinking about online privacy needs an overhaul. Restrictions on investment and trade on national security grounds are sharp but costly precision instruments, best reserved for complex cases. Most worries about foreign entities prying into American lives could be assuaged by strengthening everyday digital rights, a move whose benefits would extend beyond security.
The European Model
In 2017, the United States ranked only 53rd out of 64 countries in terms of legal restrictions on data flows, use, and access, according to the European Centre for International Political Economy (ECIPE), a Brussels-based think tank. Predictably, the ranking was topped by China, Russia, and Turkey, where state control over the internet is pervasive. Perhaps more surprisingly, France came in fourth and Germany seventh, and that was before the 2018 General Data Protection Regulation (GDPR), a strict privacy law that applies throughout the European Union (EU). Today, every country in the bloc would rank quite high on the ECIPE scale.
Unlike authoritarian states, the EU does not censor online content, nor does it grant law enforcement agencies access to personal data without a court order. Unlike the United States (US), however, it sets general limits on what information businesses can collect from individuals and what can be done with it. Where the US has so far opted for minimal regulation, the EU pioneered a model predicated on the idea that citizens should enjoy the same rights online that they do offline; as threats to these rights evolve with technology, new protective measures are needed.
The GDPR introduces a data minimization principle whereby the collection of personal data must be "limited to what is necessary in relation to the purposes for which they are processed." Such purposes must be "specified, explicit and legitimate" and transparently communicated to the data subject. Collecting sensitive information, e.g., on political opinions, ethnicity, or health, requires explicit consent. Individuals also have the right "not to be subject to a decision based solely on automated processing" that produces legal effects concerning them.
Most important from the security standpoint, companies of any nationality that collect information on EU citizens are forbidden to transfer it outside the bloc, unless the destination country has been whitelisted by the European Commission[1] or—for non-whitelisted countries—unless the data collector can ensure "appropriate safeguards" and "enforceable data subject rights and effective legal remedies are available." Whitelisting is based on a review of a country's privacy laws, but also on its record on "the rule of law, respect for human rights and fundamental freedoms."
Embracing Digital Rights
The digital rights model has sometimes been criticized in the US because it can cause economic losses. Limitations on data processing and resale curb corporate profits. Restrictions on cross-border data flows hamper trade. As the value of information grows with the digital economy, these are legitimate objections, but they fall short of being convincing on three counts.
First, the growing tension between internet users and dominant technology companies shows that there is unmet demand for stronger digital rights in the US as well. American consumers are not happy about platforms sharing their information far and wide; if they did not complain until recently, it is because they did not know. The current legal landscape is fragmented: California has a tough privacy law, and every US state requires data collectors to notify data subjects when a breach exposes their information, but no overarching principles of data protection exist at the federal level. The recent congressional hearings (in the House and Senate) should be swiftly followed by legislation. Failing to act may deepen mistrust of digital services and slow their adoption, with economic fallout potentially much larger than any losses caused by restrictions on data use.
Second, several countries, including Brazil, Canada, Japan, South Korea, and Thailand, have adopted or will soon adopt the GDPR as a model for their own privacy legislation. If adequate privacy protection in the recipient country becomes a condition for cross-border data flows, the US has two options, assuming it does not want to forgo the gains from trade. It may strike a series of bilateral agreements, as it did with the EU, essentially committing to treat foreign users better than domestic ones. Already debatable as a matter of principle, this solution would also create a tangled web of country-specific obligations. Only the biggest corporations would likely manage to keep up, to the detriment of smaller businesses in sectors that are already highly concentrated. Alternatively, the US could write its own GDPR, tailored to the needs of the American market yet in line with emerging international standards. Domestic companies would still have to make changes to comply, but under uniform rules compliance would be easier, and it could also be largely automated.
Finally, many internet users may tolerate a domestic email provider reading the text of their private electronic communications in exchange for services such as automated replies, but hardly anyone wants to share personal data with foreign intelligence services. The digital rights model still allows automated replies, conditional on consent. By contrast, it prohibits by default the transfer of information to places that do not offer sufficient guarantees against unauthorized data access. As data-intensive markets become more global, restricting data flows to countries that refuse to prevent breaches, including those perpetrated by their own governments, would set a needed new standard. It would establish a protection baseline that courts could enforce against all companies operating domestically, including those headquartered overseas, and would remove the need for CFIUS to intervene with every new app.
[1] At the time of writing, the European Commission had whitelisted, partially or fully, 13 jurisdictions. The United States was included, but only with respect to entities certified under the Privacy Shield framework.