Cybersecurity

Would Data Breach Notification Laws Really Improve Cybersecurity?

Responses to top-down federal dictates are hard to predict.


Equifax (Alex Milan Tracy/Sipa USA/Newscom)

Another month, another major hack. This time, the compromise of consumer credit reporting agency Equifax has exposed the personally identifiable information (PII) of roughly 143 million U.S. consumers (not customers!) to outside groups.

People are understandably furious, and they want solutions. But we should be wary of quick legislative proposals that promise to easily fix our cybersecurity woes. Our problems with security are deep and hairy, and require lasting solutions rather than short-term Band-Aids.

There is no question that Equifax royally botched its handling of the corporate catastrophe. People generally don't have good experiences with credit reporting companies as it is. These companies act as a kind of private surveillance apparatus, collecting data on us without permission to determine what kinds of financial opportunities will be available to us. They often get things wrong, which creates unnecessary headaches for unfairly maligned parties who must prove their financial innocence to a large corporate bureaucracy.

You'd think that a company whose sole purpose is to maintain credible, secure dossiers on people's financial profiles would make security one of its highest priorities and would have a strong mitigation plan in place for the horrible possibility that it did get hacked. You'd be wrong. While the details of what exactly went wrong at Equifax are still coming to light, the company's incident response leaves much to be desired, to say the least. (The fact that the company's Argentine website had a username and password that were both simply "admin" does not inspire confidence.)

Many people feel that Equifax waited too long to notify affected parties (though Equifax executives made sure to cash out just in time). Even then, Equifax didn't reach out to victims directly, but asked people to visit a sketchy domain and enter more PII to determine whether they might be affected, as yours truly apparently was. This kind of arrangement primes people to be vulnerable to phishing scams. Rather than setting up a page clearly associated with Equifax, such as "equifax.com/securityincident2017", Equifax directed people to a separate domain called equifaxsecurity2017.com. Illustrating the perils of such a poorly thought-out arrangement, Equifax itself promoted a phishing scam in its own communications, accidentally sending breach victims to a fake notification site called securityequifax2017.com. You just can't make this stuff up.
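Why does the choice of domain matter so much? A page under the company's established domain can be checked against something people already know, while a brand-new look-alike domain cannot be distinguished from a scammer's copy. Here is a minimal Python sketch of that distinction; it is purely illustrative (the trusted-domain check and the is_on_trusted_domain helper are my own hypothetical, not anything Equifax or regulators actually use):

    from urllib.parse import urlparse

    TRUSTED_DOMAIN = "equifax.com"  # the brand's established domain (assumed for illustration)

    def is_on_trusted_domain(url: str) -> bool:
        """Return True only if the URL's host is the trusted domain or one of its subdomains."""
        host = (urlparse(url).hostname or "").lower()
        return host == TRUSTED_DOMAIN or host.endswith("." + TRUSTED_DOMAIN)

    # A path on the established domain is verifiable against a name users already know:
    print(is_on_trusted_domain("https://www.equifax.com/securityincident2017"))  # True
    # Neither the real breach site nor the fake one passes; users have no way to tell them apart:
    print(is_on_trusted_domain("https://equifaxsecurity2017.com"))               # False
    print(is_on_trusted_domain("https://securityequifax2017.com"))               # False

The point of the sketch is simply that once the legitimate notification site lives on an unfamiliar domain, every similar-looking domain becomes equally plausible to the average victim.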

There is no question that Equifax screwed up badly and should be held accountable. Already, federal regulators tasked with overseeing consumer safety and credit—namely, the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB)—are hard at work determining how to proceed.

But some feel that this is not enough. Legislators see the Equifax breach as an opportunity to promote data breach notification bills that have stalled in the past.

Specifically, Rep. Jim Langevin (D-R.I.) is pushing forward a new version of 2015's failed Personal Data Notification and Protection Act (PDNPA). An updated version of the bill is not yet available on Congress's legislation website, but the earlier version would have required businesses that collect PII on at least 10,000 individuals to notify affected parties within 30 days of a security breach. The bill outlines what information and resources companies should make available to victims and designates the FTC as the enforcer. There are a few exemptions, such as when notification would interfere with ongoing legal investigations or when an incident is determined not to pose a reasonable risk of harm to individuals.

In terms of helping consumers pick up the pieces after a corporate hack, this kind of path forward seems reasonable. The sooner people know they are affected by a hack, the sooner they can start changing the right passwords and flagging the right accounts. Today, companies are governed by a patchwork of 48 different state and territorial data breach reporting rules. These range from the fairly broad, like Alaska's guidance to notify affected parties "without unreasonable delay," to California's relatively specific requirements for what companies must disclose and when they must bring victims into the loop.

Langevin and his supporters frame the bill as a boon to corporations caught in this patchwork. He argues that a single, clear federal standard is much preferable for both consumers and companies, who would otherwise have to wade through a thicket of sometimes contradictory or unnecessarily duplicative reporting requirements. In general, federal pre-emption can be a handy tool for preventing excessive or unproductive state licensing and regulatory schemes—but of course, we'd need to compare the final bill text with the status quo to see just how this shakes out. Interestingly, some consumer advocates oppose measures like the PDNPA precisely because they are less stringent than some existing state rules.

Supporters of federal data breach notification laws are on shakier ground when it comes to claims about the effect on overall cybersecurity, however. For example, Langevin told Christopher Mims of the Wall Street Journal that he believes his bill "would have had a direct impact in the case of the Equifax hack."

The reasoning is that if firms knew they had to quickly notify consumers of a breach, they'd proactively invest in better cybersecurity up front. It's very embarrassing to publicly admit that you messed up, and companies usually take a big hit in valuation once they break the bad news.

There's good reason to be skeptical of this claim. First of all, as Langevin himself admits, a wide array of strong data breach notification laws already exists at the state level. It's possible that a federal law would have more of an impact, but his other arguments seem to undercut the claim that the PDNPA would notably improve our cybersecurity position.

More importantly, this kind of legislation could just as easily have the adverse effect of discouraging companies from investing in security. Depending on the final wording of the bill, companies could make the cynical calculation to simply not invest in appropriate monitoring: if they don't know about a breach, they can't be compelled to report it. Think of it as a "Schrödinger's hack" conundrum. It sounds silly, but regulations can sometimes make people behave in silly ways.

The quest to more closely align an organization's financial incentives with its incentives to take cybersecurity seriously is an admirable one. There is still a large gap between executive priorities and those of the information security underlings whose dire warnings too often go ignored. But the problem with top-down regulation is that it can never accurately predict how people will react to its dictates. One unintended consequence begets more regulation, which can create more unintended consequences and still more regulation. The kludge builds up, and the underlying problems remain unaddressed.

This has unfortunately been the case with U.S. cybersecurity policy thus far. As I have repeatedly emphasized, the federal government has an abysmal track record with its own cybersecurity and is hardly the entity that should be entrusted to singlehandedly solve our nation's security problems.

This does not mean that nothing should be done. On the contrary, there is a lot the government could do to improve our cybersecurity position. In terms of better aligning financial incentives with security, the federal government should instead take a serious look at how to encourage the development of a cybersecurity insurance market. And there are plenty of bad things the government could stop doing as well. For instance, the National Security Agency and other intelligence bodies should stop hoarding security vulnerabilities that end up getting leaked and weaponized against U.S. targets. Strong encryption should be supported, not undermined. And counterproductive laws like the Computer Fraud and Abuse Act that penalize security research should be reformed or scrapped.

In general, our politicians need to stop issuing short-term Band-Aids for problems that require long-term solutions. It may not be sexy, but our security position depends on it.