Why the House information-sharing bill could actually deter information sharing
I fear that the House bill is indeed seriously flawed, but not because it invades privacy. Instead, it appears to pile unworkable new privacy regulations on the private sector information-sharing that's already going on.
The key point to remember is that plenty of private sector sharing about cybersecurity is already going on. There aren't a lot of legal limits on such sharing, unless the government is getting access to the information. If it is, providers of internet and telecom services can't join the sharing because an old privacy law bars them from providing subscriber information to the government in the absence of a subpoena.
The House bill solves that problem by allowing sharing to occur, "notwithstanding any other law." But overriding even a dysfunctional and aging privacy law quickens the antibodies of the privacy lobby. So they've been pressing for a kind of "privacy tax" on information sharing: specifically, they want assurances that personal data will be removed from any threat information that companies share.
Everyone recognizes, at least in theory, that this can't be a blanket exclusion; some threat data can't be separated from personal data. If an IP address or email account is being used to distribute malware, those things are threat information. And they are also personal data, since some human being is probably tied to the address or account. If personally identifying information about attackers can't be shared under the bill, then the bill won't do much good.
The bill tries to square the circle by allowing companies to share data about attackers; a company sharing information is only required to screen out personal data that is "not directly related to a cybersecurity threat."
So far, so good. But how does a company know that the information it's sharing really identifies only persons "directly related to a cybersecurity threat"? Unfortunately, the kind of intelligence that is routinely shared today does not come with that kind of guarantee. Critical Stack is a startup that aggregates publicly available threat intelligence. A quick look at its sources reveals that much of the threat information is collected with tools that are automated and therefore imperfect. Your IP address can get on the list if you are innocent but happen to act like an attacker - perhaps by probing certain ports or having your IP address temporarily misused.
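To see why automated feeds can't deliver the certainty the bill demands, consider a toy sketch (all entries and names below are invented for illustration, not drawn from any real feed). An automated filter sees only the observed behavior attached to an indicator, not the person behind it, so it has no way to distinguish a real attacker from an innocent address that merely looked suspicious:

```python
# Hypothetical aggregated threat feed: true attacker indicators are
# mixed with likely false positives, and nothing in the data itself
# says which is which.
feed = [
    {"indicator": "198.51.100.7",  "reason": "malware C2 beacon"},
    {"indicator": "203.0.113.42",  "reason": "port scan observed"},   # could be an innocent, misconfigured host
    {"indicator": "alice@example.com", "reason": "phishing sender"},  # address may be spoofed or hijacked
]

def scrub(entries):
    """Keep only indicators 'directly related to a cybersecurity threat.'

    The statutory standard assumes that distinction is knowable at the
    time of sharing. But the filter can only test the recorded behavior,
    and every entry in the feed has a threat-shaped reason attached, so
    every entry survives - false positives included.
    """
    return [e for e in entries if e["reason"]]

shared = scrub(feed)
print(len(shared))  # prints 3: all entries pass, innocent or not
```

The point of the sketch is that any mechanical "scrub" is really a no-op: the information a company would need to exclude innocent persons simply isn't in the data being shared.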
So, if I share such imperfect information under the House bill, do I get the benefit of liability protection or not? My guess is not. Under the bill, companies must "take reasonable efforts to … remove any information [that the company] reasonably believes at the time of sharing to be personal information of, or information identifying, a specific person not directly related to a cybersecurity threat." In the real world, companies will know that the information they're sharing is not perfect - that it flags accounts and addresses as suspicious when they turn out not to be threats.
Knowing that, how can the company say that it "reasonably believes" it has removed all information identifying a specific person except for information about persons "directly related to a cybersecurity threat"? It can't. (I note that this was not a problem under the earlier version of the bill, which required deletion of data about persons a company "knows" not to be a threat; the question is who bears the burden of uncertainty, and the new bill puts it squarely on the sharing company.)
All this means, I think, that lawyers will end up vetting the methodologies that generate threat information before their companies decide to share it. That's expensive, and the lawyers won't give a lot of clean opinions.
End result: under the House bill, the privacy tax is so high that fewer companies will share threat data, and the ones who do will share less.
It's not clear that this bill will do anything to encourage information sharing.