Seemingly Small Differences in the Accuracy of COVID-19 Antibody Tests Can Make a Big Practical Difference

When infection prevalence is low, a test with relatively low specificity can generate highly misleading results.

We are counting on COVID-19 antibody tests to estimate the prevalence and lethality of the novel coronavirus and to identify people who were infected but now may be immune. Are the tests up to those tasks? That depends on which test you use, how you use it, and the amount of risk you are prepared to accept.

Dozens of different tests are currently available, and their accuracy varies widely. Evaluate Vantage's Elizabeth Cairns looked at 11 tests and found that their reported sensitivity (the percentage of positive samples correctly identified as positive in validation tests) ranged from 82 percent to 100 percent, while their reported specificity (the percentage of negative samples correctly identified as negative) ranged from 91 percent to 100 percent.
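For readers who want to see exactly what those two numbers mean, here is a minimal sketch of how they are calculated from a validation panel. The counts are hypothetical, not drawn from any of the kits in the survey:

```python
# How sensitivity and specificity are calculated from a validation panel.
# The counts below are hypothetical, not taken from any manufacturer's data.

def sensitivity(true_positives, false_negatives):
    """Share of known-positive samples the test correctly flags as positive."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Share of known-negative samples the test correctly flags as negative."""
    return true_negatives / (true_negatives + false_positives)

# A hypothetical panel: 100 known-positive samples (95 detected) and
# 200 known-negative samples (196 cleared).
print(sensitivity(true_positives=95, false_negatives=5))    # 0.95
print(specificity(true_negatives=196, false_positives=4))   # 0.98
```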

A recent study by the COVID-19 Testing Project evaluated 12 antibody tests and found a wider specificity range, from 84 percent to 100 percent. Most of the tests had specificities higher than 95 percent, while three had rates higher than 99 percent.

In the Evaluate Vantage survey, the best bets for sensitivity, based on numbers reported by the manufacturers, were Abbott's SARS-CoV-2 IgG Test and Epitope's EDI Novel Coronavirus COVID-19 IgG ELISA Kit. For specificity, the latter test and another Epitope product, the EDI Novel Coronavirus Covid-19 IgM ELISA Kit, did best, along with Creative Diagnostics' SARS-CoV-2 Antibody ELISA and Ortho-Clinical's Vitros Immunodiagnostic Product Anti-SARS-CoV-2 Total Reagent Pack. Only Epitope's EDI Novel Coronavirus COVID-19 IgG ELISA Kit had perfect scores on both measures, although Abbott's kit came close, with a sensitivity of 100 percent and a specificity of 99.5 percent.

Even when a test has high sensitivity and specificity, its results can be misleading. In the context of studies that seek to measure the prevalence of the virus in a particular place, for example, even a low false-positive rate could generate substantial overestimates when the actual prevalence is very low.

Suppose researchers screen a representative sample of 1,000 people in a city with 2 million residents. Leaving aside the issue of sampling error, let's assume the actual prevalence in both the sample and the general population is 5 percent.

If a test has a sensitivity of 100 percent and a specificity of 99.5 percent (the rates reported by Abbott), the number of false positives (0.5 percent times 950 people) will be about one-tenth the number of true positives (50). The estimated number of local infections (110,000) would then be only 10 percent higher than the actual number of infections (100,000). But if the actual prevalence is 1 percent, a third of the positive results will be wrong, resulting in a bigger gap between the estimate and the actual number: 30,000 vs. 20,000—a 50 percent difference.

Now suppose the antibody test has a specificity of 90 percent (similar to the rate reported by BioMedomics, which supplied the tests used in a recent Miami-Dade County antibody study). If the true prevalence is 5 percent, false positives will outnumber true positives, and the estimated number of infections will be about three times as high as the actual number. Unless the researchers adjust their results to take the error rate into account, that's a big problem. It's an even bigger problem if the true prevalence is 1 percent, in which case false positives will outnumber true positives by about 10 to 1.
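For readers who want to check that arithmetic, here is a quick back-of-the-envelope script covering both scenarios. It simply scales the raw positive rate up to the city's population, with no correction for the test's error rate:

```python
# Back-of-the-envelope version of the scenarios above: a representative sample
# of 1,000 people in a city of 2 million, with the raw positive rate scaled up
# to the whole population and no correction for the test's error rate.

def estimated_infections(prevalence, sensitivity, specificity,
                         sample_size=1_000, population=2_000_000):
    infected = prevalence * sample_size
    uninfected = sample_size - infected
    true_positives = sensitivity * infected
    false_positives = (1 - specificity) * uninfected
    apparent_prevalence = (true_positives + false_positives) / sample_size
    return apparent_prevalence * population

for spec in (0.995, 0.90):
    for prev in (0.05, 0.01):
        estimate = estimated_infections(prev, sensitivity=1.0, specificity=spec)
        actual = prev * 2_000_000
        print(f"specificity {spec:.1%}, prevalence {prev:.0%}: "
              f"estimated {estimate:,.0f} vs. actual {actual:,.0f}")

# specificity 99.5%, prevalence 5%: estimated 109,500 vs. actual 100,000
# specificity 99.5%, prevalence 1%: estimated 29,900 vs. actual 20,000
# specificity 90.0%, prevalence 5%: estimated 290,000 vs. actual 100,000
# specificity 90.0%, prevalence 1%: estimated 218,000 vs. actual 20,000
```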

As long as the test has high specificity and infection prevalence is relatively high (and assuming the samples are representative), antibody studies should generate pretty accurate estimates. But that won't be true when specificity is relatively low or prevalence is very low unless the researchers have a good idea of the test's error rate and adjust their data accordingly.
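One standard way to make that adjustment is the Rogan-Gladen correction, which backs the true prevalence out of the raw positive rate using the test's reported sensitivity and specificity. This is a textbook formula, offered here as an illustration rather than as the method any particular research team used:

```python
# Rogan-Gladen correction: back the true prevalence out of the raw positive
# rate using the test's reported sensitivity and specificity. A standard
# textbook adjustment, shown here only as an illustration.

def corrected_prevalence(apparent_prevalence, sensitivity, specificity):
    adjusted = (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(1.0, max(0.0, adjusted))  # clamp to a valid proportion

# Example: 14.5 percent of samples test positive on a test with 100 percent
# sensitivity and 90 percent specificity; the corrected estimate is 5 percent.
print(round(corrected_prevalence(0.145, sensitivity=1.0, specificity=0.90), 3))  # 0.05
```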

What about using antibody tests to figure out who is immune to COVID-19? It's reasonable to believe, based on the experience with other viruses, that antibodies confer at least some immunity. That is, after all, the premise underlying all the fevered efforts to develop a COVID-19 vaccine. But the extent and longevity of such immunity is not yet clear.

If you had symptoms consistent with COVID-19 at some point, you might want an antibody test to confirm your suspicion, even if you tested negative for the virus itself (since those tests may have a substantial false-negative rate). You might also want an antibody test if you were exposed to someone with COVID-19, or simply because you could have been infected without realizing it, since asymptomatic infection seems to be common.

This week Quest Diagnostics began offering COVID-19 antibody tests through an online portal for $119 each. After ordering the test, you make an appointment for a blood draw at one of the company's 2,200 patient service centers. The results are available online within three to five business days.

Quest notes that "it usually takes around 10 to 18 days to produce enough antibodies to be detected in the blood." Hence the test is not recommended for people who currently are experiencing symptoms, who tested positive for the virus in the last seven days, or who were directly exposed to COVID-19 in the last 14 days.

Quest also notes that the test "can sometimes detect antibodies from other coronaviruses, which can cause a false positive result if you have been previously diagnosed with or exposed to other types of coronaviruses." How often does that happen? Although Quest's test was not included in the Evaluate Vantage survey, the company reports a specificity of "approximately 99% to 100%."

Quest likewise warns that "negative results do not rule out SARS-CoV-2 infection." It reports a sensitivity of "approximately 90% to 100%."

Those numbers indicate that Quest's specificity is very high—comparable to the figures reported by Abbott, CTK Biotech, Nirmidas Biotech, Premier Biotech, and SD Biosensor, although perhaps not quite as good as the rates reported by Creative Diagnostics, Epitope, and Ortho-Clinical Diagnostics. Quest's reported sensitivity covers a pretty wide range but still makes its test look better by that measure than Epitope's IgM test and the products offered by BioMedomics, Ortho-Clinical, and SD Biosensor.

In her Evaluate Vantage article, Cairns emphasizes that reported accuracy rates have not been confirmed by any regulatory agency. While Abbott and Becton Dickinson (which is collaborating with BioMedomics) "are reputable companies" that are "highly unlikely to make claims they cannot justify," she says, "many of the other antibody tests on sale around the world are from little-known groups and laboratories that might not be so scrupulous." She also points out that "the validation tests these companies have performed varied widely in size," ranging from about 100 samples to more than 1,000.

As with the antibody studies, the actual prevalence of the virus affects the usefulness of these tests for individuals. If the share of the population that has not been infected is very large and the specificity of the test is relatively low, false positives can outnumber true positives, meaning that someone who tests positive probably is not immune. Cairns makes that point in terms of a test's positive predictive value: the likelihood that any given positive result is accurate.

"The prevalence of Covid-19 is estimated at around 5% in the US, and at this low a level the risk of false positives becomes a major problem," Cairns writes. Assuming that prevalence, a test with 90 percent specificity would generate about twice as many false positives as true positives, meaning only about a third of the positive results will be correct. "A test with 95% specificity will lead to a 50% chance that a positive result is wrong," Cairns notes. "Only at 99% specificity does the false positive rate become anywhere near acceptable, and even here the chances are that 16% of positive results would be wrong." With a specificity of 99.5 percent (Abbott's reported rate, which is similar to Quest's), the chance that a positive result will be wrong falls to less than 9 percent.

These considerations are obviously relevant for policy makers as they decide who should be allowed to work (or travel internationally), where, and under what restrictions. Seemingly small differences in specificity can make a big difference when it comes to identifying people who are presumably immune to COVID-19.