Privacy Versus Progress in Medicine
The harms of HIPAA, and overcoming stagnation to live longer and healthier lives
Progress in the worlds of nutrition and everyday health has stalled, as has medicine to a more limited degree. We know a few things. You should exercise, avoid smoking, not be fat, and not jump off tall buildings. Besides that, there isn’t much we can tell you with certainty about what to eat and how to live your life.
In science, you begin by picking the low-hanging fruit. Smoking increases your risk of lung cancer by somewhere between 15 and 30 times. With an effect size that big, one doesn’t really need a scientific literature to know what’s going on. A single doctor who sees tens of thousands of patients over several decades would probably come to understand the harms of smoking on his own.
But what about smaller effect sizes? Imagine there’s a food that if regularly consumed increases your risk of death by 10% over a given time range. You’d need a lot of data to figure that out. And of course, even if you find slight differences in survival rates between people who eat the food and those who don’t, you can’t be sure that there aren’t confounding variables involved that impact the results. The WHO recently made the elementary mistake of assuming that the correlation between consuming aspartame and getting cancer implied a causal link. We similarly know that people who eat fruits and vegetables live longer, but only a very poor researcher would take that information and make a causal claim, since people who eat fruits and vegetables and those who don’t differ on a thousand other characteristics. The same may be true for smokers and non-smokers, but when the effect size is large enough, it isn’t very plausible that the result is driven by one or more confounding variables.
It’s the same with the covid vaccine. Over the last few years, the rate of death from the disease for the unvaccinated has been somewhere between 300% and 900% higher than for the vaccinated. Surely, there are confounding variables here, but they usually don’t make such an extreme difference. Not to mention that there were randomized controlled trials too, which in my experience anti-vaxxers seem to be unaware of. Meanwhile, the WHO makes recommendations to eat fruits and vegetables based on estimated effect sizes of 13-33% for various diseases, numbers that are probably too small to make conclusions about in studies that don’t involve randomized controlled trials.
Few things are like smoking or vaccines. So we need all the tools we can get. “Big data” refers to the growing ability of scientists to gather large datasets and apply the computing power necessary to analyze them. Researchers studying any topic may run into the problem of not having enough observations. It’s always better to have more data than less, but having more becomes increasingly important when:

1. The effect sizes you are looking for are small, as most effect sizes are

2. You suspect that more confounding variables are involved
The best evidence for the effect of any particular treatment or intervention is a randomized controlled trial, but those are expensive, especially at a large enough scale to find small effects. The hope is that with enough data — both in terms of large enough samples and datasets that account for enough potential confounders — we can at least learn a few lessons about how to live longer, healthier lives.
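The arithmetic here is worth seeing. As a rough illustration (my own, not from any cited study, with made-up baseline rates), a standard two-proportion sample-size formula shows why detecting a 10% relative increase in mortality takes orders of magnitude more participants than detecting a smoking-sized effect:

```python
import math

def n_per_group(p1, p2):
    """Approximate sample size per group needed to detect a difference
    between two proportions (normal approximation, two-sided test)."""
    z_alpha = 1.96  # z-value for significance level 0.05 (two-sided)
    z_beta = 0.84   # z-value for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical baseline: a 1% chance of death over the study period.
# A food that raises that by 10% (to 1.1%) requires enormous samples:
small_effect = n_per_group(0.010, 0.011)  # over 160,000 per group

# A smoking-sized effect (15x the baseline) requires almost none:
large_effect = n_per_group(0.010, 0.150)  # a few dozen per group
```

With numbers like these, a single doctor’s caseload can reveal a smoking-sized effect, while a 10% effect demands exactly the kind of pooled data that is hard to assemble under the current rules.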
The path forward is therefore clear. We should be doing more to get more data into the hands of more researchers. Unfortunately, we have laws and regulations surrounding privacy that make that extremely difficult. Reform in this area would do a great deal to advance progress in the fields of science, medicine, and health.
The Privacy Regime
In 1996, President Clinton signed the Health Insurance Portability and Accountability Act (HIPAA). In response, the Department of Health and Human Services (HHS) enacted what came to be known as the HIPAA Privacy Rule. Health providers and a handful of other entities were to safeguard “protected health information.” If you’re interested in the details of the Privacy Rule, you can read about it here.
A few things are important to stress. The rule sets it as a default that healthcare providers cannot disclose information without getting consent for each particular use. So if a doctor has data sitting around in a filing cabinet, he can’t just sell it or give it away to researchers without contacting each individual patient and getting permission. The other thing to note is that the rules are extremely complicated. As can be seen at the HHS website, the original Privacy Rule from 2000 is 419 pages of dense legalese. This is in addition to revisions to the rule from 2002 (93 pages), 2013 (137 pages), 2014 (27 pages), and 2016 (15 pages). Simply knowing what the rules are and how they apply in any particular case can be its own full-time job, and this adds to the workloads of doctors and researchers.
There appears to be much dissatisfaction with the privacy regime among medical scientists. A 2007 survey of epidemiologists asked how much the HIPAA privacy rule had made research more difficult, with 67.8% giving a four or a five on a five-point scale. When asked to write comments about the effects of HIPAA on research, 90% of those who responded had a negative view. The participants were also highly skeptical that the Privacy Rule actually safeguarded patient data. Another paper looked at a study of pregnant women that was ongoing while the Privacy Rule went into effect. The researchers involved went from being able to recruit 12.4 women per week to only recruiting 2-6 per week afterwards.
While some research is simply slowed down, when the Privacy Rule came into effect other projects were discontinued altogether. The California Cancer Registry used to have close working relationships with the medical schools and hospitals in the University of California system. But, as the San Francisco Chronicle reported,
For 16 years, California's Cancer Registry has been dutifully logging the names and addresses of all state residents who come down with the dreaded disease, their type of cancer and whether they live or die.
Researchers at universities across the country mine that data, searching for clues to cancer’s causes and possible cures. Using registry records, the scientists call patients, recruiting them for surveys, studies and even clinical trials comparing new drugs or therapies.
Since April 14, 2003, however, a new federal law designed to protect the privacy of medical records has made it harder, if not impossible, for medical researchers in the United States to troll through patient charts, whether they are trying to unravel the riddle of cancer or studying complications in childbirth.
Citing the privacy rule, at least 17 Bay Area hospitals have imposed restrictions on the state Cancer Registry's accustomed rapid access to patient records.
“The door kind of slammed in our face,” said Dr. Dee West, chief scientific officer for the Northern California Cancer Center, which collects data in the Bay Area for the state registry.
In the view of some researchers, this interruption in the flow of information has irreparably harmed major cancer studies — including a $35 million National Cancer Institute project comparing outcomes and quality of life of colon and lung cancer patients…
Yet to the chagrin of many of its world-renowned researchers, UCSF became the first California institution to block rapid access to patient records, citing the privacy rule. The administration’s policy was quickly embraced by the University of California Office of the President, which ever since has been battling the state Cancer Registry — and many of its own scientists. UC officials argue that the privacy rule no longer allows the registry “unfettered access” to patient records at its medical school hospitals.
While there was some reporting on the harms of the Privacy Rule when it was released, this appears to have died down. After all, a journalist might write about a major research study that was shut down, but nobody is going to report on a study that was never begun because it required too much paperwork.
The details of the Privacy Rule are less important than the fact that there are nearly 700 pages of dense legalese dedicated to the topic in the first place, in addition to the other ways in which patient data is regulated: the requirements of HIPAA are often tangled up with the activities of Institutional Review Boards, among other entities. Nobody likes paperwork, and the time of doctors and researchers is extremely valuable. Make the regulations they have to work under complicated enough, and they’ll focus their time on something else, even if the relevant rules might look reasonable when sitting on a page. In the real world, there are misunderstandings and disagreements about what the law requires that must be worked through, payments to be made to lawyers and bureaucrats who need to understand what the rules are, and transaction costs to every form that must be drafted, filled out, and stored.
Why Care About Privacy?
What’s frustrating about this is that one of the most important justifications originally given for the HIPAA Privacy Rule no longer applies. The fear used to be that insurance companies might discriminate against sicker consumers if they had access to all their medical information. Yet the Affordable Care Act now makes it illegal to deny coverage or charge people more for pre-existing conditions.
A similar worry might exist with regards to employers. It’s theoretically possible that firms might comb through health data spreadsheets to find information about workers and decide whether to hire or fire them on that basis. This would be hard to do and can be guarded against simply by redacting the names of patients, which aren’t necessary for research purposes. Nonetheless, if the threat of extremely diligent employers able to use anonymized data to identify specific individuals is worth protecting against, we should simply make such practices illegal, just as New York currently makes it illegal to discriminate based on marital status. We don’t go and make marital status “protected information” and raise costs throughout society out of a theoretical concern about employer discrimination. Rather, we target the undesirable behavior directly. Most small firms probably don’t have the resources to look into people’s medical backgrounds, and most large firms should be expected to comply with the law. But for those that don’t, the answer is a vigorous enforcement of anti-discrimination laws in this area, which would surely have only a fraction of the cost of delaying scientific progress.
Finally, sometimes people want privacy protections simply because they prefer to keep some things to themselves and their families. It’s not obvious, however, that most people care all that much whether others know about most of their medical issues. Americans often talk openly of being a “cancer survivor” or their struggles to maintain a normal blood pressure. Surely some individuals don’t want others to know anything about their health. But others don’t care, and the question is what the default should be.
A good compromise would be to go back to the pre-HIPAA rules for everything except a few sensitive areas. So doctors, insurance companies, and others can share your height, weight, blood pressure, diagnosis of most kinds of disease, etc. But we can have a special carve-out for psychiatric records, issues related to reproductive health, and a few other things. Names would of course be redacted, so the threat would only exist in terms of a theoretical possibility of someone matching an observation to a real person. For those who are extra sensitive about privacy, the law might require that they be given the opportunity to seal their records.
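To make the proposed default concrete, here is a minimal sketch of what a record-sharing filter under such a regime might look like. The field names, sensitive categories, and opt-out mechanism are all my own illustrative assumptions, not anything specified in HIPAA or in the proposal above:

```python
# Sketch of the proposed default: share most fields, redact direct
# identifiers, carve out sensitive categories, honor an opt-out list.
# All field names and categories here are hypothetical.

IDENTIFIERS = {"name", "address", "ssn"}            # always redacted
SENSITIVE = {"psychiatric", "reproductive_health"}  # special carve-out

def shareable(record, sealed_patient_ids):
    """Return the researcher-visible view of a record, or None if the
    patient opted out and sealed their records. The patient_id is
    assumed to be a pseudonymous study ID, not a real-world identifier."""
    if record["patient_id"] in sealed_patient_ids:
        return None  # opted out: nothing is shared
    return {
        field: value
        for field, value in record.items()
        if field not in IDENTIFIERS and field not in SENSITIVE
    }

record = {"patient_id": 7, "name": "Jane Doe",
          "blood_pressure": "120/80", "psychiatric": "none on file"}
print(shareable(record, sealed_patient_ids={3}))
# {'patient_id': 7, 'blood_pressure': '120/80'}
```

The point of the sketch is that the default flips: sharing is permitted unless a field is identifying, a category is carved out, or the patient has affirmatively opted out.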
The fact of the matter is that there isn’t much evidence that most people care about privacy all that much, which is why the default rule tends to be so important. Governments and privacy activists will often attack Google and other internet companies for harvesting and selling the data of users, but the economics literature shows that people aren’t willing to pay for privacy protections. There’s a reason Google harvests your data and lets advertisers use it rather than charging you a few cents per search. The ad-based model didn’t take over the internet by accident.
We might overrule people’s market decisions when we believe they don’t know what’s good for them. But questions surrounding what is private information and what isn’t are subjective and culturally determined. Advocates haven’t made a good argument for why people should care about this thing that they obviously don’t care about. Or why they should care about it in the exact way that privacy advocates care about it. I’m pretty sure that if Google’s business model was to publicly post your searches on a screen in Times Square, people would be more hesitant to use their products. But the way Google “invades your privacy” is by putting you in a large database that delivers you the ads that you are most likely to click on. The EU, however, now requires user consent even for decision-making that is completely automated. This is why people notice that when you go to Europe now, the internet is basically the same except much more annoying because you have to click a lot more boxes to agree to terms no one cares to read, again, because people don’t actually care about privacy. I wonder how much effort even “privacy advocates” expend or how much inconvenience they’ll put up with to live by their (very strange) principles.
Medical information may be different. Google itself has some special restrictions on advertising related to health issues. But do people care enough about medical privacy to the extent that it should slow down scientific and technological progress? And if they don’t, should the government act as if they do? Not for any good reason I can see. An opt-out choice for people who do care about privacy, along with perhaps a default rule requiring protection for the most sensitive information, would strike a much better balance than current HIPAA regulations.
Government Coming for the Devices
HIPAA covers healthcare providers, not medical information itself. So while your doctor can’t tell the world you have cancer without your permission, most other people and institutions can, including private businesses. This oversight of the law might have been a reason to be optimistic. Today, people often provide data to third parties through new channels such as fitness tracking devices. HIPAA has nothing to say about that, and one could imagine this might be good for scientific progress as researchers are able to use big data from non-medical sources.
Unfortunately, the regulatory state cares so much about your privacy that it’s taken notice of medical and fitness devices and wants to make sure they aren’t used to help you live longer. The EU only let Google complete its purchase of Fitbit in 2020 after it promised to silo the device data from information connected to a user’s ad profile. Google has some of the best data scientists in the world and access to your search history. It should also be able to connect that with your tracking devices, go to your doctor and buy your medical records, put all of that together, and start working on how to prevent cancer and delay aging.
Slowing down medical progress isn’t the only cost of our concern with privacy. Governments and advocates assume that targeted advertising is somehow harming the consumer. In reality, it’s sometimes very valuable to become aware of products that you might like or benefit from. Sure, it may be the case that people can be manipulated into buying products or services they don’t truly need, but it’s a strange assumption that it’s more acceptable for consumers to look at ads for things they don’t care about than to see them for things they might actually want.
Privacy concerns also stand in the way of attempts to make healthcare more affordable. In May, The Washington Post reported on Amazon Clinic, which lets you message with a provider and get a prescription for as little as $30. But, shockingly, Amazon tries to make money off the process:
But there’s a hidden cost to Amazon’s Clinic: your privacy. This is how Big Tech companies get away with invading your intimate business — and the laws that are supposed to protect us just aren’t keeping up.
In other words, Amazon is taking something you don’t care about (your privacy, or “privacy”), to save you something you do care about (your money). This sounds like a win-win.
Recently, Popular Mechanics profiled Leroy Hood, a scientist who is the cofounder of the Institute of Systems Biology, which does research on aging. He and his team have been conducting a study in which they track the data of up to 5,000 participants, including things like “genome sequence, gut microbiome, blood plasma proteins, and lifestyle information.”
Hood imagines creating data clouds that include vast troves of personal health information: entire genome sequences, blood, saliva, stool, urine, molecules produced by your metabolism, proteins, and brain-related biological markers. How this information would be collected is still unclear, but Hood says it would involve using a constellation of apps and other technologies that measure brain signals, blood plasma, sleep quality, blood pressure, and more. Ultimately, Hood hopes that people will be able to — and will want to — do this all from the comfort of their home.
You can guess what the criticisms of this approach are.
But his big-data approach hasn’t gone without controversy. Some researchers are concerned about privacy. “What is actually proposed here is to make these ‘holistic datasets’ about you and model them into this avatar — this digital twin or representation of you as a person,” says Henrik Vogt, MD, PhD, an associate professor in community medicine and global health at the University of Oslo. If such information leaked to insurance companies, employers, or other stakeholders who may have a significant interest in obtaining personal health data for profit, he says, it would be a serious compromise of our most intimate information. Others say personalized medicine fails to solve for the root of health issues. “It doesn’t matter how good we are at precision treatment,” says Sandro Galea, MD, MPH, DrPH, an epidemiologist at Boston University. “Unless we actually are creating conditions for people not to get sick to begin with — housing, food, education — we are going to continue having an unhealthy society.”
So basically, the privacy advocates have two arguments:

1. Someone might make a profit while they’re curing disease

2. We shouldn’t even try to achieve any more medical progress until we achieve socialism
These objections are almost too silly to refute, but I’m including them because it’s useful to understand the irrational motivations of many privacy advocates.
Why Progress Has Stalled
I think that the story of how privacy regulations hinder advances in medicine fits into a broader story about the slowdown in progress.
The regulatory state wants to prevent harms, no matter how small. It doesn’t care all that much about solving already existing problems, or making things better. This is partly just the availability bias. We can see a data breach, but we can’t see a research project that never got off the ground or a cure for a disease that was never discovered, even though any reasonable person would have to grant that medical progress is much more important than whether data that most people don’t even care about is safeguarded.
This can be seen in other areas of life. We refuse to build nuclear power plants because of the risk of a meltdown, ignoring the benefits of affordable and clean energy. Despite the successes of Operation Warp Speed, the process should have gone even faster and used human challenge trials. But that would have involved potential harms to individuals, even if they consented to taking the risk, all the while a much greater number of people continued to die from covid. The entire philosophy of the regulatory state appears to be that it’s unfortunate that bad things happen in the world, but it’s even worse to try and solve them if there’s any imaginable cost or risk of doing so.
The other thing that appears to be going on here is, as mentioned above, an animus towards markets more generally. Even if targeted advertising were a bad thing, it would still be worth it to make medical data more freely available due to the potential for new discoveries and savings in healthcare costs. But there’s no reason to think targeted advertising does more harm than good. In fact, if it were connected to your medical devices, it’s easy to imagine situations where it might significantly improve your quality of life. But someone might make money in the process, and therefore according to anti-market types we should be wary.
Last year, Josh Hawley sent a letter to the FTC complaining about Amazon purchasing One Medical. He writes,
Scenarios once written off as scaremongering fictions are now a very real possibility. For instance, if an individual is diagnosed with high blood pressure by a One Medical doctor, will he later be advertised over-the-counter blood pressure medications whenever he shops at Whole Foods Market? Promoting wellness is one thing; dystopian corporate ‘nudging’ is quite another.
So Hawley’s “nightmare scenario,” which hasn’t even happened yet, is that a consumer might have a choice to buy a product that could help him. Among anti-market types, there’s a tendency to use forceful language to hide just how hollow their ideas are. Being offered a choice for a product that can improve your health turns into “dystopian corporate nudging.” Hawley also asks what steps Amazon is taking to make sure it doesn’t “undercut competitor providers.” In other words, in a country that spends more on healthcare than any other nation in the world, a Senator is demonizing a corporation for trying to make it cheaper. Most anti-progress forces continue to be on the left, but Hawley represents the tip of the spear of this kind of thinking finding a home on the right too.
Understanding the impact of the regulatory state isn’t simply a matter of reading the laws on the books. To get a full picture, one must learn and think a bit about the impacts of bureaucratic hurdles, implicit threats, the flexibility of the law when it has its eye on a target, and the impact and the ability of the media, politicians, and activists to generate negative publicity. Changing laws is important. But what’s perhaps even more desperately needed is a change in societal attitudes towards more appreciation for human progress and less catastrophizing over what are at worst potentially very minor harms.
Thank you for highlighting this issue. It is not just in the population-wide studies you talk about that we face problems. As a neurologist, I have participated in Alzheimer’s research. In the US we have millions of patients who have dementia, and in a significant proportion of cases it progresses to Alzheimer’s. Many of these patients are tracked over years, and we have MRI imaging data for each patient acquired over the years. By applying machine learning models or even simple statistical markers to these MRI images, we could gain vast information about Alzheimer’s progression and treatment protocols.
The research group I collaborate with was able to get MRI image data from a large hospital group in India for research, but university ethics rules prohibit the use of such data because we need paperwork proving every patient consented to the release of their data, which the hospital did not have. I have spoken to patients and their families, and most of them are OK with these images being used for research purposes as long as identifying personal information (name, place, exact date of birth, etc.) is purged. But hospitals cannot share this data because the law is very strict on patient privacy. Even with consent, hospitals fear lawsuits claiming the consent was not valid, as many of these patients have decreased mental capacity due to dementia.
>> Scenarios once written off as scaremongering fictions are now a very real possibility. For instance, if an individual is diagnosed with high blood pressure by a One Medical doctor, will he later be advertised over-the-counter blood pressure medications whenever he shops at Whole Foods Market? Promoting wellness is one thing; dystopian corporate ‘nudging’ is quite another.
> So Hawley’s “nightmare scenario,” which hasn’t even happened yet, is that a consumer might have a choice to buy a product that could help him.
Richard, you are a smart guy, I can't believe you don't see the blindingly obvious conflict of interest here. If the same corporation is in charge of diagnosing the disease and selling the cure, it will be incentivized to (a) over-diagnose, and (b) direct patients to the brand of medication it sells (even if a competitor's brand would be better). This is not just hypothetical: Purdue Pharma paid doctors to push their addictive opioids onto patients, and we all know how that story ended. When a single entity controls every industry, that's socialism, not a free market.