I’m just an average man, with an average life.
I work from nine to five; hey hell, I pay the price.
All I want is to be left alone in my average home;
But why do I always feel like I’m in the Twilight Zone, and
I always feel like somebody’s watching me.
And I have no privacy.
Woh, I always feel like somebody’s watching me.
Tell me is it just a dream?
–80’s one-hit wonder Rockwell
I recently received an advertising promotion for what sounded like the handiest app ever. It is called Luxe Valet, and it basically arranges for a valet/parking service to meet you wherever you are in San Francisco; the person then parks your car and brings it back to you whenever and wherever you want in SF, charging a below-market parking fee of $15. Wow, I thought. This is going to save me not just money but also the time I spend each week circling the blocks before meetings, and improve my mental health at the same time—it’s damn near a health app!
When I went to use it, however, I got a little weirded out by the GPS feature. After you tell the app where to meet you, it tracks you in real time so the valet can meet you at your exact moment of arrival. In theory, totally awesome; in a creepy world, totally alarming. Some random app whose data is not subject to any privacy laws is following me around? Lord knows who can tap into that data and follow me around, as if I’m worth following. But some people are worth following, and that is a little worrisome. It made me think about the unintended consequences inherent in the ever-expanding world of healthcare data, and how we need to take a step back and think about what information we put into the world about ourselves.
Most of the time when people worry about healthcare data privacy and related security issues, they are worrying about whether and how their employer or insurer might discriminate against them if news of their condition got out. Since the advent of the ACA, it is no longer possible to be denied health insurance for pre-existing conditions, and there are laws preventing employers from discriminating against people based on their health, so a lot of those fears are becoming less pressing, at least in theory (albeit imperfectly in practice).
Oftentimes people don’t care about their healthcare data privacy at all. Ask the average young person or Facebook poster whether they care if people see their health status, and they will say that they couldn’t care less. In fact, people post their Fitbit and other wearable fitness data openly to prove how many steps they have walked and to remind their so-called friends how lazy they are by comparison.

But in a world full of data and access and increasing interoperability, what I don’t think people worry about enough is the weird and wacky ways their health data may be used to essentially stalk them—ways they could not have foreseen. (Note: I have written before about the rise of ransomware, a related topic.)
Lately I have seen a lot of articles that raise this issue in ways other than the obvious. Here are two examples:
- Stories about cybercriminals trolling for healthcare data so they can steal knowledge from your personal accounts to engage in illegal stock trading. In other words, they steal your private information and conversations about your company to learn what’s happening on the clinical trial or M&A front in order to make gains in the stock market. Healthcare company CEOs are a prime target for this, apparently. It’s not really health data per se that the bad guys are collecting, but business data about healthcare activities. Still, it is not meant to be shared, and particularly not meant to be shared to manipulate the stock market.
- Stories about how lawyers can use your Fitbit or other wearable data against you in court—to disprove your claims for workers’ compensation, or to disprove your alibi that you were at work when the cops say you were at a crime scene. Lawyers are starting to think of your wearables as a personal “black box.” If you are not humming that Rockwell tune to yourself now, you are not suitably concerned.
According to a report by BitSight Technologies called Will Healthcare Be the Next Retail?, healthcare experienced the largest growth in security incidents and the slowest response during the study period from April 1, 2013, through March 31, 2014. Retail and utility businesses tended to respond to data breaches much faster than healthcare entities, which are generally pretty far behind in the security realm, both technologically and experientially. Medical records sell for about $20 on the black market, according to one article, while stolen credit card data brings about $1. I’m no math whiz, but that stolen healthcare data thing sounds like one hell of a business opportunity for Boris and Natasha or any other nefarious entrepreneurs (Uber guys notwithstanding).
There was a recent article in Slate called Algorithms Can Ruin Lives that talked about how algorithms can run amok and cause the system to make assumptions about you that could literally mess up your life in a big way. The article describes how algorithms, left to run and act by themselves, could label you a terrorist or ineligible for benefits, or subject you to inadvertent but subtly real racial bias due to the way they are programmed. This is not quite the same as data being stolen or used against you intentionally, but it could become that.
We in healthcare are loving those algorithms, which allow us to use Big Data (my least favorite buzzword bingo term) to make good clinical decisions about patients. But in others’ hands, such data could be used to rapidly spread the word that you are a difficult patient, a la Seinfeld’s Elaine when she wanted her medical records, or to send you a message that you really shouldn’t be sitting at the Cheesecake Factory right now because you look like you have had enough damn cheesecake, if you get my drift—we CAN SEE YOU! Damned GPS. There have long been technologies that let you catch your spouse having an affair, but now there are technologies that will let your spouse catch you cheating on your diet, and that may be even worse. In fact, your GPS-outfitted Apple HealthKit may turn out to do both if you’re not careful.
As they say, just because you’re paranoid doesn’t mean you’re not being watched.
Who’s watching me?
I don’t know anymore . . . are the neighbors watching
Who’s watching?
Well, it’s the mailman watching me: and I don’t feel safe anymore.
Tell me who’s watching.
Oh, what a mess. I wonder who’s watching me now,
(WHO?) the I.R.S.?
–Rockwell
https://www.youtube.com/watch?v=7bQwin3Vv0k
Data is not information. Information is not necessarily evidence. Neither is knowledge, or “the” truth.
Since we’re riffing on cultural touchstones re: privacy (wow, Rockwell), try this: go watch The Conversation (1974), or for something sexier, Blow-Up (1966). Oh, and forget more recent takes on these issues, even Enemy of the State (with the great Gene Hackman again)—they tend to be much too gullible/cynical about the potency of surveillance tech.
Then get back to me about how definitive you believe electronic monitoring specifically, or technologically captured/enhanced reality generally, is or can be when it gets down to (legal) cases.
Anyone using information captured in the ways you’ve mentioned should already know that they do not in fact “know” what their instruments suggest they know, and they should know that blithely using it AS IF they “know” opens THEM to considerable risk.
It’s in all our interests to insist that these basic facts be reinforced through the laws and regulations our government minions implement.
Civisisus: totally agree with you, but alas, we are not in charge. As some of those articles point out, it becomes the person’s responsibility to prove they are not doing what the data says they are, and that is not good.
The dilemma we face in healthcare is that good people with high skills and good intentions really can help make your life better the more they know about your life and habits—most of which go well beyond how often you see your doctor and take your prescriptions—while at the same time, bad people with high skills and bad intentions can do more and more harm when armed with more and more knowledge about your life and habits.
How willing we are (or should be) to share extensive information about ourselves should be tempered by two things. First, do we know who we’re sharing this data with, and why? And second, are we sure these folks can be trusted to use the data only for our good, without intentionally or unintentionally sharing it with anyone else? The potential for good is great. But at this point, I don’t think we have strong assurances on either count. So far, healthcare has an especially poor record for securing patient data, and the bad guys are getting better and better at what they do in the digital jungle.
I truly believe in the power of information to do good, and I sincerely hope we can make progress delivering more information to the do-gooders. But I believe we should pause a moment to consider the current state of risk in our industry before we naively give up our privacy rights.