Category Archives: rapid learning health systems

Quantified Diet Findings


People have more goals than they have willpower for. That’s just the way our ambition works. They give up, get distracted, or prioritize some other goal.

https://medium.com/inside-lift/be4809e34563

TL;DR: This is the story of how we used the Lift Goal Coaching app to build an ongoing 15,000+ person experiment to compare popular diets. The good news is that dieting works, especially if it means giving up sugar and fast food. See our charts below or take our weight loss calculator. Or better, join one of our diets and contribute to science.

About a year ago, we ran a one-off research project into the Slow-Carb Diet™ that turned up surprisingly strong results. Over a four-week period, people who stuck to the diet showed an 84% success rate and an average weight loss of 8.6 lbs.

But are those results legit? If I picked a person at random out of a crowd, could they expect to see the same results? Almost immediately after publishing the results we started getting feedback about experimental bias.

This first study was biased, which means it doesn’t carry any scientific confidence. That’s a fixable problem, so we set off to redo the study in a bigger and more rigorous way.

That led to the Quantified Diet, our quest to verify and compare every popular diet. We now have initial results for ten diets. This is the story of our experiment and how we’re interpreting the diet data we’ve collected.

Understanding Bias

To understand bias, here’s a quick alternative explanation for our initial Slow-Carb data: a group of highly motivated, very overweight people joined the diet and lost what, for them, is a very small amount of weight. In this alternative explanation, the results really are not very interesting and they definitely aren’t generalizable.

However, we had some advice from academics at Berkeley aimed specifically at overcoming the biases of the people who were self-selecting into our study. The keys: a control group following non-diet advice and randomized assignment into a comparative group of diets.

Our Experimental Design

The gist of our experimental design hinged on the following elements:

  • We were going to start by comparing ten approaches to diet: Slow-Carb, Paleo, Whole Foods, Vegetarian, Gluten-free, No sweets, DASH, Calorie Counting, Sleep More, Mindful Eating.
  • Lift wrote instructions for each diet, with the help of diet experts, and provided 28-day goals (with community support) for each diet inside our app.
  • We included two control groups, one with the task of reading more and the other with the task of flossing more.
  • Participants were going to choose which of the approaches they were willing to try, and then we would randomly assign from within that group (a minimal sketch of this assignment step follows this list). Leaving some room for choice allowed people to maintain control over their health, while still giving us room to apply a statistically valid analysis.
  • Participants who said they were willing to try a control group and at least two others were in the experiment. This is who we were studying.
  • A lot of people didn’t meet these criteria, or opted out at some point along the way. We have observational data on this group, but it can’t be considered scientifically valid for the bias reasons covered above.
  • A full writeup of the methodology is coming.
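To make the assignment step concrete, here’s a minimal sketch (in Python, with made-up diet names) of randomly assigning a participant within the set of approaches they said they were willing to try. It illustrates the design, not our production code.

```python
# Minimal sketch of randomized assignment within a participant's chosen set.
# The diet names and this participant's choices are illustrative only.
import random

willing = ["Slow-Carb", "No sweets", "Flossing (control)"]  # one participant's willing list

random.seed(42)                    # seeded only so the example is reproducible
assigned = random.choice(willing)  # uniform random draw from the willing set
print(assigned)
```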

Top Level Results

At the beginning of the study, everyone thought we were going to choose a winning diet. Which of the ten diets was the best?

Nine of the diets performed well as measured by weight loss. Here’s the ranking, with weight loss measured as a percentage of body weight. Slow-Carb, Paleo and DASH look like they led the pack (but keep reading because this chart absolutely does not tell the whole story).

If you don’t like doing math, the above chart translates to between 3 and 5 lbs per month for most people. If you really don’t like doing math, we built a calculator for you that will estimate a weight loss specific to you.
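For the curious, the arithmetic behind a calculator like that is simple. Here’s a hedged sketch that turns an assumed percent-of-body-weight loss (roughly 2% per month, in line with the averages quoted in this post) into pounds; the real calculator may weigh in more factors.

```python
# Sketch of the percentage-to-pounds conversion. The ~2% monthly figure is an
# assumption based on the averages quoted in this post, not the Lift model itself.
def estimated_monthly_loss_lbs(start_weight_lbs, pct_loss_per_month=2.0):
    """Translate a percent-of-body-weight loss into pounds per month."""
    return start_weight_lbs * pct_loss_per_month / 100.0

for weight in (150, 200, 250):
    print(f"{weight} lbs -> ~{estimated_monthly_loss_lbs(weight):.1f} lbs/month")
```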

Sleep, which never really had a strong weight-loss hypothesis, came in last. We ended up calling it a placebo control in order to bolster our statistical relevance.

Before moving on, let’s just call out that people in the diets were losing 4-ish pounds over a one-month period on average. That’s great given that our data set contains people who didn’t even follow their diet completely.

The Value of the Control

The control groups help us understand whether the experimental advice (to diet) is better than doing nothing. Maybe everyone loses weight no matter what they do?

This sounds unlikely, but we were all surprised to see that the control groups lost 1.1% of their body weight (just by sleeping, reading and flossing!).

Is that because they were monitoring their weight? Is it because the bulk of the study occurred in January, right after people finished holiday gorging? We don’t actually know why the control groups lost weight, but we do know that dieting was better than being in the control.

Here’s the weight-loss chart revised to show the difference between each diet and the control (this chart shows the experimental effect).
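The calculation behind that revised chart is just a subtraction. Here’s a small sketch; the per-diet numbers are placeholders for illustration (only the 1.1% control figure comes from this post).

```python
# Experimental effect = diet group's average loss minus the control's average loss.
control_loss_pct = 1.1          # the control figure quoted above

diet_loss_pct = {               # placeholder per-diet averages, for illustration only
    "Slow-Carb": 2.4,
    "Paleo": 2.3,
    "DASH": 2.2,
    "No sweets": 2.1,
}

effect_vs_control = {diet: round(loss - control_loss_pct, 2)
                     for diet, loss in diet_loss_pct.items()}
print(effect_vs_control)
```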

The Value of Randomized Assignment

Randomized assignment helps us feel confident that the weight loss is not specific to the fans of a particular diet.

Because of the randomization, we can ask the following question. For each diet, what happens if we assigned the person to a different diet?

This is an indicator of whether a diet is actually better or whether the people who are attracted to a diet have some other characteristic that is affecting our observational results.

The obvious example of bias would be a skew toward men or women. Bigger people have more weight to lose (typically men), and we observed that men tended to lose a higher percentage of their body weight (2.8% vs. 1.8%).

Comparing the diets this way adds another promising diet approach: no sweets. But let’s be real: the differences between these diets are very small, less than half a pound over four weeks, as compared to doing any diet at all, five pounds over four weeks. Our advice is to pick the diet that’s most appealing (rather than trying to optimize).

Soda Is Bad! And Other Correlations

What else leads to weight loss?

  • It helps if your existing diet is terrible (your new diet is even better in comparison). People who reported heavy pre-diet soda consumption lost an extra 0.6% body weight.
  • Giving up fast food was also good for an extra 0.6% (but probably not worth adding fast food just to give it up).
  • Men lost more weight (2.6% vs 1.8%).
  • Adherence mattered (duh). Here’s a chart with weight loss by adherence.

How much of the time did people follow the diet advice?

Choosing a Diet

OK. Now I think I’ve explained enough that you could choose one of these diets. All of them are available via the Lift app on the web, iPhone and Android.

Given that all the diets work, the real question you should be asking yourself is which one you most want to follow.

I can’t stress that enough. It’s not just about which had the most weight loss. Choose a diet you can stick to.

Let’s Talk Success Rate

Adherence matters. Even half-way adherence to a diet led to more than 1% weight loss (better than the control groups).

This brings up an interesting point. So far, our data is based on the people who made it all the way to the end of our study. This is survivorship bias. We don’t know what happened to the other people (hopefully the diets weren’t fatal).

In order to judge the success rate of dieting you’ll have to use some judgement. But we can give you the most optimistic and most pessimistic estimates. The truth is somewhere in between.

Of people who gave us all of their data over four weeks, 75% lost weight. Let’s call this the success rate ceiling. It includes many reasons for not losing weight, including low adherence. But at least they paid attention to the goal for the entire time. The weight loss averages are based on this group.

Of people who joined the study, only 16% completed the entire study (and 75% of those lost weight). So, merely joining a diet, with no other data about your commitment, has a success rate of 12%. Let’s call this the success rate floor.

Read that floor as: 12% of people who merely said they were interested in doing a diet had definitely lost weight four weeks later. There’s no measure of commitment in that result. If we filter by even a simple commitment measure, such as filling out the first survey on day one, then the success rate jumps from 12% to 28%.
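Here’s the floor/ceiling arithmetic worked through explicitly, using the figures above:

```python
# Success rate floor and ceiling from the numbers quoted in this post.
completion_rate = 0.16        # fraction of joiners who finished all four weeks
success_if_completed = 0.75   # fraction of finishers who lost weight

floor = completion_rate * success_if_completed  # success from merely joining
ceiling = success_if_completed                  # success if you stay engaged to the end

print(f"floor: {floor:.0%}, ceiling: {ceiling:.0%}")  # floor: 12%, ceiling: 75%
```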

If you are making public policy, then maybe that 12% number looks important. People have more goals than they have willpower for. That’s just the way our ambition works. They give up, get distracted, or prioritize some other goal.

If you are an individual, I’d put more weight in the ceiling. You want to know that whatever path you choose has a chance of succeeding. 75% is a number that should give you confidence.

Losing Weight?

We’ve focused on losing weight for two reasons. One, it’s a very common goal. But, two, it’s also the strongest signal we got out of our data.

We also measured happiness and energy but the signal was weak. We didn’t measure any other markers of health. That’s important to note.

We are behavior designers, so we’re looking at the effectiveness of behavior change advice. You should still consult a nutritionist when it comes to the full scope of health impacts from a diet change. For example, you could work with our partner WellnessFX for a blood workup (and talk to their doctors).

Open Sourcing the Research

We’ve open sourced the research. You can grab the raw data and some example code to evaluate it from our GitHub repository.

All of the participants were expecting to have their data anonymized for the purposes of research. Take a look and please share your work back (it’s required by the CC and MIT licenses).

There was some lossiness in the anonymization process. We’ve stripped out personal information (of course), but also made sure that rows in the data set can’t be tied back to individual Lift accounts. For that reason some of the data is summarized. For example, weight is expressed as percentage weight loss and adherence is expressed on a 1-5 scale.
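If you want a starting point for exploring the released data, here’s a hedged sketch using pandas. The file and column names (“quantified_diet.csv”, “diet”, “pct_weight_loss”, “adherence”) are assumptions for illustration; check the repository’s README for the real schema.

```python
# Assumed file and column names -- consult the GitHub repo for the actual schema.
import pandas as pd

df = pd.read_csv("quantified_diet.csv")  # hypothetical export of the released data

# Average percentage weight loss and adherence (1-5 scale) per diet
summary = df.groupby("diet")[["pct_weight_loss", "adherence"]].mean()
print(summary.sort_values("pct_weight_loss", ascending=False))
```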

If you want to go digging around in the data, I would suggest starting by looking at our surveys where we got extra data about the participants: day 1, week 1, week 2, week 3 and week 4.

Citizen Science or No Science

I’m expecting that our research will spark some debate about the validity of scientific research from non-traditional sources. I expect this because I’ve already been on the receiving end of this debate.

Here’s how we’re seeing it right now. I acknowledge that we already have a robust scientific process living in academia. And I acknowledge that the way we ran this research broke the norms of that process.

The closest parallel I can think of is the rise of citizen journalism (mostly through blogs) as a complement to traditional journalism. At the beginning there was a lot of criticism of the approach as dangerous and irresponsible. Now we know that the approach brought a lot of benefits, namely: breadth, analysis and speed.

That’s the same with citizen science. We studied these diets because we didn’t see anyone else doing it. And we’re continuing to do other research (for example: meditation) because we’re imagining a world where everything in the self-improvement space, from fitness to diet to self-help, is verifiably trustworthy.

Continuing Research

One of our core tenets with this research is that we can revise it. We didn’t have to write a grant proposal and it didn’t cost us anything to run the study. In fact, we’re already revising it.

To start with, we’re adding in one more diet: “Don’t Drink Sugar.”

We wrote this diet based on the study results and a belief in minimal effective interventions. So, if you’re at all interested in losing weight while contributing to science, please sign up for the Quantified Diet.

Thanks

Special thanks to many academics who commented on our process along the way, along with our sponsors who helped drive people into the study: The Four Hour Body, No Meat Athlete, Foodist, ZenHabits, NerdFitness, PaleoHacks, Deborah Enos, Dr. Kevin Campbell, Tania Mercer, Sarah Stanley, Withings, Greatist, Hint, Zico, WellnessFX, O’Reilly Media, Dreena’s Plant Powered Kitchen, Eat Tribal, Polar, RunHundred, Feast, Basis, Zesty, Kindu and Biome.

An idea of earth-shattering significance

OK.

I’ve been looking for alignment between a significant industry sector and human health. It’s a surprisingly difficult alignment to find… go figure?

But I had lunch with Joran Laird from NAB Health today, and something amazing dawned on me, on the back of the AIA Vitality launch.

Life (not health) insurance is the vehicle. The longer you pay premiums, the more money they make.

AMAZING… AN ALIGNMENT!!!

This puts the pressure on prevention advocates to put their money where their mouth is.

If they can extend healthy life by a second, how many billions of dollars does that make for life insurers?

imagine, a health intervention that doesn’t actually involve the blundering health system!!?? PERFECT!!!

And Australia’s the perfect test bed given the opt out status of life insurance and superannuation.

Joran wants to introduce me to the MLC guys.

What could possibly go wrong??????

Illumina’s $1000 genome

This article nicely frames the immaturity of the technology in the context of population health and prevention (vs. specific disease management), and even references the behaviour of evil corporations in its final paragraphs.

 

Cost breakdown for Illumina’s $1,000 genome:

Reagent* cost per genome — $797

Hardware price — $137**

DNA extraction, sample prep and labor — $55-$65

Total Price = $989-$999

* Starting materials for chemical reactions

** Assumes a four-year depreciation with 116 runs per year, per system. Each run can sequence 16 genomes.

http://recode.net/2014/03/25/illuminas-ceo-on-the-promise-of-the-1000-genome-and-the-work-that-remains/

Illumina’s CEO on the Promise of the $1,000 Genome — And the Work That Remains

March 25, 2014, 2:18 PM PDT

By James Temple

Illumina seized the science world’s attention at the outset of the year by announcing it had achieved the $1,000 genome, crossing a long-sought threshold expected to accelerate advances in research and personalized medicine.

The San Diego company unveiled the HiSeq X Ten Sequencing System at the J.P. Morgan Healthcare Conference in January. It said “state-of-the-art optics and faster chemistry” enabled a 10-fold increase in daily throughput over its earlier machines and made possible the analysis of entire human genomes for just under $1,000.

Plummeting prices should broaden the applications and appeal of such tests, in turn enabling large-scale studies that may someday lead to scientific breakthroughs.

The new sequencers are making their way into the marketplace, with samples now running on a handful of systems that have reached early customers, Chief Executive Jay Flatley said in an interview with Re/code last week. Illumina plans to begin “shipping in volume” during the second quarter, he said.

The Human Genome Project, the international effort to map out the entire sequence of human DNA completed in 2003, cost $2.7 billion. Depending on whose metaphor you pick, the $1,000 price point for lab sequencing is akin to breaking the sound barrier or the four-minute mile — a psychological threshold where expectations and, in this case, economics change.

Specifically, a full genomic workup of a person’s three billion DNA base pairs starts to look relatively affordable even for healthy patients. It offers orders of magnitude more information than the so-called SNPs test provided by companies like 23andMe for $99 or so, which just looks at the approximately 10 million “single-nucleotide polymorphisms” that are different in an individual.

With more data, scientists expect to gain greater insights into the relationship between genetic makeup and observable characteristics — including what genes are implicated in which diseases. Among other things, it should improve our understanding of the influences of DNA that doesn’t directly code proteins (once but no longer thought of as junk DNA) and create new research pathways for treatments and cures.

“The $1,000 genome has been the Holy Grail for scientific research for now over a decade,” Flatley said. “It’s enabled a whole new round of very large-scale discovery to get kicked off.”

Cost breakdown for Illumina’s $1,000 genome:

Reagent* cost per genome — $797

Hardware price — $137**

DNA extraction, sample prep and labor — $55-$65

Total Price = $989-$999

* Starting materials for chemical reactions

** Assumes a four-year depreciation with 116 runs per year, per system. Each run can sequence 16 genomes.

Source: Illumina
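As a sanity check on the depreciation math, here is the hardware line item recomputed, assuming the roughly $1 million per machine implied by the ten-system, $10 million minimum purchase discussed in the next paragraph:

```python
# Hardware cost per genome under the stated depreciation assumptions.
machine_price = 10_000_000 / 10   # ~$1M per system (assumption, from the $10M/10-system figure)
years, runs_per_year, genomes_per_run = 4, 116, 16

hardware_per_genome = machine_price / (years * runs_per_year * genomes_per_run)
print(f"${hardware_per_genome:,.0f} per genome")  # ~$135, close to the quoted $137
```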

Some have questioned the $1,000 claim, with Nature noting research centers have to buy 10 systems for a minimum of $10 million — and that the math requires including machine depreciation and excluding the cost of lab overhead.

But Flatley defended the figure, saying it’s impossible to add in overhead since it will vary at every research facility.

“Our math was totally transparent and it is exactly the math used by the (National Human Genome Research Institute),” he said. “It’s a fully apples-to-apples comparison to how people have talked historically about the $1,000 genome.”

He also questioned the conclusions of a recent study published in the Journal of the American Medical Association, where researchers at Stanford University Medical Center compared results of adults who underwent next-generation whole genome sequencing by Illumina and Complete Genomics, the Mountain View, Calif., company acquired last year by BGI.

They found insertions or deletions of DNA base pairs only concurred between 53 percent and 59 percent of the time. In addition, depending on the test, 10 percent to 19 percent of inherited disease genes were not sequenced to accepted standards.

“The use of [whole genome sequencing] was associated with incomplete coverage of inherited disease genes, low reproducibility of detection of genetic variation with the highest potential clinical effects, and uncertainty about clinically reportable findings,” the researchers wrote.

Or as co-author Euan Ashley put it to me: “The test needs some tough love to get it to the point where it’s clinical grade.”

Flatley responded that the sample size was small and that the sequencing platforms were several years old. But he did acknowledge they are still grappling with technology limitations.

“What’s hard is to determine whether there’s a base inserted or deleted,” he said. “That’s a bioinformatics problem, not a sequencing problem. That’s a software issue that we and others and the whole world is trying to work on.”

But, he stressed, that shortcoming doesn’t undermine the value of what the tests do read accurately.

“There are many, many, many things where it’s clinically useful today,” he said.

Flatley pointed to several areas where we’re already seeing real-world applications of improving sequencing technology, including cancer treatments targeted to the specific DNA of the tumor rather than the place where it shows up in the body. There are also blood tests under development that can sequence cancer cells, potentially avoiding the need for biopsies, including one from Guardant Health.

Another promising area is noninvasive prenatal testing, which allows expecting parents to screen for genetic defects such as Down syndrome through a blood draw rather than an amniocentesis procedure.

The technology can delineate the DNA from the fetus circulating within the mother’s bloodstream. It’s less invasive and dangerous than amniocentesis, which involves inserting a needle into the amniotic sac and carries a slight risk of miscarriage. Because of that risk it’s generally reserved for high-risk pregnancies, including for women 35 and older.

Illumina, which offers the blood screening for out-of-pocket costs of around $1,500, recently funded a study published in the New England Journal of Medicine that found the so-called cell-free fetal DNA tests produced more accurate results than traditional tests for Down syndrome and Trisomy 18, a more life-threatening condition known as Edwards syndrome.

“It gives some earlier indicators to women in the average risk population if their babies have those problems,” Flatley said. “I think that it will broaden the overall market, and there are other tests that can be added over time.”

But there are ethical issues that arise as prenatal genetic tests become more popular and revealing, including whether parents will one day terminate pregnancies based on intelligence, height, eye color, hair color or minor diseases.

For that reason, Illumina refuses to disclose those traits that are decipherable in the genome today.

But Flatley said they couldn’t stop purchasers of its machines from doing so, nor competitors like BGI of China (for more on that issue see Michael Specter’s fascinating profile of the company in the New Yorker ). Flatley said there needs to be a public debate on these issues, and he expects that new laws will be put into place establishing commonsense boundaries in the months or years ahead.

“This isn’t something we think we can arbitrate,” he said. “But we won’t be involved directly in delivering [results] that would cross those ethical boundaries.”

Flu Trends fails…

  • “automated arrogance”
  • big data hubris
  • At its best, science is an open, cooperative and cumulative effort. If companies like Google keep their big data to themselves, they’ll miss out on the chance to improve their models, and make big data worthy of the hype. “To harness the research community, they need to be more transparent,” says Lazer. “The models for collaboration around big data haven’t been built.” It’s scary enough to think that private companies are gathering endless amounts of data on us. It’d be even worse if the conclusions they reach from that data aren’t even right.

But then this:
http://www.theatlantic.com/technology/archive/2014/03/in-defense-of-google-flu-trends/359688/

 

http://time.com/23782/google-flu-trends-big-data-problems/

Google’s Flu Project Shows the Failings of Big Data


A new study shows that using big data to predict the future isn’t as easy as it looks—and that raises questions about how Internet companies gather and use information

Big data: as buzzwords go, it’s inescapable. Gigantic corporations like SAS and IBM tout their big data analytics, while experts promise that big data—our exponentially growing ability to collect and analyze information about anything at all—will transform everything from business to sports to cooking. Big data was—no surprise—one of the major themes coming out of this month’s SXSW Interactive conference. It’s inescapable.

One of the most conspicuous examples of big data in action is Google’s data-aggregating tool Google Flu Trends (GFT). The program is designed to provide real-time monitoring of flu cases around the world based on Google searches that match terms for flu-related activity. Here’s how Google explains it:

We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for “flu” is actually sick, but a pattern emerges when all the flu-related search queries are added together. We compared our query counts with traditional flu surveillance systems and found that many search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in different countries and regions around the world.
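To make the mechanics concrete, here is a minimal sketch of the general idea, not Google’s actual model: fit official surveillance rates against flu-related query volumes, then use new query counts to estimate current prevalence. All numbers are made up.

```python
# Toy illustration of fitting flu surveillance rates to search-query volumes.
import numpy as np

query_volume = np.array([1.2, 1.5, 2.1, 3.4, 4.8, 4.1, 2.9])  # summed flu-query volume (arbitrary units)
cdc_ili_rate = np.array([1.0, 1.3, 1.9, 3.0, 4.3, 3.8, 2.6])  # % of doctor visits for influenza-like illness

slope, intercept = np.polyfit(query_volume, cdc_ili_rate, 1)   # simple linear fit
print(f"estimated ILI rate at query volume 3.0: {slope * 3.0 + intercept:.2f}%")
```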

Seems like a perfect use of the 500 million-plus Google searches made each day. There’s a reason GFT became the symbol of big data in action, in books like Kenneth Cukier and Viktor Mayer-Schonberger’s Big Data: A Revolution That Will Transform How We Live, Work and Think. But there’s just one problem: as a new article in Science shows, when you compare its results to the real world, GFT doesn’t really work.

GFT overestimated the prevalence of flu in the 2012-2013 and 2011-2012 seasons by more than 50%. From August 2011 to September 2013, GFT over-predicted the prevalence of the flu in 100 out of 108 weeks. During the peak flu season last winter, GFT would have had us believe that 11% of the U.S. had influenza, nearly double the CDC numbers of 6%. If you wanted to project current flu prevalence, you would have done much better basing your models off of 3-week-old data on cases from the CDC than you would have using GFT’s sophisticated big data methods. “It’s a Dewey beats Truman moment for big data,” says David Lazer, a professor of computer science and politics at Northeastern University and one of the authors of the Science article.

Just as the editors of the Chicago Tribune believed they could predict the winner of the close 1948 Presidential election—they were wrong—Google believed that its big data methods alone were capable of producing a more accurate picture of real-time flu trends than old methods of prediction from past data. That’s a form of “automated arrogance,” or big data hubris, and it can be seen in a lot of the hype around big data today. Just because companies like Google can amass an astounding amount of information about the world doesn’t mean they’re always capable of processing that information to produce an accurate picture of what’s going on—especially if it turns out they’re gathering the wrong information. Not only did the search terms picked by GFT often not reflect incidences of actual illness—thus repeatedly overestimating just how sick the American public was—it also completely missed unexpected events like the nonseasonal 2009 H1N1-A flu pandemic. “A number of associations in the model were really problematic,” says Lazer. “It was doomed to fail.”

Nor did it help that GFT was dependent on Google’s top-secret and always changing search algorithm. Google modifies its search algorithm to provide more accurate results, but also to increase advertising revenue. Recommended searches, based on what other users have searched, can throw off the results for flu trends. While GFT assumes that the relative search volume for different flu terms is based in reality—the more of us are sick, the more of us will search for info about flu as we sniffle above our keyboards—in fact Google itself alters search behavior through that ever-shifting algorithm. If the data isn’t reflecting the world, how can it predict what will happen?

GFT and other big data methods can be useful, but only if they’re paired with what the Science researchers call “small data”—traditional forms of information collection. Put the two together, and you can get an excellent model of the world as it actually is. Of course, if big data is really just one tool of many, not an all-purpose path to omniscience, that would puncture the hype just a bit. You won’t get a SXSW panel with that kind of modesty.

A bigger concern, though, is that much of the data being gathered in “big data”—and the formulas used to analyze it—is controlled by private companies that can be positively opaque. Google has never made the search terms used in GFT public, and there’s no way for researchers to replicate how GFT works. There’s Google Correlate, which allows anyone to find search patterns that purport to map real-life trends, but as the Science researchers wryly note: “Clicking the link titled ‘match the pattern of actual flu activity (this is how we built Google Flu Trends!)’ will not, ironically, produce a replication of the GFT search terms.” Even in the academic papers on GFT written by Google researchers, there’s no clear contact information, other than a generic Google email address. (Academic papers almost always contain direct contact information for lead authors.)

At its best, science is an open, cooperative and cumulative effort. If companies like Google keep their big data to themselves, they’ll miss out on the chance to improve their models, and make big data worthy of the hype. “To harness the research community, they need to be more transparent,” says Lazer. “The models for collaboration around big data haven’t been built.” It’s scary enough to think that private companies are gathering endless amounts of data on us. It’d be even worse if the conclusions they reach from that data aren’t even right.

Ornish on Digital Health

The limitations of high-tech medicine are becoming clearer—e.g., angioplasty, stents, and bypass surgery don’t prolong life or prevent heart attacks in stable patients; only one out of 49 men treated for prostate cancer benefits from the treatment, and the other 48 often become impotent, incontinent or both; and drug treatments of type 2 diabetes don’t work nearly as well as lifestyle changes in preventing the horrible complications.

http://www.forbes.com/sites/johnnosta/2014/03/17/the-stat-ten-dean-ornish-on-digital-health-wisdom-and-the-value-of-meaningful-connections/

3/17/2014 @ 11:09 AM

The STAT Ten: Dean Ornish On Digital Health, Wisdom And The Value Of Meaningful Connections

STAT Ten is intended to give a voice to those in digital health. From those resonant voices in the headlines to quiet innovators and thinkers behind the scenes, it’s my intent to feature those individuals who are driving innovation–in both thought and deed. And while it’s not an exhaustive interview, STAT Ten asks 10 quick questions to give this individual a chance to be heard.  

Dean Ornish, MD is a fascinating and important leader in healthcare. His vision has dared to question convention and look at health and wellness from a comprehensive and unique perspective. He is a Clinical Professor of Medicine at UCSF and the founder and president of the nonprofit Preventive Medicine Research Institute.

Dr. Ornish’s pioneering research was the first to prove that lifestyle changes may stop or even reverse the progression of heart disease and early-stage prostate cancer and even change gene expression, “turning on” disease-preventing genes and “turning off” genes that promote cancer, heart disease and premature aging. Recently, Medicare agreed to provide coverage for his program, the first time that Medicare has covered an integrative medicine program. He is the author of six bestselling books and was recently appointed by President Obama to the White House Advisory Group on Prevention, Health Promotion, and Integrative and Public Health. He is a member of the boards of directors of the San Francisco Food Bank and the J. Craig Venter Institute. The Ornish diet was rated #1 for heart health by U.S. News & World Report in 2011 and 2012. He was selected as one of the “TIME 100” in integrative medicine, honored as “one of the 125 most extraordinary University of Texas alumni in the past 125 years,” recognized by LIFE magazine as “one of the 50 most influential members of his generation” and by Forbes magazine as “one of the 7 most powerful teachers in the world.”

The lexicon of his career is filled with words that include innovator, teacher and game-changer. And with this impressive career and his well-established ability to look at health and medicine in a new light, I thought it would be fun–and informative–to ask Dr. Ornish some questions about digital health.

Dean Ornish, MD


1. Digital health—many definitions and misconceptions. How would you describe this health movement in a sentence or two?

“Digital health” usually refers to the idea that having more quantitative information about your health from various devices will improve your health by changing your behaviors.  Information is important but it’s not usually sufficient to motivate most people to make meaningful and lasting changes in healthful behaviors.  If it were, no one would smoke cigarettes.

2. You’ve spoken of building deep and authentic connection among  patients as key element of your wellness programs.  Can digital health foster that connection or drive more “techno-disconnection”?

Both.  What matters most is the quality and meaning of the interaction, not whether it’s digital or analog (in person).  Study after study have shown that people who are lonely, depressed, and isolated are three to ten times more likely to get sick and die prematurely compared to those who have a strong sense of love and community.  Intimacy is healing.  In our support groups, we create a safe environment in which people can let down their emotional defenses and communicate openly and authentically about what’s really going on in their lives without fear they’ll be rejected, abandoned, or betrayed.  The quality and meaning of this sense of community is often life-transforming.  It can be done digitally, but it’s more effective in person.  A digital hug is not quite as fulfilling, but it’s much better than being alone and feeling lonely.

3. How can we connect clinical validation to the current pop culture trends of “fitness gadgets”?

Awareness is the first step in healing.  In that context, information can raise awareness, but it’s only the first step.

 4. Can digital health help link mind and body wellness?

Yes. Nicholas Christakis’ research found that if your friends are obese, your risk of obesity is 45% higher. If your friends’ friends are obese, your risk of obesity is 25% higher. If your friends’ friends’ friends are obese, your risk is 10% higher—even if you’ve never met them. That’s how interconnected we are. Their study also showed that social distance is more important than geographic distance. Long distance is the next best thing to being there (and in some families, even better…).

5. Are there any particular areas of medicine and wellness that might best fit in the context of digital health (diet, exercise, compliance, etc.)?

They all do.

6. There is much talk on the empowerment of the individual and the “democratization of data”.  From your perspective are patients becoming more engaged and involved in their care?

Patients are becoming more empowered in all areas of life, not just with their health care.  Having access to one’s clinical data can be useful, but even more empowering is access to tools and programs that enable people to use the experience of suffering as a catalyst and doorway for transforming their lives for the better.  That’s what our lifestyle program provides.

 7. Is digital health “sticking” in the medical community?  Or are advances being driven more by patients?

Electronic medical records are finally being embraced, in part due to financial incentives.  Also, telemedicine is about to take off, as it allows both health care professionals and patients to leverage their time and resources more efficiently and effectively.  But most doctors are not prescribing digital health devices for their patients.  Not yet.

 8. Do you personally use any devices?  Any success (or failure) stories?

I weigh myself every day, and I work out regularly using weight machines and a treadmill desk.  I feel overloaded by information much of the day, so I haven’t found devices such as FitBit, Nike Plus, and others to be useful.  These days, I find wisdom to be a more precious commodity than information.

 9. What are some of the exciting areas of digital health that you see on the horizon?

The capacity for intimacy using digital platforms is virtually unlimited, but, so far, we’ve only scratched the surface of what’s possible.  It’s a testimony to how primal our need is for love and intimacy that even the rather superficial intimacy of Facebook (or, before that, the chat rooms in AOL, or the lounges in Starbucks) created multi-billion-dollar businesses.

My wife, Anne, is a multidimensional genius who is developing ways of creating intimate and meaningful relationships using the interface of digital technologies and real-world healing environments.  She also designed our web site (www.ornish.com) and created and appears in the guided meditations there; Anne has a unique gift of making everyone and everything around her beautiful.

 10. Medicare is now covering Dr. Dean Ornish’s Program for Reversing Heart Disease as a branded program–a landmark event–and you recently formed a partnership with Healthways to train health care professionals, hospitals, and clinics nationwide.  Why now?

We’re creating a new paradigm of health care—Lifestyle Medicine—instead of sick care, based on lifestyle changes as treatment, not just as prevention. Lifestyle changes often work better than drugs and surgery at a fraction of the cost—and the only side-effects are good ones. Like an electric car or an iPhone, this is a disruptive innovation. After 37 years of doing work in this area, this is the right idea at the right time.

The limitations of high-tech medicine are becoming clearer—e.g., angioplasty, stents, and bypass surgery don’t prolong life or prevent heart attacks in stable patients; only one out of 49 men treated for prostate cancer benefits from the treatment, and the other 48 often become impotent, incontinent or both; and drug treatments of type 2 diabetes don’t work nearly as well as lifestyle changes in preventing the horrible complications.

At the same time, the power of comprehensive lifestyle changes is becoming more well-documented.  In our studies, we proved, for the first time, that intensive lifestyle changes can reverse the progression of coronary heart disease and slow, stop, or reverse the progression of early-stage prostate cancer.  Also, we found that changing your lifestyle changes your genes—turning on hundreds of good genes that protect you while downregulating hundreds of genes that promote heart disease, cancer, and other chronic diseases.  Our most recent research found that these lifestyle changes may begin to reverse aging at a cellular level by lengthening our telomeres, the ends of our chromosomes that control how long we live.

Finally, Obamacare turns economic incentives on their ear, so it becomes economically sustainable for physicians to offer training in comprehensive lifestyle changes to their patients, especially now that CMS is providing Medicare reimbursement and insurance companies such as WellPoint are also doing so.  Ben Leedle, CEO of Healthways, is a visionary leader who has the experience, resources, and infrastructure for us to quickly scale our program to those who most need it.  Recently, we trained UCLA, The Cleveland Clinic, and the Beth Israel Medical Center in New York in our program, and many more are on the way.

 

Machines put half of US work at risk

Great tip from Michael Griffith on the back of last night’s terrific dinner conversation at the Nicholas Gruen-organised feast at Hellenic Republic…

http://www.bloomberg.com/news/2014-03-12/your-job-taught-to-machines-puts-half-u-s-work-at-risk.html

Paper (PDF): The_Future_of_Employment

Your Job Taught to Machines Puts Half U.S. Work at Risk

By Aki Ito  Mar 12, 2014 3:01 PM ET

Who needs an army of lawyers when you have a computer?

When Minneapolis attorney William Greene faced the task of combing through 1.3 million electronic documents in a recent case, he turned to a so-called smart computer program. Three associates selected relevant documents from a smaller sample, “teaching” their reasoning to the computer. The software’s algorithms then sorted the remaining material by importance.

“We were able to get the information we needed after reviewing only 2.3 percent of the documents,” said Greene, a Minneapolis-based partner at law firm Stinson Leonard Street LLP.


Artificial intelligence has arrived in the American workplace, spawning tools that replicate human judgments that were too complicated and subtle to distill into instructions for a computer. Algorithms that “learn” from past examples relieve engineers of the need to write out every command.

The advances, coupled with mobile robots wired with this intelligence, make it likely that occupations employing almost half of today’s U.S. workers, ranging from loan officers to cab drivers and real estate agents, will become possible to automate in the next decade or two, according to a study done at the University of Oxford in the U.K.


“These transitions have happened before,” said Carl Benedikt Frey, co-author of the study and a research fellow at the Oxford Martin Programme on the Impacts of Future Technology. “What’s different this time is that technological change is happening even faster, and it may affect a greater variety of jobs.”

Profound Imprint

It’s a transition on the heels of an information-technology revolution that’s already left a profound imprint on employment across the globe. For both physical and mental labor, computers and robots replaced tasks that could be specified in step-by-step instructions — jobs that involved routine responsibilities that were fully understood.

That eliminated work for typists, travel agents and a whole array of middle-class earners over a single generation.

Yet even increasingly powerful computers faced a mammoth obstacle: they could execute only what they’re explicitly told. It was a nightmare for engineers trying to anticipate every command necessary to get software to operate vehicles or accurately recognize speech. That kept many jobs in the exclusive province of human labor — until recently.

Oxford’s Frey is convinced of the broader reach of technology now because of advances in machine learning, a branch of artificial intelligence that has software “learn” how to make decisions by detecting patterns in the decisions humans have made.


702 Occupations

The approach has powered leapfrog improvements in making self-driving cars and voice search a reality in the past few years. To estimate the impact that will have on 702 U.S. occupations, Frey and colleague Michael Osborne applied some of their own machine learning.

They first looked at detailed descriptions for 70 of those jobs and classified them as either possible or impossible to computerize. Frey and Osborne then fed that data to an algorithm that analyzed what kind of jobs lend themselves to automation and predicted probabilities for the remaining 632 professions.

The higher that percentage, the sooner computers and robots will be capable of stepping in for human workers. Occupations that employed about 47 percent of Americans in 2010 scored high enough to rank in the risky category, meaning they could be possible to automate “perhaps over the next decade or two,” their analysis, released in September, showed.
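As a rough stand-in for the workflow described above, here is a sketch: hand-label a small set of occupations, train a probabilistic classifier on their features, then score the rest. (Frey and Osborne used a Gaussian process classifier on O*NET-derived features; the logistic regression and random features here are just a simpler, illustrative proxy.)

```python
# Illustrative proxy for the label-a-few, score-the-rest approach described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(702, 9))                   # stand-in for per-occupation skill scores
labeled_idx = rng.choice(702, size=70, replace=False)  # the 70 hand-classified occupations
labels = (features[labeled_idx, 0] > 0).astype(int)    # toy hand labels: 1 = automatable

model = LogisticRegression().fit(features[labeled_idx], labels)
automation_probability = model.predict_proba(features)[:, 1]   # scores for all 702 occupations
print(automation_probability[:5])
```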

Safe Havens

“My initial reaction was, wow, can this really be accurate?” said Frey, who’s a Ph.D. economist. “Some of these occupations that used to be safe havens for human labor are disappearing one by one.”

Loan officers are among the most susceptible professions, at a 98 percent probability, according to Frey’s estimates. Inroads are already being made by Daric Inc., an online peer-to-peer lender partially funded by former Wells Fargo & Co. Chairman Richard Kovacevich. Begun in November, it doesn’t employ a single loan officer. It probably never will.

The startup’s weapon: an algorithm that not only learned what kind of person made for a safe borrower in the past, but is also constantly updating its understanding of who is creditworthy as more customers repay or default on their debt.

It’s this computerized “experience,” not a loan officer or a committee, that calls the shots, dictating which small businesses and individuals get financing and at what interest rate. It doesn’t need teams of analysts devising hypotheses and running calculations because the software does that on massive streams of data on its own.

Lower Rates

The result: An interest rate that’s typically 8.8 percentage points lower than from a credit card, according to Daric. “The algorithm is the loan officer,” said Greg Ryan, the 29-year-old chief executive officer of the Redwood City, California, company that consists of him and five programmers. “We don’t have overhead, and that means we can pass the savings on to our customers.”

Similar technology is transforming what is often the most expensive part of litigation, during which attorneys pore over e-mails, spreadsheets, social media posts and other records to build their arguments.

Each lawsuit was too nuanced for a standard set of sorting rules, and the string of keywords lawyers suggested before every case still missed too many smoking guns. The reading got so costly that many law firms farmed out the initial sorting to lower-paid contractors.

Training Software

The key to automating some of this was the old adage to show, not tell — to have trained attorneys illustrate to the software the kind of documents that make for gold. Programs developed by companies such as San Francisco-based Recommind Inc. then run massive statistics to predict which files expensive lawyers shouldn’t waste their time reading. It took Greene’s team of lawyers 600 hours to get through the 1.3 million documents with the help of Recommind’s software. That task, assuming a speed of 100 documents per hour, could take 13,000 hours if humans had to read all of them.
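This workflow (often called predictive coding) is easy to sketch, though the sketch below is not Recommind’s actual system: attorneys label a sample, a text classifier learns from it, and the remaining documents are ranked by predicted relevance so the most promising ones are read first.

```python
# Minimal predictive-coding sketch with made-up documents and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = ["merger side letter with undisclosed terms", "lunch schedule for the week"]
labels = [1, 0]                                   # 1 = relevant, 0 = not relevant
unreviewed_docs = ["draft side letter, do not circulate", "parking memo"]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(labeled_docs), labels)

scores = classifier.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
ranked = sorted(zip(scores, unreviewed_docs), reverse=True)   # review the top-scoring files first
print(ranked)
```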

“It doesn’t mean you need zero people, but it’s fewer people than you used to need,” said Daniel Martin Katz, a professor at Michigan State University’s College of Law in East Lansing who teaches legal analytics. “It’s definitely a transformation for getting people that first job while they’re trying to gain additional skills as lawyers.”

Robot Transporters

Smart software is transforming the world of manual labor as well, propelling improvements in autonomous cars that make it likely machines can replace taxi drivers and heavy truck drivers in the next two decades, according to Frey’s study.

One application already here: Aethon Inc.’s self-navigating TUG robots, which now transport soiled linens, drugs and meals in more than 140 hospitals, predominantly in the U.S. When Pittsburgh-based Aethon first installs its robots in new facilities, humans walk the machines around. It would have been impossible to have engineers pre-program all the necessary steps, according to Chief Executive Officer Aldo Zini.

“Every building we encounter is different,” said Zini. “It’s an infinite number” of potential contingencies and “you could never ahead of time try to program everything in. That would be a massive effort. We had to be able to adapt and learn as we go.”

Human-level Cognition

To be sure, employers won’t necessarily replace their staff with computers just because it becomes technically feasible to do so, Frey said. It could remain cheaper for some time to employ low-wage workers than invest in expensive robots. Consumers may prefer interacting with people than with self-service kiosks, while government regulators could choose to require human supervision of high-stakes decisions.

Even more, recent advances still don’t mean computers are nearing human-level cognition that would enable them to replicate most jobs. That’s at least “many decades” away, according to Andrew Ng, director of the Stanford Artificial Intelligence Laboratory near Palo Alto, California.

Machine-learning programs are best at specific routines with lots of data to train on and whose answers can be gleaned from the past. Try getting a computer to do something that’s unlike anything it’s seen before, and it just can’t improvise. Neither can machines come up with novel and creative solutions or learn from a couple of examples the way people can, said Ng.

Employment Impact

“This stuff works best on fairly structured problems,” said Frank Levy, a professor emeritus at the Massachusetts Institute of Technology in Cambridge who has extensively researched technology’s impact on employment. “Where there’s more flexibility needed and you don’t have all the information in advance, it’s a problem.”

That means the positions of Greene and other senior attorneys, whose responsibilities range from synthesizing persuasive narratives to earning the trust of their clients, won’t disappear for some time. Less certain are prospects for those specializing in lower-paid legal work like document reading, or in jobs that involve other relatively repetitive tasks.

As more of the world gets digitized and the cost to store and process that information continues to decline, artificial intelligence will become even more pervasive in everyday life, says Stanford’s Ng.

“There will always be work for people who can synthesize information, think critically, and be flexible in how they act in different situations,” said Ng, also co-founder of online education provider Coursera Inc. Still, he said, “the jobs of yesterday won’t be the same as the jobs of tomorrow.”

Workers will likely need to find vocations involving more cognitively complex tasks that machines can’t touch. Those positions also typically require more schooling, said Frey. “It’s a race between technology and education.”


Anne Wojcicki lays out 23andMe’s vision…

 

http://www.engadget.com/2014/03/09/future-of-preventative-medicine/

Anne Wojcicki and her genetic sequencing company 23andMe are locked in a battle with the FDA. Even though it can’t report results to customers right now, Wojcicki isn’t letting herself get bogged down in the present. At SXSW 2014 she laid out her vision of the future of preventative medicine — one where affordable genome sequencing comes together with “big data.” In addition to simply harvesting your genetic code, the company is doing research into how particular genes affect your susceptibility to disease or your reaction to treatments. And 23andMe isn’t keeping this information locked down. It has been building APIs that allow it to share the results of its research as well as the results of your genetic tests, should you wish to.

It’s when that data is combined with other information, say data harvested from a fitness tracker, and put in the hands of engineers and doctors, that things get really interesting. In the future she hopes that you’ll see companies putting the same effort into identifying and addressing health risks as they do for tracking your shopping habits. Target famously was able to work out that a woman was pregnant before she told her father, based purely on her purchase history. One day that same sort of predictive power could be harnessed to prevent diabetes or lessen a risk for a heart attack. Whether or not that future is five, 10 or 15 years off is unclear. But if Wojcicki has her way, you’ll be able to pull up health and lifestyle decisions recommended for you with the same ease that you pull up suggested titles on Netflix.

A couple of terrific safety and quality presentations

 

Rene Amalberti to a Geneva Quality Conference:

b13-rene-amalberti

http://www.isqua.org/docs/geneva-presentations/b13-rene-amalberti.pdf?sfvrsn=2

 

Somewhat random, but 80 slides, often good

Clapper_ReliabilitySlides

http://net.acpe.org/interact/highReliability/References/powerpoints/Clapper_ReliabilitySlides.pdf

Big data in healthcare

A decent sweep through the available technologies and techniques with practical examples of their applications.


Some healthcare practitioners smirk when you tell them that you used some alternative medication such as homeopathy or naturopathy to cure some illness. However, in the longer run it sometimes really is a much better solution, even if it takes longer, because it encourages and enables the body to fight the disease naturally, and in the process build up the necessary long-term defence mechanisms. Likewise, some IT practitioners question it when you don’t use the “mainstream” technologies… So, in this post, I cover the “alternative” big data technologies. I explore the different big data datatypes and the NoSQL databases that cater for them. I illustrate the types of applications and analyses that they are suitable for, using healthcare examples.

 

Big data in healthcare

Healthcare organisations have become very interested in big data, no doubt fired up by the hype around Hadoop and the ongoing promises that big data really adds big value.

However, big data really means different things to different people. For example, for a clinical researcher it is unstructured text on a prescription, for a radiologist it is the image of an x-ray, for an insurer it may be the network of geographical coordinates of the hospitals they have agreements with, and for a doctor it may refer to the fine print on the schedule of some newly released drug. For the CMO of a large hospital group, it may even constitute the commentary that patients are tweeting or posting on Facebook about their experiences in the group’s various hospitals. So, big data is a very generic term for a wide variety of data, including unstructured text, audio, images, geospatial data and other complex data formats, which previously were not analysed or even processed.

There is no doubt that big data can add value in the healthcare field. In fact, it can add a lot of value, partly because of the different types of big data that are available in healthcare. However, for big data to contribute significant value, we need to be able to apply analytics to it in order to derive new and meaningful insights. And in order to apply those analytics, the big data must be in a processable and analysable format.

Hadoop

Enter yellow elephant, stage left. Hadoop, in particular, is touted as the ultimate big data storage platform, with very efficient parallelised processing through the MapReduce distributed “divide and conquer” programming model. However, in many cases, it is very cumbersome to try and store a particular healthcare dataset in Hadoop and try and get to analytical insights using MapReduce. So even though Hadoop is an efficient storage medium for very large data sets, it is not necessarily the most useful storage structure to use when applying complex analytical algorithms to healthcare data. Quick cameo appearance. Exit yellow elephant, stage right.

There are other “alternative” storage technologies available for big data as well – namely the so-called NoSQL (not only SQL) databases. These specialised databases each support a specialised data structure, and are used to store and analyse data that fits that particular data structure. For specific applications, these data structures are therefore more appropriate to store, process and extract insights from data that suit that storage structure.

Unstructured text

A very large portion of big data is unstructured text, and this definitely applies to healthcare too. Even audio eventually becomes transformed to unstructured text. The NoSQL document databases are very good for storing, processing and analysing documents consisting of unstructured text of varying complexity, typically contained in XML, JSON or even Microsoft Word or Adobe format files. Examples of document databases are Apache CouchDB and MongoDB. The document databases are good for storing and analysing prescriptions, drug schedules, patient records, and the contracts written up between healthcare insurers and providers.

On textual data you can perform lexical analytics such as word frequency distributions, co-occurrence analysis (to find the number of occurrences of particular words in a sentence, paragraph or even a document), finding sentences or paragraphs with particular words within a given distance of each other, and other text analytics operations such as link and association analysis. The overarching goal is, essentially, to turn unstructured text into structured data by applying natural language processing (NLP) and analytical methods.

For example, if a co-occurrence analysis found that BRCA1 and breast cancer regularly occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Nowadays co-occurrence in text is often used as a simple baseline when evaluating more sophisticated systems.
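Here is a minimal sketch of sentence-level co-occurrence counting; a real pipeline would add proper tokenisation, entity normalisation and the disambiguation steps discussed below.

```python
# Count how often pairs of terms appear in the same sentence (toy example).
import re
from itertools import combinations
from collections import Counter

text = ("Mutations in BRCA1 raise the risk of breast cancer. "
        "BRCA1 screening is offered to high-risk families. "
        "Breast cancer outcomes vary widely.")

terms = ["brca1", "breast cancer"]
cooccurrence = Counter()

for sentence in re.split(r"[.!?]\s*", text.lower()):
    present = [t for t in terms if t in sentence]
    for pair in combinations(sorted(present), 2):
        cooccurrence[pair] += 1

print(cooccurrence)  # Counter({('brca1', 'breast cancer'): 1})
```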

Rule-based analyses make use of some a priori information, such as language structure, language rules, specific knowledge about how biologically relevant facts are stated in the biomedical literature, the kinds of relationships or variant forms that they can have with one another, or subsets or combinations of these. Of course the accuracy of a rule-based system depends on the quality of the rules that it operates on.

Statistical or machine-learning-based systems operate by building classifiers, from labelling parts of speech to choosing syntactic parse trees to classifying full sentences or documents. These are very useful for turning unstructured text into an analysable dataset. However, such systems normally require a substantial amount of already labelled training data, which is often time-consuming to create or expensive to acquire.
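
A minimal sketch of such a statistical classifier using scikit-learn is shown below; the handful of labelled sentences is a toy assumption, standing in for the substantial labelled training set a real system would need.

```python
# Sketch: tiny text classifier (TF-IDF features + logistic regression).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "patient denies chest pain or dyspnoea",
    "no evidence of fracture on imaging",
    "severe chest pain radiating to the left arm",
    "fracture of the distal radius confirmed",
]
labels = ["negated", "negated", "affirmed", "affirmed"]  # toy labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Classify a new, unseen sentence.
print(model.predict(["patient denies any history of fracture"]))
```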

However, it’s important to keep in mind that much of the textual data requires disambiguation before you can process, make sense of, and apply analytics to it. The existence of ambiguity, such as multiple possible mappings between a piece of text and its meanings or categories, makes it very difficult to interpret and analyse textual data accurately. Acronym, slang and shorthand resolution, interpretation, standardisation, homographic resolution, taxonomies and ontologies, textual proximity, cluster analysis and various other inferences and translations all form part of textual disambiguation. Establishing and capturing context is also crucial for unstructured text analytics – the same text can have radically different meanings and interpretations, depending on the context in which it is used.

As an example of the ambiguities found in healthcare, “fat” is the official symbol of Entrez Gene entry 2195 and an alternate symbol for Entrez Gene entry 948. The distinction is not trivial – the first is associated with tumour suppression and with bipolar disorder, while the second is associated with insulin resistance and quite a few other unrelated phenotypes. If you get the interpretation wrong, you can miss relevant information or extract the wrong information entirely.

Graph structures

An interesting class of big data is graph structures, where entities are related to each other in complex relationships like trees, networks or graphs. This type of data is typically neither large nor unstructured, but graph structures of undetermined depth are very complex to store in relational or key-value pair structures, and even more complex to process using standard SQL. For this reason, this type of data is better stored in a graph-oriented NoSQL database such as Neo4J, InfoGrid, InfiniteGraph, uRiKa, OrientDB or FlockDB.

Examples of graph structures include the networks of people who know each other, as you find on LinkedIn or Facebook. In healthcare, a similar example is the network of providers linked to a group of practices or a hospital group. Referral patterns can be analysed to determine how specific doctors and hospitals work together to deliver improved healthcare outcomes. Graph-based analyses of referral patterns can also point out anomalous or fraudulent behaviour, such as whether a particular doctor is a conservative or a liberal prescriber, and whether they refer patients to a hospital that charges more than double what the one just across the street does.
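
As a hedged illustration, the following sketch models a small referral network as a weighted directed graph with networkx and flags doctors who send most of their referrals to an assumed “expensive” hospital; the doctor and hospital names, referral counts and threshold are all made-up assumptions.

```python
# Sketch: referral patterns as a directed, weighted graph.
import networkx as nx

G = nx.DiGraph()
referrals = [
    ("dr_smith", "hospital_a", 40),
    ("dr_smith", "hospital_b", 2),
    ("dr_jones", "hospital_a", 5),
    ("dr_jones", "hospital_b", 35),
]
for doctor, hospital, count in referrals:
    G.add_edge(doctor, hospital, weight=count)

expensive = {"hospital_a"}  # assume hospital_a charges double hospital_b

# Flag doctors who send most of their referrals to the expensive hospital.
for doctor in ("dr_smith", "dr_jones"):
    total = sum(G[doctor][h]["weight"] for h in G.successors(doctor))
    to_expensive = sum(
        G[doctor][h]["weight"] for h in G.successors(doctor) if h in expensive
    )
    if to_expensive / total > 0.8:
        print(f"{doctor}: {to_expensive}/{total} referrals go to expensive hospitals")
```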

Another useful graph-based analysis is tracking the spread of a highly contagious disease through groups of people who have been in contact with each other. An infectious disease clinic, for instance, should strive to capture a larger share of the infection caseload across such a contact network while keeping the actual infection rate as low as possible.

A more deep-dive application of graph-based analytics is to study network models of genetic inheritance.

Geospatial data

Like graph-structured data, geospatial data itself is fairly structured – locations can simply be represented as coordinate pairs. However, when analysing and optimising ambulance routes of different lengths, for example, the data is best stored and processed as a graph structure.
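
As a small illustration, the sketch below models a few road segments as a weighted graph with networkx and finds the fastest ambulance route; the intersections and travel times are made-up assumptions, standing in for real mapping data.

```python
# Sketch: fastest route over a road network modelled as a weighted graph.
import networkx as nx

roads = nx.Graph()
roads.add_weighted_edges_from([
    ("ambulance_base", "junction_1", 4),  # edge weights are minutes of travel
    ("ambulance_base", "junction_2", 7),
    ("junction_1", "hospital", 9),
    ("junction_2", "hospital", 3),
])

route = nx.shortest_path(roads, "ambulance_base", "hospital", weight="weight")
minutes = nx.shortest_path_length(roads, "ambulance_base", "hospital", weight="weight")
print(route, minutes)  # ['ambulance_base', 'junction_2', 'hospital'] 10
```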

Geospatial analyses are also useful for hospital and practice location planning. For example, the Epworth HealthCare group teamed up with geospatial group MapData Services to conduct an extensive analysis of demographics and medical services across Victoria. The analysis involved sourcing a range of data, including Australian Bureau of Statistics figures on population growth and demographics, details of currently available health services, and the geographical distribution of particular types of conditions. The conclusion was that the ideal location and services mix for a new $447m private teaching hospital lay in the much smaller city of Geelong, rather than in the much larger but services-rich city of Melbourne.

Sensor data

Sensor data is also normally quite structured, with an aspect being measured, a measurement value and a unit of measure. The complexity comes in because, for each patient or each blood sample test, you often have a variable record structure with widely different aspects being measured and recorded. Some sources of sensor data also produce large volumes of data at high rates. Sensor data is often best stored in key-value databases, such as Riak, DynamoDB, Redis, Voldemort and, sure, Hadoop.
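
A hedged sketch of writing such variable-structure readings into a key-value store is shown below, using Redis via redis-py and serialising each reading as JSON; the key scheme, field names and local Redis server are assumptions for illustration.

```python
# Sketch: variable-structure sensor readings in a key-value store (Redis).
import json
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local Redis server

reading = {
    "patient_id": "P-0042",
    "metric": "heart_rate",
    "value": 72,
    "unit": "bpm",
    "timestamp": "2014-03-06T12:00:00Z",
}

# One key per reading; readings with completely different fields can share the scheme.
key = f"sensor:{reading['patient_id']}:{reading['timestamp']}"
r.set(key, json.dumps(reading))

stored = json.loads(r.get(key))
print(stored["metric"], stored["value"], stored["unit"])
```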

Biosensors are now used to enable better and more efficient patient care across a wide range of healthcare operations, including telemedicine, telehealth, and mobile health. Typical analyses compare related sets of measurements for cause and effect, reaction predictions, antagonistic interactions, dependencies and correlations.

For example, biometric data, which includes measurements such as diet, sleep, weight, exercise and blood sugar levels, can be collected from mobile apps and sensors. Outcome-oriented analytics applied to this biometric data, when combined with other healthcare data, can help patients with controllable conditions improve their health by giving them insight into the behaviours that increase or decrease their risk of disease. Data-savvy healthcare organisations can similarly use analytics to understand and measure wellness, apply patient and disease segmentation, and track health setbacks and improvements. Predictive analytics can be used to inform and drive multichannel patient interaction that helps shape lifestyle choices, and so avoid poor health and costly medical care.
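
As a minimal illustration of the kind of dependency and correlation analysis mentioned above, the sketch below loads a few biometric measurements into pandas and computes their pairwise correlations; the numbers are toy values assumed purely to show the shape of the analysis.

```python
# Sketch: pairwise correlations across biometric measurements.
import pandas as pd

biometrics = pd.DataFrame({
    "sleep_hours":   [5.5, 7.0, 8.0, 6.0, 7.5],
    "exercise_mins": [10, 30, 45, 15, 40],
    "blood_sugar":   [7.8, 6.1, 5.4, 7.2, 5.9],  # mmol/L, assumed fasting values
})

# Correlations hint at which behaviours move together with outcomes.
print(biometrics.corr().round(2))
```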

Concluding remarks

Although there are merits in storing and processing complex big data, we need to ensure that the type of analytical processing possible on those big data sets leads to sufficiently valuable new insights. The way in which big data is structured often has implications for the type of analytics that can be applied to it. Often, too, if the analytics are not applied to big data that has been properly integrated with existing structured data, the results are not as meaningful and valuable as expected.

We need to be cognisant of the fact that there are many storage and analytics technologies available. We need to choose a storage structure that matches the structure of the data, and thereby ensure that the right analytics can be applied efficiently and correctly, which in turn will deliver new and valuable insights.

Australian Medicare Fraud

The quoted estimate seems a bit under…

http://www.abc.net.au/news/2014-03-06/australians-defrauding-medicare-hundreds-of-thousands-of-dollars/5302584


Australian Medicare fraud revealed in new figures, 1,116 tip-offs so far this financial year

By medical reporter Sophie Scott and Alison Branley

Updated Fri 7 Mar 2014, 1:23am AEDT

New figures show Medicare is being defrauded of hundreds of thousands of dollars each year.

Figures released to the ABC show the Federal Government has received more than 1,000 tip-offs of potential Medicare frauds to date this financial year.

It comes as debate continues over a proposal put to the Commission of Audit to charge a $6 co-payment for visits to the doctor, which would reduce costs to the health system.

The Department of Human Services says its hotline has received 1,116 Medicare-related tip-offs since July 1, 2013.

Officers have investigated 275 cases, which has translated into 34 cases submitted to the Commonwealth Director of Public Prosecutions and 12 convictions.

The value of those 12 cases adds up to an estimated $474,000, with fraudsters ripping off an average of almost $40,000 each.

Department figures suggest most of the frauds come from outside the doctor’s office.

Ten of the 12 prosecutions this year were members of the public. One involved a medical practice staff member and one a practice owner.

“The Department of Human Services takes all allegations of fraud seriously and seeks to investigate where sufficient information is provided to do so,” a spokeswoman said.

The annual review of doctors’ use of Medicare, the Professional Services Review, showed at least 19 doctors were required to repay more than $1 million between them in 2012-13.

One doctor billed Medicare for seeing more than 500 patients in a day, and more than 200 patients on several other days.

Other cases uncovered by the ABC include:

  • Former police officer Matthew James Bunning has been charged with 146 Medicare frauds between 2011 and 2013. Investigators allege the 46-year-old removed Medicare slips from rubbish bins behind Medicare offices around Melbourne to produce forged receipts and illegally claimed more than $98,000 from the Government.
  • In January last year Korean student Myung Ho Choi was sentenced in a NSW district court to five years in prison for a series of fraud and identity theft charges that included receiving at least five paper boxes filled with blank Medicare cards intended for use in identity fraud.
  • In August last year NSW man Bin Li was sentenced in district court to seven years in prison for charges that included possessing almost 400 blank cards, including high quality Medicare cards, and machines for embossing cards.

Nilay Patel, a former US-based certified specialist in healthcare compliance and law tutor at Swinburne University of Technology, says the fraud figures are the “tip of the iceberg”.

“There is a lot more that we do not know and that really comes from both camps from the patients and the medical service providers,” he said.

He says Australia is falling behind the United States at preventing, detecting and prosecuting healthcare frauds.

“The safeguards [in Australia] are quite inadequate, the detection is more reactive than proactive and whatever proactive mechanisms that are there I think they are woefully underdeveloped,” he said.

Relatively ‘smallish’ but unacceptable problem: Minister

Federal Government authorities say they do not think Medicare fraud is widespread.

Minister for Human Services Marise Payne says the number of Medicare frauds is low compared to the number of transactions.

“I think when you consider that we have 344 million Medicare transactions a year it is a relatively smallish [problem] but that doesn’t mean it’s acceptable,” she said.

“One person committing a fraud effectively against the Australian taxpayer is one person too many.”

Ms Payne says the department uses sophisticated data matching and analytics to pick up potential frauds as well as its tip-off hotline.

The merger of Medicare with Centrelink also allows the bureaucracies to better share information and leads.

“The work we have done in that area is paying dividends,” Ms Payne said.

“There is more to do. The use of analytical data and risk profiling is highly sophisticated in the Centrelink space and we want to make sure we achieve the same levels in the Medicare space.”

The Australian Federal Police says it does not routinely gather statistics on the number of fake or counterfeit Medicare cards.

However, a spokesman says detections of counterfeit Medicare cards are rare.

“Intelligence to date indicates that the majority of Medicare cards seized that are of sufficient quality, are used as a form of identity, not intentionally to defraud Medicare,” a spokesman said.

A Customs and Border Protection spokeswoman says blank or fraudulent Medicare cards are not controlled under the Customs regulations and it is unable to provide seizure statistics.

The federal Ombudsman says he has not conducted any review or investigations into Medicare but did contribute to a 2009 inquiry into compliance audits on benefits.

The Medicare complaints detailed in the Ombudsman’s annual report relate to customers disputing Medicare refunds, not frauds.

‘People are just looting the money’

Sydney man Tahir Abbas is sceptical about the Government’s claims that Medicare fraud is not widespread.

Mr Abbas detected at least 10 false bulk billing charges on his Medicare statement between November and January valued at almost $750.

He was not in the country when many of the charges were incurred.

The charges were from a western Sydney optometrist who told the ABC they were unable to explain the discrepancies.

They said while Mr Abbas was billed, they never received payment.

The owner told the ABC the system would not allow them to receive bulk billing payments for more than one check-up in a two-year period.

Mr Abbas said he believed his card had been misused by others for their own benefit.

“I was very disgusted to be honest,” he said.

“It’s all bulk-billed and they are charging the Government. But in a way the Government is charging us so we are paying from our pocket – it’s all taxpayers’ money.”

He has urged people to check their Medicare statements.

“How many times do we go and check our statements for Medicare particularly. Maybe with credit cards, bank details but not with Medicare.

“These people are just looting the money.”

Medicare has told Mr Abbas they are investigating.

High-tech Medicare cards needed?

Technology and crime analyst Nigel Phair from the University of Canberra says the Medicare card is an easy-to-clone, low-tech card that has been around for three decades.

While it carries a low points value at identity check points, it is a well-respected document.

 

“The Medicare card carries no technology which gives it additional factors for verification or identification of users,” he said.

“It’s just a mag stripe on the back, very similar to a credit card from the 1990s without any chip or pin technologies, which are well known to be the way of the future.”

He says Medicare is vulnerable to abuse because people’s data is stored in many places such as doctors’ surgeries and pharmacies.

“It’s very easy to sail under the radar if you’re a fraudulent user. And like all good frauds you keep the value of the transactions low but your volume high,” he said.

“Because all we do have is anecdotal evidence and no hard statistics, we really don’t know how bad this issue is.”

Ms Payne does not support upgrading the quality of Medicare cards.

“The advice I have is that that is not really a large source of fraud and inappropriate practices,” she said.


 

Topics: fraud-and-corporate-crime, health, health-administration, health-policy, government-and-politics, federal-government, law-crime-and-justice, australia

First posted Thu 6 Mar 2014, 12:00pm AEDT