Category Archives: rapid learning health systems

Artificial intelligence meets the C-suite

http://www.mckinsey.com/Insights/Strategy/Artificial_intelligence_meets_the_C-suite

Technology is getting smarter, faster. Are you? Experts including the authors of The Second Machine Age, Erik Brynjolfsson and Andrew McAfee, examine the impact that “thinking” machines may have on top-management roles.

September 2014

The exact moment when computers got better than people at human tasks arrived in 2011, according to data scientist Jeremy Howard, at an otherwise inconsequential machine-learning competition in Germany. Contest participants were asked to design an algorithm that could recognize street signs, many of which were a bit blurry or dark. Humans correctly identified them 98.5 percent of the time. At 99.4 percent, the winning algorithm did even better.

Or maybe the moment came earlier that year, when IBM’s Watson computer defeated the two leading human Jeopardy! players on the planet. Whenever or wherever it was, it’s increasingly clear that the comparative advantage of humans over software has been steadily eroding. Machines and their learning-based algorithms have leapt forward in pattern-matching ability and in the nuances of interpreting and communicating complex information. The long-standing debate about computers as complements or substitutes for human labor has been renewed.

The matter is more than academic. Many of the jobs that had once seemed the sole province of humans—including those of pathologists, petroleum geologists, and law clerks—are now being performed by computers.

And so it must be asked: can software substitute for the responsibilities of senior managers in their roles at the top of today’s biggest corporations? In some activities, particularly when it comes to finding answers to problems, software already surpasses even the best managers. Knowing whether to assert your own expertise or to step out of the way is fast becoming a critical executive skill.

Managing in the era of brilliant machines: An interview

In this interview with McKinsey’s Rik Kirkland, Erik Brynjolfsson and Andrew McAfee explain the organizational challenge posed by the Second Machine Age.

Yet senior managers are far from obsolete. As machine learning progresses at a rapid pace, top executives will be called on to create the innovative new organizational forms needed to crowdsource the far-flung human talent that’s coming online around the globe. Those executives will have to emphasize their creative abilities, their leadership skills, and their strategic thinking.

To sort out the exponential advance of deep-learning algorithms and what it means for managerial science, McKinsey’s Rik Kirkland conducted a series of interviews in January at the World Economic Forum’s annual meeting in Davos. Among those interviewed were two leading business academics—Erik Brynjolfsson and Andrew McAfee, coauthors of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W. W. Norton, January 2014)—and two leading entrepreneurs: Anthony Goldbloom, the founder and CEO of Kaggle (the San Francisco start-up that’s crowdsourcing predictive-analysis contests to help companies and researchers gain insights from big data); and data scientist Jeremy Howard. This edited transcript captures and combines highlights from those conversations.

The Second Machine Age

What is it and why does it matter?

Andrew McAfee: The Industrial Revolution was when humans overcame the limitations of our muscle power. We’re now in the early stages of doing the same thing to our mental capacity—infinitely multiplying it by virtue of digital technologies. There are two discontinuous changes that will stick in historians’ minds. The first is the development of artificial intelligence, and the kinds of things we’ve seen so far are the warm-up act for what’s to come. The second big deal is the global interconnection of the world’s population, billions of people who are not only becoming consumers but also joining the global pool of innovative talent.

Erik Brynjolfsson: The First Machine Age was about power systems and the ability to move large amounts of mass. The Second Machine Age is much more about automating and augmenting mental power and cognitive work. Humans were largely complements for the machines of the First Machine Age. In the Second Machine Age, it’s not so clear whether humans will be complements or machines will largely substitute for humans; we see examples of both. That potentially has some very different effects on employment, on incomes, on wages, and on the types of companies that are going to be successful.

Putting artificial intelligence to work: An interview with Anthony Goldbloom and Jeremy Howard

Machine-learning experts Anthony Goldbloom and Jeremy Howard tell McKinsey’s Rik Kirkland how smart machines will impact employment.

Jeremy Howard: Today, machine-learning algorithms are actually as good as or better than humans at many things that we think of as being uniquely human capabilities. People whose job is to take boxes of legal documents and figure out which ones are discoverable— that job is rapidly disappearing because computers are much faster and better than people at it.

In 2012, a team of four expert pathologists looked through thousands of breast-cancer screening images, and identified the areas of what’s called mitosis, the areas which were the most active parts of a tumor. It takes four pathologists to do that because any two only agree with each other 50 percent of the time. It’s that hard to look at these images; there’s so much complexity. So they then took this kind of consensus of experts and fed those breast-cancer images with those tags to a machine-learning algorithm. The algorithm came back with something that agreed with the pathologists 60 percent of the time, so it is more accurate at identifying the very thing that these pathologists were trained for years to do. And this machine-learning algorithm was built by people with no background in life sciences at all. These are total domain newbies.

Andrew McAfee: We thought we knew, after a few decades of experience with computers and information technology, the comparative advantages of human and digital labor. But just in the past few years, we have seen astonishing progress. A digital brain can now drive a car down a street and not hit anything or hurt anyone—that’s a high-stakes exercise in pattern matching involving lots of different kinds of data and a constantly changing environment.

Why now?

Computers have been around for more than 50 years. Why is machine learning suddenly so important?

Erik Brynjolfsson: It’s been said that the greatest failing of the human mind is the inability to understand the exponential function. Daniela Rus—the chair of the Computer Science and Artificial Intelligence Lab at MIT—thinks that, if anything, our projections about how rapidly machine learning will become mainstream are too pessimistic. It’ll happen even faster. And that’s the way it works with exponential trends: they’re slower than we expect, then they catch us off guard and soar ahead.

Andrew McAfee: There’s a passage from a Hemingway novel about a man going broke in two ways: “gradually and then suddenly.” And that characterizes the progress of digital technologies. It was really slow and gradual and then, boom—suddenly, it’s right now.

Jeremy Howard: The difference here is that each advance builds on the one before it. The data and the computational capability are increasing exponentially, and the more data and computational capability you give these deep-learning networks, the better the results become, because the results of previous machine-learning exercises can be fed back into the algorithms. That means each layer becomes a foundation for the next layer of machine learning, and the whole thing scales multiplicatively every year. There’s no reason to believe that has a limit.

Erik Brynjolfsson: With the foundational layers we now have in place, you can take a prior innovation and augment it to create something new. This is very different from the common idea that innovations get used up like low-hanging fruit. Now each innovation actually adds to our stock of building blocks and allows us to do new things.

One of my students, for example, built an app on Facebook. It took him about three weeks to build, and within a few months the app had reached 1.3 million users. He was able to do that with no particularly special skills and no company infrastructure, because he was building it on top of an existing platform, Facebook, which of course is built on the web, which is built on the Internet. Each of the prior innovations provided building blocks for new innovations. I think it’s no accident that so many of today’s innovators are younger than innovators were a generation ago; it’s so much easier to build on things that are preexisting.

Jeremy Howard: I think people are massively underestimating the impact, on both their organizations and on society, of the combination of data plus modern analytical techniques. The reason for that is very clear: these techniques are growing exponentially in capability, and the human brain just can’t conceive of that.

There is no organization that shouldn’t be thinking about leveraging these approaches, because either you do—in which case you’ll probably surpass the competition—or somebody else will. And by the time the competition has learned to leverage data really effectively, it’s probably going to be too late for you to try to catch up. Your competitors will be on the exponential path, and you’ll still be on that linear path.

Let me give you an example. Google announced last month that it had just completed mapping the exact location of every business, every household, and every street number in the entirety of France. You’d think it would have needed to send a team of 100 people out to each suburb and district to go around with a GPS and that the whole thing would take maybe a year, right? In fact, it took Google one hour.

Now, how did the company do that? Rather than programming a computer yourself to do something, with machine learning you give it some examples and it kind of figures out the rest. So Google took its street-view database—hundreds of millions of images—and had somebody manually go through a few hundred and circle the street numbers in them. Then Google fed that to a machine-learning algorithm and said, “You figure out what’s unique about those circled things, find them in the other 100 million images, and then read the numbers that you find.” That’s what took one hour. So when you switch from a traditional to a machine-learning way of doing things, you increase productivity and scalability by so many orders of magnitude that the nature of the challenges your organization faces totally changes.
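The workflow Howard describes has two phases: a human hand-labels a small sample, then a model trained on that sample labels the much larger remainder automatically. A minimal sketch of that idea, using made-up 2-D feature vectors and a simple nearest-centroid classifier as stand-ins for Google’s actual street-view data and models (which are not public):

```python
def train(labeled):
    """Compute one centroid per class from a small hand-labeled sample."""
    sums, counts = {}, {}
    for features, label in labeled:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Phase 1: a human labels a few examples by hand.
hand_labeled = [
    ((0.9, 0.1), "digit"), ((0.8, 0.2), "digit"),
    ((0.1, 0.9), "background"), ((0.2, 0.8), "background"),
]
model = train(hand_labeled)

# Phase 2: the model labels the (much larger) unlabeled remainder automatically.
unlabeled = [(0.85, 0.15), (0.15, 0.85), (0.7, 0.3)]
auto_labels = [predict(model, f) for f in unlabeled]
print(auto_labels)  # ['digit', 'background', 'digit']
```

At Google’s scale the features and model are vastly more sophisticated, but the economics are the same: the expensive human effort is confined to the few hundred hand-labeled examples in phase one, and everything after that is machine time.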

The senior-executive role

How will top managers go about their day-to-day jobs?

Andrew McAfee: The First Machine Age really led to the art and science and practice of management—to management as a discipline. As we expanded these big organizations, factories, and railways, we had to create organizations to oversee that very complicated infrastructure. We had to invent what management was.

In the Second Machine Age, there are going to be equally big changes to the art of running an organization.

I can’t think of a corner of the business world (or a discipline within it) that is immune to the astonishing technological progress we’re seeing. That clearly includes being at the top of a large global enterprise.

I don’t think this means that everything those leaders do right now becomes irrelevant. I’ve still never seen a piece of technology that could negotiate effectively. Or motivate and lead a team. Or figure out what’s going on in a rich social situation or what motivates people and how you get them to move in the direction you want.

These are human abilities. They’re going to stick around. But if the people currently running large enterprises think there’s nothing about the technology revolution that’s going to affect them, I think they would be naïve.

So the role of a senior manager in a deeply data-driven world is going to shift. I think the job is going to be to figure out, “Where do I actually add value and where should I get out of the way and go where the data take me?” That’s going to mean a very deep rethinking of the idea of the managerial “gut,” or intuition.

It’s striking how little data you need before you would want to switch over and start being data driven instead of intuition driven. Right now, there are a lot of leaders of organizations who say, “Of course I’m data driven. I take the data and I use that as an input to my final decision-making process.” But there’s a lot of research showing that, in general, this leads to a worse outcome than if you rely purely on the data. Now, there are a ton of wrinkles here. But on average, if you second-guess what the data tell you, you tend to have worse results. And it’s very painful—especially for experienced, successful people—to walk away quickly from the idea that there’s something inherently magical or unsurpassable about our particular intuition.

Jeremy Howard: Top executives get where they are because they are really, really good at what they do. And these executives trust the people around them because they are also good at what they do and because of their domain expertise. Unfortunately, this now saddles executives with a real difficulty, which is how to become data driven when your entire culture is built, by definition, on domain expertise. Everybody who is a domain expert, everybody who is running an organization or serves on a senior-executive team, really believes in their capability and for good reason—it got them there. But in a sense, you are suffering from survivor bias, right?

You got there because you’re successful, and you’re successful because you got there. You are going to underestimate, fundamentally, the importance of data. The only way to understand data is to look at these data-driven companies like Facebook and Netflix and Amazon and Google and say, “OK, you know, I can see that’s a different way of running an organization.” It is certainly not the case that domain expertise is suddenly redundant. But data expertise is at least as important and will become exponentially more important. So this is the trick. Data will tell you what’s really going on, whereas domain expertise will always bias you toward the status quo, and that makes it very hard to keep up with these disruptions.

Erik Brynjolfsson: Pablo Picasso once made a great observation. He said, “Computers are useless. They can only give you answers.” I think he was half right. It’s true they give you answers—but that’s not useless; that has some value. What he was stressing was the importance of being able to ask the right questions, and that skill is going to be very important going forward and will require not just technical skills but also some domain knowledge of what your customers are demanding, even if they don’t know it. This combination of technical skills and domain knowledge is the sweet spot going forward.

Anthony Goldbloom: Two pieces are required to be able to do a really good job in solving a machine-learning problem. The first is somebody who knows what problem to solve and can identify the data sets that might be useful in solving it. Once you get to that point, the best thing you can possibly do is to get rid of the domain expert who comes with preconceptions about what are the interesting correlations or relationships in the data and to bring in somebody who’s really good at drawing signals out of data.

The oil-and-gas industry, for instance, has incredibly rich data sources. As they’re drilling, a lot of their drill bits have sensors that follow the drill bit. And somewhere between every 2 and 15 inches, they’re collecting data on the rock that the drill bit is passing through. They also have seismic data, where they shoot sound waves down into the rock and, based on the time it takes for those sound waves to be captured by a recorder, they can get a sense for what’s under the earth. Now these are incredibly rich and complex data sets and, at the moment, they’ve been mostly manually interpreted. And when you manually interpret what comes off a sensor on a drill bit or a seismic survey, you miss a lot of the richness that a machine-learning algorithm can pick up.

Andrew McAfee: The better you get at doing lots of iterations and lots of experimentation—each perhaps pretty small, each perhaps pretty low-risk and incremental—the more it all adds up over time. But the pilot programs in big enterprises seem to be very precisely engineered never to fail—and to demonstrate the brilliance of the person who had the idea in the first place.

That makes for very shaky edifices, even though they’re designed to not fall apart. By contrast, when you look at what truly innovative companies are doing, they’re asking, “How do I falsify my hypothesis? How do I bang on this idea really hard and actually see if it’s any good?” When you look at a lot of the brilliant web companies, they do hundreds or thousands of experiments a day. It’s easy because they’ve got this test platform called the website. And they can do subtle changes and watch them add up over time.

So one of the implications of the manifested brilliance of the crowd applies to that ancient head-scratcher in economics: what the boundary of the firm should be. What should I be doing myself versus what should I be outsourcing? And, now, what should I be crowdsourcing?

Implications for talent and hiring

It’s important to make sure that the organization has the right skills.

Jeremy Howard: Here’s how Google does HR. It has a unit called the human performance analytics group, which takes data about the performance of all of its employees—what interview questions they were asked, where their offices were, how their part of the organization was structured, and so forth. Then it runs data analytics to figure out what interview methods work best and what career paths are the most successful.

Anthony Goldbloom: One huge limitation that we see with traditional Fortune 500 companies—and maybe this seems like a facile example, but I think it’s more profound than it seems at first glance—is that they have very rigid pay scales.

And they’re competing with Google, which is willing to pay $5 million a year to somebody who’s really great at building algorithms. The more rigid pay scales at traditional companies don’t allow them to do that, and that’s irrational because the return on investment on a $5 million, incredibly capable data scientist is huge. The traditional Fortune 500 companies are always saying they can’t hire anyone. Well, one reason is they’re not willing to pay what a great data scientist can be paid elsewhere. Not that it’s just about money; the best data scientists are also motivated by interesting problems and, probably most important, by the idea of working with other brilliant people.

Machine learning and computers aren’t terribly good at creative thinking, so the idea that the rewards for most jobs and people will be based on their ability to think creatively is probably right.

About the author

This edited roundtable is adapted from interviews conducted by Rik Kirkland, senior managing editor of McKinsey Publishing, who is based in McKinsey’s New York office.

MIT launches wellness-advancing technology program…

Potentially very interesting work…

http://www.rwjf.org/en/about-rwjf/newsroom/newsroom-content/2014/09/media-lab-to-launch-wellness-initiative-with–1-million-grant-fr.html

Media Lab to Launch Wellness Initiative with $1 Million Grant from the Robert Wood Johnson Foundation

New program, Advancing Wellness, combines academics with on-the-ground initiatives to prompt cultural shifts toward better health.

Princeton, N.J.—The MIT Media Lab this week launched a wellness initiative designed to spark innovation in the area of health and wellbeing, and to promote healthier workplace and lifestyle behaviors.

With support from the Robert Wood Johnson Foundation (RWJF), which is providing a $1 million, one-year grant, the new initiative will address the role of technology in shaping our health, and explore new approaches and solutions to wellbeing. The program is built around education and student mentoring; prototyping tools and technologies that support physical, mental, social, and emotional wellbeing; and community initiatives that will originate at the Lab, but be designed to scale.

The program begins with the fall course Tools for Wellbeing, followed by Health Change Lab in the spring. In addition to concept and technology development, these courses will feature seminars by noted experts who will address a wide range of topics related to wellness. These talks will be open to the public, and made available online. Speakers include such experts as Walter Willett, noted nutrition and clinical medicine researcher; Chuck Czeisler, physician and sleep expert; Ben Sawyer, game developer for health applications; Matthew Nock, expert in suicide prevention; Dinesh John, researcher on health sciences and workplace activity; Lisa Mosconi, neuroscientist studying the prevention of Alzheimer’s; and Martin Seligman, one of the founders of the field of positive psychology. More information about the courses, speakers, and presentation topics and dates can be found here.

The RWJF grant will also support five graduate-level Research Fellows from the Program in Media Arts and Sciences, who will be part of a year-long training program. The funding will enable each Fellow to design, build and deploy novel tools to promote wellbeing and health behavior change at the Lab in a living lab environment, and then at scale.

One of the significant ways that this program will impact Media Lab culture is in the review of all thesis proposals submitted by students in the Media Arts and Sciences program. The Media Lab faculty recently added a new requirement that all thesis proposals consider the impact of the proposed thesis work on human wellbeing.

Other Lab-wide aspects of the initiative include:

  • A monthly health challenge that will engage the entire Lab, with review and analysis of each month’s deployment to help inform the next month’s initiative.
  • A buddy system to pair students at the Lab with one another—to build an awareness of wellbeing as a social function, not just a personal one, and to draw on people’s inclination to solve the problems of others differently than they would solve their own.
  • A special event on October 23, 2014, when the creators of the X-Prize convene at MIT to present a new X-Prize for Wellbeing.

“Wellbeing is a very hard problem that has yet to be solved by psychologists, psychiatrists, neuroscientists, biologists or other experts in the scientific community,” said Rosalind Picard, professor of Media Arts and Sciences and one of the three principal investigators on the initiative. “It’s time to bring MIT ingenuity to the challenge.”

“RWJF is working to build a culture of health in the U.S., where all people have opportunities to make healthy choices and lead healthy lifestyles. Technology has long shaped the patterns of everyday life and it is these patterns—of how we work, eat, sleep, socialize, recreate and get from place to place—that largely determine our health,” said Stephen Downs, chief technology and information officer at RWJF. “We’re excited to see the Media Lab turn its creative talents and its significant influence to the challenge of developing technologies that will make these patterns of everyday life more healthy.”

The three principal investigators on the Advancing Wellness initiative are: Rosalind Picard, professor of Media Arts and Sciences; Pattie Maes, the Alex W. Dreyfoos Professor of Media Arts and Sciences; and Kevin Slavin, assistant professor.  PhD candidate Karthik Dinakar, Reid Hoffman Fellow at the Media Lab, will co-teach the two courses with the three principal investigators.  Susan Silbey, Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology, will also create independent assessments through the year on the impact of this project.

ABOUT THE ROBERT WOOD JOHNSON FOUNDATION

For more than 40 years the Robert Wood Johnson Foundation has worked to improve the health and health care of all Americans. We are striving to build a national Culture of Health that will enable all Americans to live longer, healthier lives now and for generations to come. For more information, visit www.rwjf.org. Follow the Foundation on Twitter at www.rwjf.org/twitter or on Facebook at www.rwjf.org/facebook.

On medical disinvestment…

Nice and punchy oped on low value care…

https://www.mja.com.au/insight/2014/34/richard-king-what-not-do

Richard King: What not to do

Richard King
Monday, 15 September, 2014

EARLIER this year, articles appeared in the New York Times and the Australian Financial Review on low-value health care and the response from doctors.

These articles reflect to the public the worldwide drive by health care organisations, governments and doctors towards disinvestment in ineffective or inappropriately applied practices in health care. It has been described as a growing priority for health care systems to improve the quality of care and sustainability of resource allocation.

Identification of procedures and practices for disinvestment has increased, particularly in the past 4 years, with the UK National Institute for Health and Care Excellence introducing “do not do” recommendations, followed by the American Board of Internal Medicine’s “Choosing Wisely” campaign.

Choosing Wisely, an initiative that is about to be introduced in Australia, was developed after about 60 medical colleges and societies in the US put together an evidence-based list of five investigations or procedures in each specialty that had little or no value, and that should not be done.

In Australia, a list of 156 practices that had questionable benefit or low value was published in the MJA in 2012.

A second way to identify inappropriate procedures is to find articles in high-impact journals that produce solid evidence showing current procedures should not be done. One team of US researchers identified 146 articles published over a 10-year period to 2010 that reversed established practice.

A third way is to identify the procedures or devices that will be replaced or substituted when a new technology is introduced. Examples of this were identified at the 2013 National Workshop on Disinvestment and outlined in the final report of the Health Policy Advisory Committee on Technology, including endobronchial ultrasound to biopsy and diagnose mediastinal lung tumours, which resulted in significant disinvestment in its pre-existing surgical comparator, mediastinoscopy, saving millions of dollars.

However, implementation of disinvestment in low-value health care is not well developed. We need action at federal, state and hospital levels.

At a federal level, the Medical Services Advisory Committee (MSAC) has the power to review procedures on the Medicare Benefits Schedule (MBS) and recommend their removal if they are not effective. This did happen in 2006 when MSAC recommended the introduction of magnetic resonance cholangiopancreatography and removed the general use of diagnostic endoscopic retrograde cholangiopancreatography. However, there have been no other recommendations since.

The federal Department of Health and Ageing did report at the 2013 Workshop on Disinvestment that it was considering 20 items on the MBS for removal.

At a state level, the Queensland Health Clinical Senate in 2013 devoted a lot of time to disinvestment, which it regarded as a priority in Queensland.

In Victoria, the Department of Health’s Victorian Policy Advisory Committee on Technology is looking at how a coordinated approach in hospitals might be achieved through cooperation across the sector. This is still in early days.

Monash Health has a disinvestment subcommittee as part of its New Technology Committee, which has been active since 2009. It has recommended the cessation of various procedures such as vertebroplasty for osteoporotic vertebral body fractures and stenting of atherosclerotic renal arteries for hypertension, using a method similar to identifying articles in high-impact journals that show current procedures should not be done.

There are many impediments to stopping existing practices. It has been said that to get a technology onto a schedule such as the MBS requires the same level of evidence as for civil trials — the balance of probabilities.

To take something off a schedule requires the same level of evidence as for a criminal conviction — beyond reasonable doubt.

If our health system is to remain sustainable, disinvestment must become part of the health care process.

As Dr Lowell Schnipper, chair of a task force on value in cancer care at the American Society of Clinical Oncology, told the New York Times: “We understand that we doctors should be and are stewards of the larger society as well as of the patient in our examination room.”

Associate Professor Richard King is the medical director of medicine at Monash Health and chair of the Victorian Policy Advisory Committee on Technology.

The “pay less, get more” era of health care

Excellent summary of current US funding situation…

http://www.vox.com/2014/9/10/6121631/the-pay-less-get-more-era-of-health-care

Health care spending has, for decades, followed a consistent pattern. America pays more and more for health care — and gets less and less.

Between 1990 and 2012, the insured rate in the United States fell two percentage points, from 86.6 to 84.6 percent. If the insured rate had just held steady, six million more people would have been covered in 2012.
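The six-million figure is straightforward arithmetic: two percentage points of the roughly 313 million US residents in 2012 comes to about 6.3 million people. A quick check (the 313 million population figure is an outside assumption based on Census estimates, not a number from the article):

```python
population_2012 = 313e6   # approximate US resident population, 2012 (assumption)
insured_1990 = 0.866      # insured rate in 1990 (from the article)
insured_2012 = 0.846      # insured rate in 2012 (from the article)

# People who would have been covered in 2012 had the 1990 rate held steady.
missing_coverage = (insured_1990 - insured_2012) * population_2012
print(round(missing_coverage / 1e6, 1))  # → 6.3 (million people)
```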

While we were covering fewer people, we kept spending more on health care. National health spending, over that time period, rose from 12 percent of the economy in 1990 to 17.2 percent in 2012. Adjusted for inflation, health-care spending rose from $1.1 trillion to $2.8 trillion over those 22 years.

[Chart: national health spending rising as the insured rate falls, 1990–2012]

That’s been the typical story of American health care: a lousy deal where we get less and spend more.

But there’s a growing body of evidence that this trend is changing; that we’re starting to get a shockingly better deal in a way that has giant consequences for how America spends money. Call it the “get more, pay less” era.

The “get more, pay less” era of health care spending

There are two big trends that, taken together, suggest we may be entering a fundamentally different era of health care spending.

The first is lots more people getting coverage. This is mostly Obamacare: the health care law is expected to expand insurance coverage to 26 million people by 2024. In 2014 alone, most estimates suggest about 5 million people have gained health coverage through the law. The recovering economy is likely playing a supporting role, too, with those gaining jobs also gaining access to employer-sponsored coverage.

The second big trend is in what we spend: actuaries expect that health care costs will grow slower over the next decade than they did in the 1990s and 2000s.

More specifically: health care costs grew, on average, 2 percent faster than the economy between 1990 and 2008. Health spending took over an ever-growing share of the economy. Workers barely got raises; skyrocketing premiums ate up most of their additional wages.

The next decade is now expected to be different. Actuaries at the Center for Medicare and Medicaid Services project health care costs to grow 1 percent faster than the rest of the economy between 2013 and 2023.
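To see why that one-point difference matters, consider how each excess growth rate compounds over a decade. (A sketch only: the decade-end shares below are my extrapolation from the 17.2 percent figure cited above, not the actuaries' projections.)

```python
# Compounding effect of health costs growing faster than the economy.
# Starting share of the economy: 17.2% (the 2012 figure cited above).
share = 0.172

old_excess = 0.02   # 1990-2008 pattern: 2 points faster than the economy
new_excess = 0.01   # 2013-2023 projection: 1 point faster

# Health care's share of the economy grows by the *excess* rate each year.
share_old = share * (1 + old_excess) ** 10  # about 21.0% of the economy
share_new = share * (1 + new_excess) ** 10  # about 19.0% of the economy
print(f"After a decade: {share_old:.1%} vs {share_new:.1%}")
```

Under the old pattern, health care would swallow roughly two extra percentage points of the economy in ten years; under the projected pattern, about half that.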

“We are seeing historic moderation in costs now over a considerable period of time,” Kaiser Family Foundation president Drew Altman says. His group recently released data showing slow growth of employer-sponsored coverage. “It’s absolutely true we’re seeing that and any expert will tell you that.”

This is startling: over the next decade, forecasters think our health spending will grow at a slower rate, even as millions and millions of Americans gain access to health insurance. After two decades of spending more and getting less, we’re entering an era of spending less and getting more. It’s bizarro health spending world.

There are signs of this throughout the health care system

One thing that’s so striking about the “get more, pay less” trend is that it isn’t limited to one particular insurance plan or program. It’s starting to crop up in lots of new health care data, suggesting this change has become pervasive in the health care industry.

Start with private health insurance: the Kaiser Family Foundation recently published research finding the average price of Obamacare’s benchmark plan will fall slightly in 2015. As my colleague Ezra Klein wrote recently, this is just about unprecedented. “Falling is not a word that people associate with health-insurance premiums,” he writes. “They tend to rise as regularly as the morning sun.”

Lower premiums make health care dollars stretch further: Obamacare shoppers will be able to buy the coverage they had last year at a slightly lower price. That’s a big deal when you’re talking about paying for a health insurance program meant to cover tens of millions of Americans.

Increasingly narrow health insurance networks are another sign of the “get more, pay less” era. Over the past few years — and especially under Obamacare — insurers have gravitated towards plans that charge lower premiums in return for access to a smaller number of doctors.

[Chart: growth of narrow-network plans]

These plans’ more limited doctor choice can have a big impact on spending. Research from economists Jon Gruber and Robin McKnight found that, in one example, switching enrollees to these plans cut overall spending by one third. And while patients had access to fewer hospitals, the hospitals that were in network were of equally good quality.

Then there’s the Medicare side of the equation, where there has been an unprecedented decline in per person spending. Margot Sanger-Katz at the Upshot has had two fantastic posts on Medicare’s cost slowdown. One of them points out that, since 2010, per patient spending has grown slower than the rest of the economy. You can see that in this graph, which charts “excess cost growth” in Medicare (health wonk speak for cost growth above and beyond growth in the economy). For the past few years, excess growth has been replaced by slower-than-the-economy growth.

[Chart: excess cost growth in Medicare (The New York Times)]

As Sanger-Katz points out, there are two trends at play in Medicare. One is that younger baby boomers keep aging onto the program. They’re younger than Medicare’s really old patients, and typically less expensive to care for. That drives down per person spending for the whole population.

But there’s something else going on that looks to be a more permanent trend: Medicare patients are using less expensive care. They go to the doctor more, and the hospital less. You can see this in new data from the Medicare Trustees’ report, which shows per person spending on Medicare Part A (the program that covers inpatient care) falling over the past few years.

[Chart: Medicare Part A per person spending]

Because of this shift away from hospital care, Medicare Part A now spends less money to cover more people. It paid $266.8 billion covering 50.3 million people in 2012. In 2013, the same program spent $266.2 billion to cover 51.9 million people.

Will “pay less, get more” health care stick?

We have had periods of relatively slow health care growth before. In the mid-1990s, for example, there was a stretch of time when health spending grew at the same rate as the rest of the economy. You can see that in this graph.

[Chart: health spending growth over time]

Most health economists attribute that to the rise of health maintenance organizations, or HMOs, that sharply limited access to specialists. Patients, unsurprisingly, didn’t like those limitations and there was a backlash. HMOs declined and health spending rose again.

But some health economists say that this time feels different. For one, the changes are happening in private insurance and Medicare, suggesting there’s no single — and thus easily reversible — force driving the change.

And while there are more patients in narrow network products, something akin to HMOs, consumers are often choosing to be there. These are shoppers on the Obamacare exchanges who have decided to make a trade-off: they’re taking lower premiums in exchange for less choice of doctor.

“In the 1990s, people were essentially stuck in HMOs,” M.I.T. economist Jon Gruber says. “This time, people are given an option and make a choice. That’s why I’m more confident this slower growth will stick.”

Medicare actuaries are not fortune tellers; they do not have a crystal ball that conjures up the future of health care with perfect clarity. But at least at this particular moment, there are lots of signs cropping up to suggest something very important in health care is changing, and it’s for the better.


How does American health-care spending compare to other countries?

The United States has higher per-person health-care spending than all other industrialized nations. The most recent international data from the OECD estimates that the United States puts 17.7 percent of its economy towards health care (slightly higher than CMS’s estimate of 17.2 percent). The OECD average is 9.3 percent.

[Chart: OECD per-person health spending comparison]

Much of the difference between health care spending abroad and in the United States has to do with prices. Americans don’t actually go to the doctor a lot more than people in other countries. But when we do, our medical care costs more. Specific services, like MRIs and knee replacements, have significantly higher price tags when delivered in the United States than elsewhere.

Bloomberg: Big Data Knows You’ve Got Diabetes Before You Do

 

http://www.bloomberg.com/news/2014-09-11/how-big-data-peers-inside-your-medicine-chest.html

Did You Know You Had Diabetes? It’s All Over the Internet


Dan Abate, a 42-year-old information technology worker, doesn’t have diabetes. Yet his name recently showed up in a database of millions of people with “diabetes interest” sold by Acxiom Corp. (ACXM), one of the world’s biggest data brokers. One buyer, data reseller Exact Data, posted Abate’s name and address online, along with 100 others, under the header Sample Diabetes Mailing List. It’s just one of hundreds of medical databases up for sale to marketers.

In a year when former National Security Agency contractor Edward Snowden’s revelations about the collection of U.S. phone data have sparked privacy fears, data miners have been quietly using their tools to peek into America’s medicine cabinets. Tapping social media, health-related phone apps and medical websites, data aggregators are scooping up bits and pieces of tens of millions of Americans’ medical histories. Even a purchase at the pharmacy can land a shopper on a health list.

“People would be shocked if they knew they were on some of these lists,” said Pam Dixon, president of the non-profit advocacy group World Privacy Forum, who has testified before Congress on the data broker industry. “Yet millions are.”

They’re showing up in directories with names like “Suffering Seniors” or “Aching and Ailing,” according to a Bloomberg review of this little-known corner of the data mining industry. Other lists are categorized by diagnosis, including groupings of 2.3 million cancer patients, 14 million depression sufferers and 600,000 homes where a child or other member of the household has autism or attention deficit disorder.

The lists typically sell for about 15 cents per name and can be broken down into sub-categories, like ethnicity, income level and geography for a few pennies more.

Diaper Coupons

Some consumers may benefit, like those who find out about a new drug or service that could improve their health. And Americans are already used to being sliced and diced along demographic lines. Lawn-care ads for new homeowners and diaper coupons for expecting moms are as predictable as the arrival of the AARP magazine on the doorsteps of the just-turned 50 set. Yet collecting massive quantities of intimate health data is new territory and many privacy experts say it has gone too far.

“It is outrageous and unfair to consumers that companies profiting off the collection and sale of individuals’ health information operate behind a veil of secrecy,” said U.S. Senator Jay Rockefeller, a West Virginia Democrat. “Consumers deserve to know who is profiting.”

Senators’ Attention

Rockefeller and U.S. Senator Edward Markey, a Democrat from Massachusetts, introduced legislation in February that would allow consumers to see what information has been collected on them and make it easier to opt out of being included on such lists. In May, the Federal Trade Commission recommended Congress put more protections around the collection of health and other sensitive information to ensure consumers know how the details they are sharing are going to be used.

The companies selling the data say it’s secure and contains only information from consumers who want it shared with marketers so they can learn more about their condition. The data broker trade group, the Direct Marketing Association, said it has its own set of mandatory guidelines to ensure the data is ethically collected and used. It also has a website to allow consumers to opt out of receiving marketing material.

“We have very strong self regulation, we have for more than 40 years,” said Rachel Nyswander Thomas, vice president for government affairs for the DMA. “Regardless of how the practices are evolving, the self-regulation is as strong as ever.”

Yet the ease with which the data is discoverable in a simple Google search, along with Bloomberg interviews with people who showed up in one such database, suggests the process isn’t always secure or transparent.

Open Access

Dan Abate said he never agreed to be included in any list related to diabetes. Two other people on the same mailing list said they didn’t have diabetes either and weren’t aware of consenting to offer their information.

In Abate’s case, neither he nor anyone in his family or household has diabetes, and the only connection he can think of for landing on the list is a few cycling events he participated in for a group that raises money for the disease.

“I could understand if I was voluntarily putting this medical information out there,” Abate said. “But I don’t have diabetes, and I don’t want my information out there to be sold.”

Bloomberg found the diabetes mailing list on the website of Exact Data in a section for sample lists that included dozens of other categories, like gamblers and pregnant women. The diabetes list contained 100 names, addresses and e-mails. Bloomberg sent e-mails to all of them, and three consented to interviews. There were no restrictions on who could access the list, available on search engines like Google.

Online Surveys

Exact Data’s Chief Executive Officer Larry Organ said the list posted on its website shouldn’t have included last names and street addresses, and the company has since deleted any identifiable information. He said the data came from Acxiom and Exact Data was reselling it.

The Acxiom list was compiled by various sources, including surveys, registrations, or summaries of retail purchases that indicated someone in the household has an interest in diabetes, said Ines Gutzmer, a spokeswoman for the Little Rock, Arkansas-based company. While Gutzmer said consumers can visit the Acxiom website to see some of the information that has been collected on them, she declined to comment about how any one individual was placed on the list.

Acxiom shares rose less than 1 percent, to $18.66 at the close of New York trading. The company has lost 29 percent of its value in the past 12 months.

Sharing Information

One of the more common ways to end up on a health list is by sharing health information on a mail or online survey, according to interviews with data brokers and the review of dozens of health-related lists. In some cases the surveys are tied to discounts or sweepstakes. Others are sent by a company seeking customer feedback after a purchase. The information is then sold to data brokers who repackage and resell it.

Epsilon, which has data on 54 million households based on information gathered from its Shopper’s Voice survey, has lists containing information on 447,000 households in which someone has Alzheimer’s, 146,000 with Parkinson’s disease, and 41,000 with Lou Gehrig’s disease. The Irving, Texas-based company provides survey respondents with coupons and a chance to win $10,000 in exchange for information on their household’s spending habits and health.

The company will share with individual consumers specific information it has gathered, said Jeanette Fitzgerald, Epsilon’s chief privacy officer.

Suffering Seniors

KBM Group, one of the largest collectors of consumer health data based in Richardson, Texas, has health information on at least 82 million consumers categorized by more than 100 medical conditions obtained from surveys conducted by third-party contractors. The company declined to provide an example of the surveys. KBM uses the information for its own marketing clients, and sells it to other data brokers, said Gary Laben, chief executive officer of KBM.

“None of our clients wants to engage with consumers or businesses who don’t want to engage with them,” he said. “Our business is about creating mutual value and if there is none, the process doesn’t work.”

Data repackaging is extensive and pervasive. The Suffering Seniors Mailing List helps marketers push everything from lawn care to financial products. It consists of the names, addresses, and health information of 4.7 million “suffering seniors,” according to promotional material for the list. Beach List Direct Inc. sells the information for 15 cents a name. Marketed as “the perfect list for mailers targeting the ailing elderly,” it contains a breakdown of those with diseases like depression, cancer and Alzheimer’s, according to its seller’s website.

Clay Beach, the contact on Beach List’s website, did not return calls and e-mails over the past month.

‘Confidential’ Clients

Little is known about who buys medical lists since data brokers say their clients are confidential, Rockefeller said at a hearing on the issue in December.

Promotional material for the Suffering Seniors data found by Bloomberg on Beach List’s website initially included a list of users. The names of those users have since been removed.

One customer was magazine publisher Meredith Corp. (MDP), which used the list in a test for a subscription offer for Diabetic Living magazine, said Jenny McCoy, a spokeswoman. Other users have included the American Diabetes Association, which said a small portion of names from the list was given to one of its local chapters, and Remedy Health Media, a publisher of medical websites.

Magazine Advertising

Remedy Health may have used the list to advertise one of its magazines, which has been defunct for several years, said David Lee, the company’s executive vice president of publishing.

A growing source of data fodder is website registration forms that ask for health information in order for a user to access the site or receive an e-mail newsletter.

One such site is Primehealthsolutions.com, which provides basic health information on a variety of conditions. It makes money by collecting data on diseases its users have been diagnosed with and medications they are taking, which people disclose when signing up for the site’s e-mail newsletter.

The site has more than three dozen lists for sale, including a tally of 2.2 million people with depression, 267,000 with Alzheimer’s, 553,000 with impotence, and 2.1 million women going through menopause.

Jason Rines, a co-owner of Prime Health Solutions, said he will share the lists only with those marketing health-related products, like pharmaceutical or medical device makers.

Purchasing Trail

Acxiom said it uses retail purchase history or magazine subscriptions to make assessments about whether someone has a particular disease interest.

Health data collection is troubling to people like Rebecca Price, who has early-stage Alzheimer’s disease. While she now makes no secret of her disease and serves as a member of the Alzheimer’s Association’s early stage advisory group, that wasn’t always the case. Price, a 62-year-old former doctor, said she initially didn’t even tell her husband of her condition for fear word would get out and harm her personally and financially.

“It is a very, very personal diagnosis,” Price said.

Social media is another potential way information can be collected on patients, said Dixon, of the World Privacy Forum, who warns patients to be more careful about what they share on sites like Facebook.

“Don’t ‘like’ the hospital website or comment ‘thank you for the great breast cancer screening you gave me,’” she said. “Under the Facebook policy that is public information and it is in the wild and if someone goes to that site and pulls it off, it is totally public.”

Facebook Policy

While it would be possible for data miners to scrape ‘likes’ and public comments from Facebook Inc. (FB)’s social network, the company said such a practice is against company policy and, if discovered, would be blocked.

“We don’t allow third-party data providers to scrape or collect information without our permission,” said Facebook spokeswoman Elisabeth Diana. “Third-party data providers that work with Facebook don’t collect personally identifiable information and are subject to our policies.”

For consumers who want to know what list they may be on, there are limited options. KBM for example doesn’t have the technological capabilities to look up an individual by name and tell them what lists they are on, though they can purge a name from all their lists if requested to do so, said CEO Laben.

Acxiom started a website last year that allows people to view some of the information it has on them. Those who choose to can correct or remove their data.

Epsilon’s Fitzgerald says the best way for consumers to protect themselves is to be more aware of where they are sharing their information and pay more attention to website privacy policies.

“If people are concerned, don’t put the information out there,” Fitzgerald said. “Consumers would be better served if they were educated more on what is going on on the web.”

(A previous version of the story misstated the name of the Direct Marketing Association and corrected the spelling of Facebook spokeswoman Elisabeth Diana.)

To contact the reporters on this story: Shannon Pettypiece in New York at spettypiece@bloomberg.net; Jordan Robertson in San Francisco at jrobertson40@bloomberg.net

To contact the editors responsible for this story: Rick Schine at eschine@bloomberg.net; Drew Armstrong

Singapore appoints first Chief Data Scientist

This is a brilliant move…

http://www.futuregov.asia/articles/2014/aug/14/singapore-governments-first-chief-data-scientist-p/


Singapore government’s first Chief Data Scientist Prabir Sen on his new role and goals

From traffic updates to tax returns, cities and countries have more data than ever before – but how can they manage it?

FutureGov has exclusively interviewed Prabir Sen, Singapore government’s first Chief Data Scientist. He was appointed by the Infocomm Development Authority of Singapore (IDA) in January, and spoke on why his role was created, what he wants to achieve and the challenges he faces.


Vision to be a global analytics hub

Singapore aspires to be the world’s centre of data science and analytics. This vision required a dedicated team to guide the development of skills in data science and advanced analytics across government and industry. The role of the Chief Data Scientist and his supporting team, called the Data Sciences Group, were created to drive the private and public sectors’ adoption of data analytics, said Sen.

Sen is excited about the potential for expanding Singapore’s work in this area: “I wonder if it is possible to invite the international sports and games industry, such as the Olympics Association, to collaborate with Singapore-based tech companies and talents on sports analytics right here in Singapore? Is it possible to attract aerospace and logistics companies here to do machine-to-machine data analytics? Is it possible to drive the multinational consumer good corporations to work with local small tech companies on advanced consumer insights?”

Using analytics to improve quality of decisions & lives

The government believes that data analytics has huge opportunities to impact government services and improve citizens’ lives in a wide range of areas, such as healthcare, transportation, education, retail and waste management.

A large volume of data is being generated from sensors and mobile devices today. This includes communication between person-to-person, person-to-machine and machine-to-machine, added Sen. He and his team are tasked to evaluate and apply advanced analytics techniques and models that can help organisations get a “360-degree view on people, technology and policies to improve the quality of decisions and improve citizens’ lives and journey of experience at various touch points.”

Cross agency data analytics

The greatest opportunity for using analytics within government is what Sen calls “cross data analysis”, where one agency can use another agency’s data to solve its problem. “For example, the Ministry of Manpower can analyse healthcare data from the Ministry of Health to determine skill gaps and future talent development requirements, or transport agencies can use environmental data to determine the impact of weather on commuters’ behaviour,” he said. “Such cross data analyses also require greater attention to and better governance of data protection, privacy and anonymity,” he added.

Some agencies are currently using this strategy and are achieving great results, he said, and the Singapore government is now encouraging them to explore more cross-agency data use.

Innovation therefore requires agencies to be even more ready to experiment: “Data analytics is fanning the flames of entrepreneurship in the Singapore government, to adopt a philosophy called ‘start up’. Government is obviously not a start up but initiatives to effect change are best thought of as start-ups where we should be more ready to trial and be comfortable with small failures.”

“Compare a project that takes months and costs a lot of money with one that takes two people and a couple of weeks of effort. If the former fails, it will be catastrophic, while the failure of a small trial is still acceptable. We can adopt a risk management methodology where the cost of failing becomes exceedingly tiny,” he said.

Developing analytics talent

One of Sen’s key performance indicators is to strengthen data talent locally. According to a recent IDA release, “McKinsey forecasts that there will be a shortage of 140,000 to 190,000 data sciences and analytics professionals by 2018 in just the US alone”.

Sen shared that the need to increase the local data talent pool is a real challenge. Most organisations are struggling to recruit enough candidates with the right skills. “We are shorthanded in several areas: data scientists who have both computational experience and business acumen, data visualisers who are skilled in both analytics and graphic design, analytics consultants who hold domain knowledge besides their analytics experience, and data engineers who can source and integrate data from disparate systems.”

Retaining this data talent is even more difficult, he continued. “Most of these data professionals are creative people. They require space and freedom and a stimulating environment to explore new approaches and insights that challenge them. So we need to facilitate and grow this local community, to drive engagement with them, pulling together users, data analytics companies, cloud providers to form an ecosystem to exchange ideas.”

To this end, IDA has launched a Massive Open Online Course (MOOC) on data science and analytics this month, offering locals the chance to develop the vital skills to respond to the growing demand for data professionals. The class has attracted more than 350 registrations from both the private and public sector.

Chief Data Officer vs Chief Data Scientist

Sen also clarified how his role is different from a Chief Data Officer. “A Chief Data Officer typically has responsibility to govern and protect data, and find ways to use data across the agencies. My role, on the other hand, is to find ways to build transformative products using data sciences, analytics and insights; drive rapid development and adoption of analytical techniques, and develop the local data and tech talent.”

The skills and experience that make a good Chief Data Scientist, Sen added, are not limited to quantitative and computational proficiency. The candidate must be good at understanding human behaviour, how people go about solving their problems and making decisions, and be able to think laterally to engage in cross-cutting strategic dialogues. Most importantly, he must be able to learn, unlearn and relearn.

Learning will be vital as Singapore seeks to become a global hub for analytics. Agencies are being challenged to work together on trialling new approaches, while the government is seeking to build greater scientific communities and talent in the city state. Ultimately, though, these efforts could lead to greater personalisation of citizen services – a new way in which the government engages and does business with its customers.

What Uber for healthcare might look like

Interesting take on imagining the future of healthcare.

http://www.kevinmd.com/blog/2014/08/uber-health-care-will-look-like.html

What the Uber of health care will look like

 

In New York City, the right to operate a yellow cab comes with a medallion. Medallion owners tend to fall into two categories: private practitioners and fleet owners. Private practitioners own their own car, have responsibility for maintenance, gas and insurance, and tend to use the cash flow to live on while allowing the medallion to appreciate over the course of their career. They then cash out as part of their retirement plan.

Fleet owners have dozens of medallions; they lease or buy fleets of automobiles and often have their own mechanics, car washes and gas pumps. They either hire drivers as employees or, more often, rent their cars to licensed taxi drivers who get to keep the balance of their earnings after their car and gas payments.

In London, taxi drivers have to invest two to four years of apprenticeship before they can take and pass a test called “The Knowledge.” However, as in NYC, a licence to operate a Black Cab in London is a hard-won but stable way to earn a living.

Now imagine that someone comes along that can offer all the services of the NYC yellow cab or the London Black Cab directly to the general public, but does not have to own the medallion, own the car or employ the driver. With as much as 70% lower overhead, they provide the same service to the consumer; in fact they are so consumer friendly that they become the virtual gatekeeper for all the taxi and car service business in the community.

How, you ask? By outsourcing the overhead and using just-in-time inventory management: they convince thousands of people to drive around in their own cars with the promise of potential payment for driving someone from point A to point B. All these drivers have to do is meet certain standards of quality and safety. This new company does all the marketing and uses technology to connect the currently active drivers with those in need of a ride; it provides simple and transparent access to a host of cars circulating in your neighborhood, lets you know the price, sends a picture and customer rating of the driver before he or she arrives, and processes the payment so no money ever changes hands.

This is the premise behind Uber, a very disruptive take on the taxi business. As a recent article in Bloomberg noted, the slower rate of growth in medallion value is already attributed to the very young company; a recent protest by Black Cab drivers in London resulted in an eight-fold increase in Uber registrations.

Now imagine that a new health care services company comes to your community offering population health management services on a bundled payment or risk basis. They guarantee otherwise inaccessible metrics of quality and safety to both large employers and individual consumers. They employ only a handful of doctors, but do not own any hospitals, imaging centers or ambulatory care facilities.

However, they are masters at consumer engagement, creating levels of affinity and loyalty usually found with consumer products and soft drinks. They take a “don’t make me think” approach to their technology, seamlessly integrating analytics and communications platforms into their customers’ lives, and offer consumers without a digital footprint a host of options for communication, including access to information and services via their land lines or their cable TV box. They leverage high-level marketing analytics to determine who will be responsive to non-personal tools for engagement, like digital coaching, and who requires a human touch.

Care planning is done based on clinical stratification and evidence; population-specific data is used to determine the actual resources required to achieve clinical, quality and financial goals. (A Midwest ACO has more problems with underweight than obesity: does it need to maintain its bariatric surgery center?) Physicians serve as “clinical intelligence officers,” creating standing orders across the entire population, implemented by non-clinical personnel; they also create criteria for escalation and de-escalation of services and resource allocation based on individual patients’ progress towards goals. They employ former actors and actresses as health coaches and navigators, invest heavily in home care and nurse care managers, and use dieticians in local supermarkets to support lifestyle changes (while accessing and analyzing patients’ point-of-purchase data to see what they are really buying).

The primary relationship between patients and their health system is with a low-cost personal health concierge: primary care physicians are accessed only on predetermined eligibility criteria, and only those physicians who agree to standards of quality and accountability are in the network. Multi-tiered scenario planning for emergencies is built into the system. For professional resources required only on an as-needed basis, such as hospital beds, surgeons and medical specialists, access is negotiated in advance based on a formula of quality standards and best pricing, but used only on a just-in-time basis.

They are not a payer, although professional relationships with them are on a business-to-business basis. They are a completely new type of health system: guaranteeing health and well-being, transparent in their operations, and choosing their vendors based on willingness and ability to achieve those goals. In doing so, they significantly reduce the resources necessary to achieve goals for quality of care and quality of health across the entire population; they treat quality achievement as an operational challenge and manage their supply chain accordingly.

Am I suggesting this as a new model of care? No; I am personally an advocate for physician-driven systems of care. But this kind of system is very possible, and there are companies working on models of national ACOs using many of these principles.

The Uber of health care will have much less to do with a mobile app and far more to do with creating value by minimizing overhead, designing flexible operations, supporting goal-directed innovation and bringing supply-chain discipline to the idea of resource-managed care delivery. It will involve embracing models of care delivery that leverage emerging evidence on non-clinical approaches to health status and quality improvement, and focusing on designing goal-directed interactions between people, platforms, programs and partners.

I can hear more than a few of you coming up with very good reasons why it won’t work (“You can’t put an ICU bed out to bid!”), but these scenarios are very doable. If we want to revitalize the experience of care for patients and professionals, we must be willing to acknowledge and embrace dramatically different, often counter-intuitive new operating models for care that will require new competencies, new forms of collaboration, and reengineering of the roles and responsibilities of those who comprise a patient’s health resource community.

Steven Merahn is director, Center for Population Health Management, Clinovations. He blogs at MedCanto.

Outsource physician behaviour change to the experts: Big Pharma

So pay for performance doesn’t work. This is hardly surprising when you see the compromise and mediocrity forced upon policy makers to get ideas through. There have been instances of success in health care. Indeed, one could argue that the exemplary success of big pharma in changing physician behaviour has provided a rod for its own back. Why not harness this expertise in getting under the skin of doctors, and pay big pharma sales outfits to guide physician practice in constructive directions, rather than being distracted by flogging pills that don’t really work that well anyway, and potentially harm? Might have a chat with Christian.

http://www.nytimes.com/2014/07/29/upshot/the-problem-with-pay-for-performance-in-medicine.html


“Pay for performance” is one of those slogans that seem to upset no one. To most people it’s a no-brainer that we should pay for quality and not quantity. We all know that paying doctors based on the amount of care they provide, as we do with a traditional fee-for-service setup, creates incentives for them to give more care. It leads to increased health care spending. Changing the payment structure to pay them for achieving goals instead should reduce wasteful spending.

So it’s no surprise that pay for performance has been an important part of recent reform efforts. But in reality we’re seeing disappointingly mixed results. Sometimes it’s because providers don’t change the way they practice medicine; sometimes it’s because even when they do, outcomes don’t really improve.

The idea behind pay for performance is simple. We will give providers more money for achieving a goal. The goal can be defined in various ways, but at its heart, we want to see the system hit some target. This could be a certain number of patients receiving preventive care, a certain percentage of people whose chronic disease is being properly managed or even a certain number of people avoiding a bad outcome. Providers who reach these targets earn more money.
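The payout mechanics described above reduce to a very simple calculation. This toy sketch is purely illustrative: the target names, thresholds, measured rates and dollar amount are all invented, not drawn from any actual program.

```python
# Toy sketch of a pay-for-performance payout: a provider earns a fixed
# bonus for each quality target whose measured rate meets its threshold.
# Targets, measured rates and the bonus amount are invented for illustration.

targets = {
    "preventive_screening_rate": 0.80,      # threshold required for bonus
    "chronic_disease_control_rate": 0.70,
    "readmission_avoidance_rate": 0.90,
}
measured = {
    "preventive_screening_rate": 0.83,
    "chronic_disease_control_rate": 0.65,   # misses its threshold
    "readmission_avoidance_rate": 0.92,
}
BONUS_PER_TARGET = 200  # hypothetical dollars per target met

payout = sum(
    BONUS_PER_TARGET
    for name, threshold in targets.items()
    if measured[name] >= threshold
)
print(payout)  # 400: two of the three targets were met
```

The simplicity is the point: the incentive only bites if hitting the threshold is actually within the provider's control, which is precisely what the rest of the article calls into question.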

The problem, one I’ve noted before, is that changing physician behavior is hard. Sure, it’s possible to find a study in the medical literature that shows that pay for performance worked in some small way here or there. For instance, a study published last fall found that paying doctors $200 more per patient for hitting certain performance criteria resulted in improvements in care. It found that the rate of recommendations for aspirin or for prescriptions for medications to prevent clotting for people who needed it increased 6 percent in clinics without pay for performance but 12 percent in clinics with it.

Good blood pressure control increased 4.3 percent in clinics without pay for performance but 9.7 percent in clinics with it. But even in the pay-for-performance clinics, 35 percent of patients still didn’t have the appropriate anti-clotting advice or prescriptions, and 38 percent of patients didn’t have proper hypertensive care. And that’s success!

It’s also worth noting that the study was only for one year, and many improvements in actual outcomes would need to be sustained for much longer to matter. It’s not clear whether that will happen. A study published in Health Affairs examined the effects of a government partnership with Premier Inc., a national hospital system, and found that while the improvements seen in 260 hospitals in a pay-for-performance project outpaced those of 780 not in the project, five years later all those differences were gone.

The studies showing failure are also compelling. A study in The New England Journal of Medicine looked at 30-day mortality in the hospitals in the Premier pay-for-performance program compared with 3,363 hospitals that weren’t part of a pay-for-performance intervention. We’re talking about a study of millions of patients taking place over a six-year period in 12 states. Researchers found that 30-day mortality, or the rate at which people died within a month after receiving certain procedures or care, was similar at the start of the study between the two groups, and that the decline in mortality over the next six years was also similar.

Moreover, they found that even among the conditions that were explicitly linked to incentives, like heart attacks and coronary artery bypass grafts, pay for performance resulted in no improvements compared with conditions without financial incentives.

In Britain, a program was begun over a decade ago that would pay general practitioners up to 25 percent of their income in bonuses if they met certain benchmarks in the management of chronic diseases. The program made no difference at all in physician practice or patient outcomes, and this was with a much larger financial incentive than most programs in the United States offer.

Even refusing to pay for bad outcomes doesn’t appear to work as well as you might think. A 2012 study published in The New England Journal of Medicine looked at how the 2008 Medicare policy to refuse to pay for certain hospital-acquired conditions affected the rates of such infections. Those who devised the policy imagined that it would lead hospitals to improve their care of patients to prevent these infections. That didn’t happen. The policy had almost no measurable effect.

There have even been two systematic reviews in this area. The first of them suggested that there is some evidence that pay for performance could change physicians’ behavior. It acknowledged, though, that the studies were limited in how they could be generalized and might not be able to be replicated. It also noted there was no evidence that pay for performance improved patient outcomes, which is what we really care about. The second review found that with respect to primary care physicians, there was no evidence that pay for performance could even change physician behavior, let alone patient outcomes.

One of the reasons that paying for quality is hard is that we don’t even really know how to define “quality.” What is it, really? Far too often we approach quality like a drunkard’s search, looking where it’s easy rather than where it’s necessary. But it’s very hard to measure the things we really care about, like quality of life and improvements in functioning.

In fact, the way we keep setting up pay for performance demands easy-to-obtain metrics. Otherwise, the cost of data gathering could overwhelm any incentives. Unfortunately, as a recent New York Times article described, this has drawbacks.

The National Quality Forum, described in the article as an influential nonprofit, nonpartisan organization that endorses health care standards, reported that the metrics chosen by Medicare for their programs included measurements that were outside the control of a provider. In other words, factors like income, housing and education can affect the metrics more than what doctors and hospitals do.

This means that hospitals in resource-starved settings, caring for the poor, might be penalized because what we measure is out of their hands. A panel commissioned by the Obama administration recommended that the Department of Health and Human Services change the program to acknowledge the flaw. To date, it hasn’t agreed to do so.

Some fear that pay for performance could even backfire. Studies in other fields show that offering extrinsic rewards (like financial incentives) can undermine intrinsic motivations (like a desire to help people). Many physicians choose to do what they do because of the latter. It would be a tragedy if pay for performance wound up doing more harm than good.

AMA rejects price transparency

AMA at its best (worst).

It doesn’t want price transparency because it’s too hard to predict how much things should cost when charging by the hour instead of by the procedure.

I’d want to know more about my surgeon than my dishwasher when making a purchasing decision.

So let’s put up all those metrics and allow people to compare what matters.

http://www.smh.com.au/federal-politics/political-news/ama-rejects-call-for-more-fee-disclosure-20140729-3cs2y.html#ixzz396F0w0H6

AMA rejects call for more fee disclosure

July 30, 2014 Dan Harrison and Daisy Dumas

AMA president Brian Owler at the National Press Club in Canberra on Wednesday. Photo: Alex Ellinghausen

The Australian Medical Association has rejected calls for greater transparency on surgical fees, saying it was not possible for patients to compare prices for operations in the same way they might shop around for a dishwasher.

Appearing at a Senate hearing on Tuesday, AMA president Brian Owler, who is a neurosurgeon, said his organisation did not support the charging of excessive fees, but said the appropriate fee for a procedure depended on the patient’s condition.

“It is not possible to put up on a website all of our fees … and be able to go, like you’re buying a dishwasher, and be able to work out which doctor you’re going to on the basis of the fee that they charge,” Associate Professor Owler said.

“The experience and qualifications of many of the doctors will vary, their practices will vary, and really you need to see a patient to understand what their problem is, then formulate with that patient the best plan of management.”

Professor Owler’s comments follow a statement from the Australasian College of Surgeons in which it expressed concern about some surgeons, including some of its own members, charging “extortionate” fees.

The president of the college, Michael Grigg, said it was working with the Australian Competition and Consumer Commission on ways to allow greater disclosure of surgeons’ fees without breaching competition rules.

Prominent neurosurgeon Charlie Teo said there were some surgeons who abused the ignorance of their patients.

But he said the recommended fee for some procedures, such as the approximately $2500 fee for the removal of a brain tumour, undervalued the work involved in complex cases.

“I can be operating on the world’s most difficult brain tumour and it takes eight to 10 hours and I still get paid $2500, versus a plastic surgeon who charges $8000 to $12,000 for a breast augmentation,” Dr Teo said.

Dr Teo recommended Australia adopt the American Medical Association’s “22 Modifier” policy, which requires surgeons to supply evidence that the service provided was substantially greater than the work typically required for a certain procedure if they charge higher fees.

Greens senator Richard Di Natale, who initiated the Senate inquiry into out-of-pocket health costs, said patients needed greater transparency on costs.

“At the moment, the problem is that people become aware of the out-of-pocket costs when it’s too late, when they’re well advanced down the treatment pathway, and often there’s no way of turning back.”

The president of the Australian Society of Plastic Surgeons, Tony Kane, said its members were required to make a full written disclosure to patients of what the cost of their treatment would be, including the possibility of further costs, should revision surgery be necessary.

Dr Kane said members were required to make this disclosure at a sufficiently early stage to enable patients to take cost considerations into account when deciding whether to undergo the treatment.