Category Archives: complex adaptive systems

Dr Atul Gawande – 2014 Reith Lectures

Lecture 1: Why Do Doctors Fail?

Lecture 2: The Century of the System

Lecture 3: The Problem of Hubris

Lecture 4: The Idea of Wellbeing

http://www.bbc.co.uk/programmes/articles/6F2X8TpsxrJpnsq82hggHW/dr-atul-gawande-2014-reith-lectures


Atul Gawande, MD, MPH, is a practicing surgeon at Brigham and Women’s Hospital and a professor at both the Harvard School of Public Health and Harvard Medical School.

In his lecture series, The Future of Medicine, Dr Atul Gawande will examine the nature of progress and failure in medicine, a field defined by what he calls ‘the messy intersection of science and human fallibility’.

Known for both his clear analysis and vivid storytelling, he will explore the growing importance of systems in medicine and argue that the future role of the medical profession in our lives should be bigger than simply assuring health and survival.

The 2014 Reith Lectures

The first lecture, Why do Doctors Fail?, will explore the nature of imperfection in medicine. In particular, Gawande will examine how much of failure in medicine remains due to ignorance (lack of knowledge) and how much is due to ineptitude (failure to use existing knowledge) and what that means for where medical progress will come from in the future.

In the second lecture, The Century of the System, Gawande will focus on the impact that the development of systems has had – and should have in the future – on medicine and overcoming failures of ineptitude. He will dissect systems of all kinds, from simple checklists to complex mechanisms of many parts. And he will argue for how they can be better designed to transform care from the richest parts of the world to the poorest.

The third lecture, The Problem of Hubris, will examine the great unfixable problems in life and healthcare – aging and death. Gawande will argue that the reluctance of society and medical institutions to recognise the limits of what professionals can do is producing widespread suffering. But research is revealing how this can change.

The fourth and final lecture, The Idea of Wellbeing, will argue that medicine must shift from a focus on health and survival to a focus on wellbeing – on protecting, insofar as possible, people’s abilities to pursue their highest priorities in life. And, as he will suggest from the story of his father’s life and death from cancer, those priorities are nearly always more complex than simply to live longer.

Five things to know about Dr Atul Gawande

Find out about Atul Gawande ahead of his 2014 Reith Lectures…

1. In 2010, Time Magazine named him one of the world’s most influential thinkers.

2. His 2009 New Yorker article, The Cost Conundrum, made waves when it compared the health care of two towns in Texas and suggested that more expensive care is often worse care. Barack Obama cited the article during his attempt to get Obamacare passed by the US Congress.

3. Atul Gawande’s 2012 TED talk, How Do We Heal Medicine?, has been watched over 1 million times.

4. Atul Gawande has written three bestselling books: Complications, Better and The Checklist Manifesto. Complications was based on his training as a surgeon; Better focuses on the drive for better medicine and health care systems; and The Checklist Manifesto is about the importance of having a process for whatever you are doing.

5. In 2013, Atul launched Ariadne Labs – a new health care innovation lab aiming ‘to provide scalable solutions that produce better care at the most critical moments in people’s lives everywhere’.

Blumenthal on Health Reform: Foolish, Courageous, or Both

http://www.commonwealthfund.org/publications/blog/2014/dec/health-reform-foolish-courageous

Health Reform: Foolish, Courageous, or Both

Thursday, December 4, 2014

Some supporters of the Affordable Care Act (ACA) are worried they’re paying a political price for health care reform. The political fallout should come as no surprise.

The history of comprehensive health reform shows unequivocally that it’s a short-term political disaster. That’s why so many political leaders have either avoided the issue, or regretted engaging it. Franklin D. Roosevelt, arguably one of our most politically adept presidents, turned his back on national health insurance in 1934 when advisors argued for including it in the Social Security program. He continued to dodge it for most of his long presidency. Both Jimmy Carter and Bill Clinton paid heavy political prices for their proposed national health care programs.

Health reform’s political toxicity is all about math and voting.  Even prior to the ACA, more than 80 percent of Americans under 65 had health insurance, and most were satisfied with their coverage and regular care. These are people—better educated, employed, with middle to higher incomes—who vote, especially in mid-terms. The elderly, of course, have Medicare and they too are generally satisfied with their insurance and care. The 20 percent who didn’t have insurance before the law was passed were—and are—much less likely to show up at the polls. They tend to be younger, less-educated, and less well-off.

Then there’s the nature of health care as an issue: highly personal, highly consequential, and incredibly complex and confusing. Health care is about people’s deepest hopes and fears, for themselves and for their loved ones. And the health care system has become a multi-layered maze of huge insurance chains, enormous and acquisitive provider organizations, government regulation, and constantly changing therapeutics.

This makes it easy for opponents of health reform to stir opposition by arguing—fairly or not—that any new program will make things worse for people who are satisfied with their insurance and their care. This is precisely why President Obama felt the need to promise, inaccurately as it turned out, that every American who liked their insurance plan would be able to keep it under the ACA.

And supporters of reform have difficulty explaining any new program and motivating its beneficiaries to take advantage of it. Witness the large numbers of uninsured Americans who remain unaware of the availability of subsidized insurance through the ACA marketplaces.

So, to put it crudely, why would any sane politician push a program likely to scare and confuse large numbers of people who vote, in order to help small numbers who don’t?

There are two possible responses. One is that it’s the right thing to do, since a lack of insurance is essentially a death sentence for millions of Americans. Doing the right thing, however, can be politically costly: when Lyndon Johnson pushed through the Civil Rights Act in 1964, he gave away the southern United States to the other party for a generation.

A second argument for braving health reform is practical: it simply has to be done to make our health system viable. The private health insurance industry in the United States, and our health system as a whole, have been in a downward spiral that threatens the interests of all Americans, including the now contentedly insured. Prior to the ACA’s enactment, more and more people were losing insurance, or being forced—because of huge premium increases—to purchase coverage that offers less and less protection.

For some years now, insured Americans have been the proverbial frog in the cooking pot, barely noticing as the water slowly approaches the boiling point. A health care system in which, year after year, the cost of insurance rises faster than workers’ wages is not sustainable for anyone.

Relatively little attention has been paid to ACA reforms that attempt to make the system sustainable by tackling fundamental problems with the health care delivery system and with the structure of the private insurance markets. The reason may be that insurance markets and delivery systems—their problems and solutions—are complex and much less interesting than the political battles surrounding covering uninsured Americans, and whether currently insured Americans may face cancellation of their plans. While the major long-term political gains to supporters of health reform may lie in these delivery system and insurance reforms, President Obama and many current congressmen and senators will likely be long gone when and if those gains materialize.

So ACA supporters have every right to be concerned about the politics of health reform. Each will have to decide for themselves whether health reform was foolish, courageous, or both.

In the meantime, millions of Americans now have health insurance who didn’t before, and the cost of health care is increasing at the lowest rate in 50 years.

Population Health: A riddle wrapped in an enigma

PN: The health sector is very happy to take full responsibility for the health of the population for as long as substantial monies are tied to that claim. The moment the health sector is asked to account for it, they get nervous.

Tying funding to value is a terrifying prospect for the health sector as having to account for the benefit they deliver would inevitably lead to a diminution in income and status.

“Because so many factors lie outside clinicians’ control, we need to understand what factors the healthcare system can reasonably be expected to act on, given professionals’ training, infrastructure and scope of practice,” they said. “We also need to determine the appropriate levels of health system accountability for population health outcomes.”

http://www.modernhealthcare.com/article/20150108/BLOG/301089997/population-health-improvement-still-a-riddle-wrapped-in-an-enigma

Population health improvement still a riddle wrapped in an enigma

The push to invest more of the healthcare industry’s time and money into promoting good health is, so far, uneven and uncertain in terms of effectiveness. Perhaps nowhere is that more apparent than in federal initiatives to broadly improve health by extending care beyond clinics and pharmacies into neighborhoods and homes.

Federal funding for population-health efforts—the management of health and medical care for an entire group of patients or a community—has expanded under the Affordable Care Act. It has included financing for states and providers to experiment with ways to better coordinate healthcare and other needs that affect health, such as housing and transportation. But the initiatives are not without risk or challenges, a point three federal officials underscored in the latest issue of the New England Journal of Medicine.

Efforts are still underway to identify what works and how to make widespread use of the most effective strategies, write Dr. William Kassler, Naomi Tomoyasu and Dr. Patrick Conway of the agency that oversees Medicare and Medicaid. The CMS Innovation Center, in a report to Congress last month, also said results were largely not yet available for nearly two dozen initiatives to bolster population health, improve quality and increase efficiency in healthcare, financed with $2.6 billion through last year.

Calculating a dividend from those investments presents another challenge, the trio wrote. Kassler is one of the CMS’ chief medical officers; Tomoyasu is deputy director of the prevention and population health care models group within the CMS Innovation Center; and Conway is the CMS’ deputy administrator for innovation and quality.

The return on any investment in prevention will necessarily take time, raising the risk that “current actuarial methods used to evaluate return on investment may underestimate potential savings,” they warned.

Investment at the federal level is not small. Medicare and Medicaid—which combined account for $1 of every $3 the nation spends on healthcare—have increasingly poured money into strategies for disease prevention and health promotion.

Those strategies extend the reach of healthcare beyond hospitals, clinics and pharmacies into neighborhoods, homes and schools. Such extended investment can include help with housing, transportation, literacy, day care and groceries, the officials wrote.

But with that expanded reach comes a debate “regarding the specific population-based activities that fall within healthcare providers’ scope of practice,” wrote the CMS officials. “Because so many factors lie outside clinicians’ control, we need to understand what factors the healthcare system can reasonably be expected to act on, given professionals’ training, infrastructure and scope of practice,” they said. “We also need to determine the appropriate levels of health system accountability for population health outcomes.”

Follow Melanie Evans on Twitter: @MHmevans

Yach: Changing the Landscape for Prevention and Health Promotion

 

http://www.huffingtonpost.com/dr-derek-yach/changing-the-landscape-fo_1_b_6439328.html

Changing the Landscape for Prevention and Health Promotion


By Bridget B. Kelly and Derek Yach*

Chronic diseases like heart disease, diabetes, and cancer are major contributors to poor health and rising health care costs in the U.S. The cost of treating these conditions is estimated to account for 80 percent of annual health care expenditures. More and more, experts agree on the great potential for preventing or delaying many cases of costly chronic diseases by focusing on environmental, social, and behavioral root influences on health. Yet the U.S. has been slow to complement its considerable spending on biomedical treatments with investments in population-based and non-clinical prevention interventions.

What is getting in the way of strengthening our investments in prevention and health promotion? A few consistent themes emerged across multiple expert consensus studies conducted by the Institute of Medicine (IOM), which were summarized in the report Improving Support for Health Promotion and Chronic Disease Prevention — developed in support of the recent Vitality Institute Commission on Health Promotion and Prevention of Chronic Disease in Working-Age Americans.

First, prevention is challenging — chronic health problems are complex, and so are the solutions. Second, decision-makers who allocate resources have tough choices to make among many competing pressures and priorities; prevention and promotion can be at a disadvantage because their benefits are delayed. Third, there is a need for better, more usable evidence related to the effectiveness, the implementation at scale, and the economics of prevention interventions. Decision-makers need information that makes it easier to understand, identify, and successfully implement prevention strategies and policies. As noted in a recent opinion piece in the Journal of the American Medical Association (JAMA), limited investment in prevention research has resulted in an inaccurate perception that investing in preventive measures is of limited value. This has profound implications for federal funding allocations.

The mismatch in funding allocations is seen right at the source of our nation’s major investment in new health-related knowledge: the National Institutes of Health (NIH). A new paper in the American Journal of Preventive Medicine found that less than 10 percent of the NIH annual budget for chronic diseases is allocated to improving our knowledge base for effective behavioral interventions to prevent chronic diseases. This means that despite the immense potential for prevention science to reduce the burden of chronic diseases in the U.S., it is woefully underfunded compared to what we invest in researching biomedical treatment interventions for these conditions. NIH investments affect what evidence is ultimately available to those who decide how to allocate resources to improve the health of our nation, and they also affect the kinds of health experts we train as a country. By not investing in prevention science and in a future generation of scientists capable of doing high quality research in prevention, we are perpetually caught in the same vicious cycle where prevention continues to lag behind in our knowledge and therefore our actions.

There is hope that the landscape is slowly changing. Initiatives such as the NIH Office of Disease Prevention‘s Strategic Plan for 2014-2018 and the Affordable Care Act’s mandated Patient-Centered Outcomes Research Institute (PCORI) have the potential to strengthen prevention science and build the evidence-base for effective prevention interventions. Innovations in personalized health technologies and advances in behavioral economics also show great promise in improving health behaviors for chronic disease prevention.

The Vitality Institute Commission’s report emphasized the need for faster and more powerful research and development cycles for prevention interventions through increased federal funding for prevention science as well as the fostering of stronger public-private partnerships. It is essential to generate and communicate evidence in a way that enables decision-makers to understand the value of investing in prevention while taking into account their priorities, interests and constituencies. This will lead us to more balanced investments, make prevention a national priority, and boost the health of the nation.

*The authors are responsible for the content of this article, which does not necessarily represent the views of the Institute of Medicine.

Torch: Facebook Offers Artificial Intelligence Tech to Open Source Group

 

Facebook Offers Artificial Intelligence Tech to Open Source Group

Mark Zuckerberg, chief executive of Facebook. By releasing tools for computers to researchers, Facebook will also be able to accelerate its own Artificial Intelligence projects.

Facebook wants the world to see a lot more patterns and predictions.

The company said Friday that it was donating several powerful computing tools for public use, including software that sifts through huge amounts of data looking for common elements of information. The products, used in a so-called neural network of machines, can speed pattern recognition by up to 23.5 times, Facebook said.

The tools will be donated to Torch, an open source software project that is focused on a kind of data analysis known as deep learning. Deep learning is a type of machine learning that mimics how scientists think the brain works, over time making associations that separate meaningless information from meaningful signals.
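What “making associations that separate meaningless information from meaningful signals” means can be made concrete with a toy example. The sketch below is not Torch code (Torch itself is a Lua-based framework); it is a minimal two-layer neural network in plain NumPy, trained by gradient descent to learn XOR — a classic pattern that no single input carries on its own.

```python
import numpy as np

# A minimal two-layer neural network learning XOR: the meaningful
# "signal" is a nonlinear combination of the inputs, which is exactly
# the kind of association deep-learning frameworks like Torch learn
# at far larger scale.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (cross-entropy loss: output gradient is out - y)
    d_out = out - y
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh derivative
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)
    for p, g in ((W1, d_W1), (b1, d_b1), (W2, d_W2), (b2, d_b2)):
        p -= lr * g / len(X)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

The network starts with random weights and, association by association, learns that the output should be 1 exactly when the inputs differ. Torch’s contribution is doing this kind of training efficiently on networks millions of times larger.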

Companies like Facebook, Google, Microsoft and Twitter use Torch to figure out things like the probable contents of an image, or what ad to put in front of you next.

“It’s very useful for neural nets and artificial intelligence in general,” said Soumith Chintala, a research engineer at Facebook AI Research, Facebook’s lab for advanced computing. He is also one of the creators of the Torch project. Aside from big companies, he said, Torch can be useful for “start-ups, university labs.”

Certainly, Facebook’s move shows a bit of enlightened self-interest. By releasing the tools to a large community of researchers and developers, Facebook will also be able to accelerate its own AI projects. Mark Zuckerberg has previously cited such open source tactics as his reason for starting the Open Compute Project, an open source effort to catch up with Google, Amazon and Yahoo on building big data centers.

Torch is also useful in computer vision, or the recognition of objects in the physical world, as well as question-answering systems. Mr. Chintala said his group had fed a machine a simplified version of “The Lord of the Rings” novels, and the computer could understand and answer basic questions about the book.

“It’s very early, but it shows incredible promise,” he said. Facebook can already look at some sentences, he said, and figure out what kind of hashtag should be associated with the words, which could be useful in better understanding people’s intentions. Such techniques could also be used in determining the intention behind an Internet search, something Google does not do on its regular search.

Besides the tools for training neural nets faster, Facebook’s donations include a new means of training multiple computer processors at the same time, a means of cataloging words when analyzing language and tools for better speech recognition software.

The Economist: The end of the population pyramid

http://www.economist.com/blogs/graphicdetail/2014/11/daily-chart-10?fsrc=scn/fb/wl/dc/vi/endofpopulationpyramid

Daily chart: The end of the population pyramid | The Economist


The end of the population pyramid

The shape of the world’s demography is changing

The pyramid is a traditional way of visualising and explaining the age structure of a society. If you draw a chart with each age group represented by a bar, and each bar ranged one above the other—youngest at the bottom, oldest at the top, and with the sexes separated—that is the shape you get. The pyramid has been characteristic of human populations since organised societies emerged. With lifespans short and mortality rates high, children were always the most numerous group, and old people the least.

Now the shape of the global population is changing. Between 1970 and 2015 the dominating influence on the global population was the fertility rate, the number of children a woman would typically bear during her lifetime. It fell dramatically over the period, meaning that the world shifted from having larger to smaller families. The age groups start to become markedly smaller only at about the age of 40, so the incline starts much further up the chart than with the pyramid. The shape looks more like the dome of the Capitol building in Washington, DC.

Between 2015 and 2060 the biggest influence upon the population will be ageing. Small families are already becoming the norm, the fall in fertility is slowing down, and now almost everyone is living longer than their parents—dramatically so in developing countries. So, by 2060, the dome will have come and gone and the shape of the population will look more like a column (or perhaps an old-fashioned beehive).
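The chart construction described above—one bar per age group, youngest at the bottom, sexes on opposite sides—can be sketched in a few lines. The age-group counts below are hypothetical, chosen only to produce a recognisable pyramid shape, not taken from the UN data behind the article.

```python
# Render the age-structure chart described above as text: one bar per
# age group, males on the left, females on the right, and (as in a
# population pyramid) the youngest group at the bottom of the printout.
# All counts are hypothetical, for illustration only.

age_groups = [
    ("0-14",  480, 460),   # (label, males, females) in millions
    ("15-29", 420, 405),
    ("30-44", 350, 345),
    ("45-59", 260, 265),
    ("60-74", 150, 170),
    ("75+",    60,  90),
]

def pyramid(groups, scale=10):
    """Return text rows, oldest group first (top of the pyramid)."""
    rows = []
    for label, males, females in reversed(groups):
        left = ("#" * (males // scale)).rjust(50)
        right = "#" * (females // scale)
        rows.append(f"{left} |{label:>6}| {right}")
    return rows

for row in pyramid(age_groups):
    print(row)
```

Feeding in counts where the bars shrink only above age 40 produces the “Capitol dome” the article describes; near-equal bars at every age produce the column.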

Read the full article from The World In 2015.

How to Make Health Care Accountable When We Don’t Know What Works

 

https://hbr.org/2014/11/how-to-make-health-care-accountable-when-we-dont-know-what-works

How to Make Health Care Accountable When We Don’t Know What Works

NOVEMBER 25, 2014

Accountable care organizations (ACOs) are widely regarded as part of the solution to a fragmented health care system — one plagued by duplicative services, avoidable errors, and other impediments to efficiency and quality. But 20 years of reform efforts have led to a wave of provider consolidation that has made little headway in efficiently coordinating care. Providers continue to follow a strategy that has shown minimal evidence of success.

We should admit that we don’t know what works and, instead, test a variety of potential solutions that could address fragmentation. Before I explore the concrete steps we can take to encourage that kind of innovation, let me provide some important historical context.

Payment Reform’s First Life

Early efforts to promote coordinated care emphasized payment reform. Toward that end, managed-care and health maintenance organizations used payment schedules and gatekeeper physicians to create provider networks. In addition, the Clinton administration introduced proposals to implement “pay for performance” and dedicated quality-improvement initiatives, suggesting that financial pressures might force the coordination and rationalization of care. But Congress rejected payment-focused reform, and market preferences eliminated managed-care pressures.

Commentators then suggested that payment reform could happen only in conjunction with provider-based reforms, and the Institute of Medicine later issued a series of reports calling for pairing payment solutions with structural reform. Then, when the Affordable Care Act instituted Medicare’s Shared Savings Program in 2010, it invited providers to create ACOs and to accept changes in reimbursement that allowed them to recoup part of any savings they generated. Eventually, however, prospective ACOs were given the option of continuing under Medicare’s traditional fee-for-service payments. In other words, providers were encouraged to pursue structural reform while being permitted to avoid any constraint from payment reform.


The Disappointments of Provider Reform

The continued failure of payment-driven reform has sadly given provider-based reform a blank check. The U.S. health sector has been in a merger-and-acquisition frenzy for nearly 20 years, and much of the integration has been justified as an effort to construct ACOs. Buzz phrases such as “clinical integration” and “eliminating fragmentation” are routinely paraded before regulators who scrutinize proposed mergers.

The problem, of course, is that after waves of acquisitions, most hospital markets are now highly concentrated and lack meaningful competition. And, consistent with basic economic theory, hospital systems that acquired dominant market shares dramatically increased prices for health care services. Perhaps even worse is that these large entities have shown little capacity for achieving the efficiencies they promised through coordinated care. Newly integrated delivery systems retain their inefficiencies and bring higher prices without any evident reduction in costs or errors.

We don’t know exactly why efforts at integration have not yielded efficiencies, and it seems we simply didn’t think very hard about it. The health reform debate focused primarily on a handful of success stories we all can repeat in our sleep: Kaiser, Geisinger, Intermountain. The plan was to have other hospital systems mimic them. That is like instructing all high-tech companies to mimic Apple, as if what makes Apple successful is an easy-to-follow cookbook for large-scale structural change.

It is a curiosity about the U.S. health system that producers with better outcomes and lower costs than their competitors cannot dominate the market. Kaiser, for example, has tried but failed to enter more local markets. But it is foolhardy to think that the systems that have not achieved Kaiser’s success can replicate it simply with the help of government regulators. This duplication strategy at best seems mindless, and at worst smacks of a Khrushchev-era economic policy.

The truth is, despite a glut of business press and how-to manuals, we still understand very little about why certain organizations succeed and others do not. With all the complexities of delivering medical care, we should expect even more variation among health care providers than among manufacturers. We likewise should be very hesitant to claim we understand what works and prescribe nationwide structural reforms.

Concrete Steps for the Future of ACOs

Precisely because we don’t know what works at this juncture, we cannot continue encouraging the formation of vast integrated systems that are difficult to disentangle. Until we have more evidence that integration yields efficiencies, regulators should continue to halt mergers that harm competition.

But scrutinizing mergers will only prevent further damage. We also must improve our delivery system, and we cannot give up on the ACO as a potential source of innovative configurations. Specifically, we should:

  1. Redefine and broaden our concept of an ACO. Too much ACO formation has emphasized linking hospitals with other providers. Instead of this top-down approach, we should work from the bottom up by linking providers with consumers and payors, so that the focus is on serving patients’ needs and managing budgets.
  2. Encourage nontraditional parties — such as social workers, professionals who help people navigate the health care system (often called “navigators”), and IT companies — to lead efforts at ACO formation. These parties would be well equipped to construct networks that provide accountability, given their expertise in connecting consumers to complex organizations and advocating on behalf of those consumers.
  3. Use contract-based and virtual provider collaborations instead of relying on mergers. Joining providers under common ownership might not be necessary. Electronic health records (EHRs) and other information technologies have the potential to create platforms that enable coordination without incurring the high costs of integration. EHR tools can also allow patients to control their own information and tailor collaborations to individual patients’ needs.
  4. Entertain disruptively innovative reconstructions of the health care delivery system — ones that make use of mobile health, medical tourism, and informatics. Many technology companies that traditionally have not participated in the health sector are now offering improvements to our delivery system. Because business scholarship tells us that outsiders frequently introduce the most valuable innovations to a market, we should ensure that regulatory barriers do not preclude participation from unconventional participants.
  5. Perhaps most important, we cannot pursue structural reform without payment reform. We will distinguish valuable provider reforms from ineffective ones only if sustained revenue pressures force ACOs to be truly “accountable” to consumer demands and other economic realities.

Not every solution we try will work, but we’re likely to have more success letting providers figure out what works than telling them how to do it.


Barak Richman is the Edgar P. and Elizabeth C. Bartlett Professor of Law and a professor of business administration at Duke University.

 

McKinsey’s Plan to fight obesity…

http://www.mckinsey.com/Insights/Economic_Studies/How_the_world_could_better_fight_obesity

Executive Summary: Innovation vs Obesity_McKinsey

MGI Obesity_Full report_November 2014

Sensible stuff. Possibly the most sensible stuff I’ve seen on this. Good for them…

How the world could better fight obesity

November 2014 | by Richard Dobbs, Corinne Sawers, Fraser Thompson, James Manyika, Jonathan Woetzel, Peter Child, Sorcha McKenna, and Angela Spatharou

Obesity is a critical global issue that requires a comprehensive, international intervention strategy. More than 2.1 billion people—nearly 30 percent of the global population—are overweight or obese. That’s almost two and a half times the number of adults and children who are undernourished. Obesity is responsible for about 5 percent of all deaths a year worldwide, and its global economic impact amounts to roughly $2 trillion annually, or 2.8 percent of global GDP—nearly equivalent to the global impact of smoking or of armed violence, war, and terrorism.

Podcast

Implementing an Obesity Abatement Program

MGI’s Richard Dobbs and Corinne Sawers discuss how a holistic strategy, using a number of interventions, could reverse rising rates of obesity around the world.

And the problem—which is preventable—is rapidly getting worse. If the prevalence of obesity continues on its current trajectory, almost half of the world’s adult population will be overweight or obese by 2030.

Much of the global debate on this issue has become polarized and sometimes deeply antagonistic. Obesity is a complex, systemic issue with no single or simple solution. The global discord surrounding how to move forward underscores the need for integrated assessments of potential solutions. Lack of progress on these fronts is obstructing efforts to address rising rates of obesity.

A new McKinsey Global Institute (MGI) discussion paper, Overcoming obesity: An initial economic analysis, seeks to overcome these hurdles by offering an independent view on the components of a potential strategy. MGI has studied 74 interventions (in 18 areas) that are being discussed or piloted somewhere around the world to address obesity, including subsidized school meals for all, calorie and nutrition labeling, restrictions on advertising high-calorie food and drinks, and public-health campaigns. We found sufficient data on 44 of these interventions, in 16 areas.

Although the research offers an initial economic analysis of obesity, our analysis is by no means complete. Rather, we see our work on a potential program to address obesity as the equivalent of the maps used by 16th-century navigators. Some islands were missing and some continents misshapen in these maps, but they were still helpful to the sailors of that era. We are sure that we have missed some interventions and over- or underestimated the impact of others. But we hope that our work will be a useful guide and a starting point for efforts in the years to come, as we and others develop this analysis and gradually compile a more comprehensive evidence base on this topic.

We have focused on understanding what it takes to address obesity by changing the energy balance of individuals through adjustments in eating habits or physical activity. However, some important questions we have not yet addressed require considerable further research. These questions include the role of different nutrients in affecting satiety hormones and metabolism, as well as the relationship between the gut microbiome and obesity. As more clarity develops in these research areas, we look forward to the emergence of important insights about which interventions are likely to work and how to integrate them into an antiobesity drive.

The main findings of this discussion paper include:

  • Existing evidence indicates that no single intervention is likely to have a significant overall impact. A systemic, sustained portfolio of initiatives, delivered at scale, is needed to reverse the health burden. Almost all the identified interventions (exhibit) are cost-effective for society—savings on healthcare costs and higher productivity could outweigh the direct investment required by the intervention when assessed over the full lifetime of the target population. In the United Kingdom, for instance, such a program could reverse rising obesity, saving the National Health Service about $1.2 billion a year.
  • Education and personal responsibility are critical elements of any program aiming to reduce obesity, but they are not sufficient on their own. Other required interventions rely less on conscious choices by individuals and more on changes to the environment and societal norms. They include reducing default portion sizes, changing marketing practices, and restructuring urban and education environments to facilitate physical activities.
  • No individual sector in society can address obesity on its own—not governments, retailers, consumer-goods companies, restaurants, employers, media organizations, educators, healthcare providers, or individuals. Capturing the full potential impact requires engagement from as many sectors as possible. Successful precedents suggest that a combination of top-down corporate and government interventions, together with bottom-up community-led ones, will be required to change public-health outcomes. Moreover, some kind of coordination will probably be required to capture potentially high-impact industry interventions, since any first mover faces market-share risks.
  • Implementing an obesity-abatement program on the required scale will not be easy. We see four imperatives: (1) as many interventions as possible should be deployed at scale and delivered effectively by the full range of sectors in society; (2) understanding how to align incentives and build cooperation will be critical to success; (3) there should not be an undue focus on prioritizing interventions, as this can hamper constructive action; and (4) while investment in research should continue, society should also engage in trial and error, particularly where risks are low.

Exhibit

Cost-effective interventions to reduce obesity in the United Kingdom include controlling portion sizes and reducing the availability of high-calorie foods.

The evidence base on the clinical and behavioral interventions to reduce obesity is far from complete, and ongoing investment in research is an imperative. However, in many cases this requirement is proving a barrier to action. It need not be so. Rather than wait for perfect proof of what works, we should experiment with solutions, especially in the many areas where interventions are low risk. We have enough knowledge to do more.

About the authors

Richard Dobbs, James Manyika, and Jonathan Woetzel are directors of the McKinsey Global Institute, where Corinne Sawers is a fellow and Fraser Thompson is a senior fellow; Peter Child is a director in McKinsey’s London office; Sorcha McKenna is a principal in the Dublin office; and Angela Spatharou is a principal in the Mexico City office.

 


NYT: Can Big Data Tell Us What Clinical Trials Don’t?

 

http://www.nytimes.com/2014/10/05/magazine/can-big-data-tell-us-what-clinical-trials-dont.html?src=twr

Credit: Illustration by Christopher Brand

When a helicopter rushed a 13-year-old girl showing symptoms suggestive of kidney failure to Stanford’s Packard Children’s Hospital, Jennifer Frankovich was the rheumatologist on call. She and a team of other doctors quickly diagnosed lupus, an autoimmune disease. But as they hurried to treat the girl, Frankovich thought that something about the patient’s particular combination of lupus symptoms — kidney problems, inflamed pancreas and blood vessels — rang a bell. In the past, she’d seen lupus patients with these symptoms develop life-threatening blood clots. Her colleagues in other specialties didn’t think there was cause to give the girl anti-clotting drugs, so Frankovich deferred to them. But she retained her suspicions. “I could not forget these cases,” she says.

Back in her office, she found that the scientific literature had no studies on patients like this to guide her. So she did something unusual: She searched a database of all the lupus patients the hospital had seen over the previous five years, singling out those whose symptoms matched her patient’s, and ran an analysis to see whether they had developed blood clots. “I did some very simple statistics and brought the data to everybody that I had met with that morning,” she says. The change in attitude was striking. “It was very clear, based on the database, that she could be at an increased risk for a clot.”
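The “very simple statistics” described above amount to a cohort query: filter the hospital’s lupus records for patients whose symptoms match the index case, then compare their clot rate with the cohort as a whole. A minimal sketch of that idea, with entirely invented patient records and field names:

```python
# Hypothetical sketch of a cohort query like the one described above.
# Every record and field name here is invented for illustration only.
cohort = [
    dict(kidney=True,  pancreatitis=True,  vasculitis=True,  clot=True),
    dict(kidney=True,  pancreatitis=True,  vasculitis=False, clot=True),
    dict(kidney=False, pancreatitis=False, vasculitis=False, clot=False),
    dict(kidney=True,  pancreatitis=False, vasculitis=True,  clot=False),
    dict(kidney=True,  pancreatitis=True,  vasculitis=True,  clot=True),
    dict(kidney=False, pancreatitis=False, vasculitis=True,  clot=False),
]

# Patients whose symptom combination matches the index case
matches = [p for p in cohort
           if p["kidney"] and p["pancreatitis"] and p["vasculitis"]]

# Compare the clot rate in the matched subgroup with the whole cohort
clot_rate_matched = sum(p["clot"] for p in matches) / len(matches)
clot_rate_overall = sum(p["clot"] for p in cohort) / len(cohort)
print(f"matched: n={len(matches)}, clot rate={clot_rate_matched:.0%}")
print(f"overall: n={len(cohort)}, clot rate={clot_rate_overall:.0%}")
```

A real analysis would, of course, need proper statistical testing and far more patients; the point is only how little machinery the basic comparison requires.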

The girl was given the drug, and she did not develop a clot. “At the end of the day, we don’t know whether it was the right decision,” says Chris Longhurst, a pediatrician and the chief medical information officer at Stanford Children’s Health, who is a colleague of Frankovich’s. But they felt that it was the best they could do with the limited information they had.

A large, costly and time-consuming clinical trial with proper controls might someday prove Frankovich’s hypothesis correct. But large, costly and time-consuming clinical trials are rarely carried out for uncommon complications of this sort. In the absence of such focused research, doctors and scientists are increasingly dipping into enormous troves of data that already exist — namely, the aggregated medical records of thousands or even millions of patients — to uncover patterns that might help steer care.

The Tatonetti Laboratory at Columbia University is a nexus in this search for signal in the noise. There, Nicholas Tatonetti, an assistant professor of biomedical informatics — an interdisciplinary field that combines computer science and medicine — develops algorithms to trawl medical databases and turn up correlations. For his doctoral thesis, he mined the F.D.A.’s records of adverse drug reactions to identify pairs of medications that seemed to cause problems when taken together. He found an interaction between two very commonly prescribed drugs: The antidepressant paroxetine (marketed as Paxil) and the cholesterol-lowering medication pravastatin were connected to higher blood-sugar levels. Taken individually, the drugs didn’t affect glucose levels. But taken together, the side-effect was impossible to ignore. “Nobody had ever thought to look for it,” Tatonetti says, “and so nobody had ever found it.”
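The pair-mining idea can be illustrated in a few lines: compare the rate of a side effect among reports mentioning each drug alone against reports mentioning both together, and flag the pair when the combined rate stands out. The report data below are invented, and this is only a toy version of the approach; real adverse-event mining must correct for confounding and multiple comparisons.

```python
# Toy illustration of drug-pair interaction mining. Each report lists the
# drugs mentioned and whether high blood sugar was reported. All data are
# invented for illustration.
reports = [
    ({"paroxetine"}, False),
    ({"paroxetine"}, False),
    ({"pravastatin"}, False),
    ({"pravastatin"}, False),
    ({"paroxetine", "pravastatin"}, True),
    ({"paroxetine", "pravastatin"}, True),
    ({"paroxetine", "pravastatin"}, False),
]

def effect_rate(predicate):
    """Fraction of reports matching `predicate` that show the side effect."""
    hits = [flag for drugs, flag in reports if predicate(drugs)]
    return sum(hits) / len(hits)

rate_a = effect_rate(lambda d: d == {"paroxetine"})
rate_b = effect_rate(lambda d: d == {"pravastatin"})
rate_both = effect_rate(lambda d: {"paroxetine", "pravastatin"} <= d)

# Flag the pair when the combined rate clearly exceeds either drug alone
if rate_both > max(rate_a, rate_b):
    print(f"possible interaction: {rate_both:.0%} together "
          f"vs {rate_a:.0%} / {rate_b:.0%} alone")
```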

The potential for this practice extends far beyond drug interactions. In the past, researchers noticed that being born in certain months or seasons appears to be linked to a higher risk of some diseases. In the Northern Hemisphere, people with multiple sclerosis tend to be born in the spring, while in the Southern Hemisphere they tend to be born in November; people with schizophrenia tend to have been born during the winter. There are numerous correlations like this, and the reasons for them are still foggy — a problem Tatonetti and a graduate assistant, Mary Boland, hope to solve by parsing the data on a vast array of outside factors. Tatonetti describes it as a quest to figure out “how these diseases could be dependent on birth month in a way that’s not just astrology.” Other researchers think data-mining might also be particularly beneficial for cancer patients, because so few types of cancer are represented in clinical trials.

As with so much network-enabled data-tinkering, this research is freighted with serious privacy concerns. If these analyses are considered part of treatment, hospitals may allow them on the grounds of doing what is best for a patient. But if they are considered medical research, then everyone whose records are being used must give permission. In practice, the distinction can be fuzzy and often depends on the culture of the institution. After Frankovich wrote about her experience in The New England Journal of Medicine in 2011, her hospital warned her not to conduct such analyses again until a proper framework for using patient information was in place.

In the lab, ensuring that the data-mining conclusions hold water can also be tricky. By definition, a medical-records database contains information only on sick people who sought help, so it is inherently incomplete. Such databases also lack the controls of a clinical study and are full of other confounding factors that might trip up unwary researchers. Daniel Rubin, a professor of bioinformatics at Stanford, also warns that there have been no studies of data-driven medicine to determine whether it leads to positive outcomes more often than not. Because historical evidence is of “inferior quality,” he says, it has the potential to lead care astray.

Yet despite the pitfalls, developing a “learning health system” — one that can incorporate lessons from its own activities in real time — remains tantalizing to researchers. Stefan Thurner, a professor of complexity studies at the Medical University of Vienna, and his researcher, Peter Klimek, are working with a database of millions of people’s health-insurance claims, building networks of relationships among diseases. As they fill in the network with known connections and new ones mined from the data, Thurner and Klimek hope to be able to predict the health of individuals or of a population over time. On the clinical side, Longhurst has been advocating for a button in electronic medical-record software that would allow doctors to run automated searches for patients like theirs when no other sources of information are available.
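The core of a disease network like Thurner and Klimek’s is a co-occurrence count: how often do two diagnoses appear in the same person’s claims history? A minimal sketch, with invented patient data (a real system would weight edges by disease prevalence and test for significance):

```python
# Minimal sketch of building a disease co-occurrence network from claims.
# Patient histories are invented for illustration.
from collections import Counter
from itertools import combinations

claims = {
    "patient_a": {"diabetes", "hypertension", "kidney_disease"},
    "patient_b": {"diabetes", "hypertension"},
    "patient_c": {"asthma"},
    "patient_d": {"diabetes", "kidney_disease"},
}

# Each edge counts how many patients carry both diagnoses
cooccurrence = Counter()
for diagnoses in claims.values():
    for pair in combinations(sorted(diagnoses), 2):
        cooccurrence[pair] += 1

# The strongest edges hint at related conditions
for pair, count in cooccurrence.most_common(2):
    print(pair, count)
```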

With time, and with some crucial refinements, this kind of medicine may eventually become mainstream. Frankovich recalls a conversation with an older colleague. “She told me, ‘Research this decade benefits the next decade,’ ” Frankovich says. “That was how it was. But I feel like it doesn’t have to be that way anymore.”

Artificial intelligence meets the C-suite

http://www.mckinsey.com/Insights/Strategy/Artificial_intelligence_meets_the_C-suite


Technology is getting smarter, faster. Are you? Experts including the authors of The Second Machine Age, Erik Brynjolfsson and Andrew McAfee, examine the impact that “thinking” machines may have on top-management roles.

September 2014

The exact moment when computers got better than people at human tasks arrived in 2011, according to data scientist Jeremy Howard, at an otherwise inconsequential machine-learning competition in Germany. Contest participants were asked to design an algorithm that could recognize street signs, many of which were a bit blurry or dark. Humans correctly identified them 98.5 percent of the time. At 99.4 percent, the winning algorithm did even better.

Or maybe the moment came earlier that year, when IBM’s Watson computer defeated the two leading human Jeopardy! players on the planet. Whenever or wherever it was, it’s increasingly clear that the comparative advantage of humans over software has been steadily eroding. Machines and their learning-based algorithms have leapt forward in pattern-matching ability and in the nuances of interpreting and communicating complex information. The long-standing debate about computers as complements or substitutes for human labor has been renewed.

The matter is more than academic. Many of the jobs that had once seemed the sole province of humans—including those of pathologists, petroleum geologists, and law clerks—are now being performed by computers.

And so it must be asked: can software substitute for the responsibilities of senior managers in their roles at the top of today’s biggest corporations? In some activities, particularly when it comes to finding answers to problems, software already surpasses even the best managers. Knowing whether to assert your own expertise or to step out of the way is fast becoming a critical executive skill.

Video: Managing in the era of brilliant machines

In this interview with McKinsey’s Rik Kirkland, Erik Brynjolfsson and Andrew McAfee explain the organizational challenge posed by the Second Machine Age.

Yet senior managers are far from obsolete. As machine learning progresses at a rapid pace, top executives will be called on to create the innovative new organizational forms needed to crowdsource the far-flung human talent that’s coming online around the globe. Those executives will have to emphasize their creative abilities, their leadership skills, and their strategic thinking.

To sort out the exponential advance of deep-learning algorithms and what it means for managerial science, McKinsey’s Rik Kirkland conducted a series of interviews in January at the World Economic Forum’s annual meeting in Davos. Among those interviewed were two leading business academics—Erik Brynjolfsson and Andrew McAfee, coauthors of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W. W. Norton, January 2014)—and two leading entrepreneurs: Anthony Goldbloom, the founder and CEO of Kaggle (the San Francisco start-up that’s crowdsourcing predictive-analysis contests to help companies and researchers gain insights from big data); and data scientist Jeremy Howard. This edited transcript captures and combines highlights from those conversations.

The Second Machine Age

What is it and why does it matter?

Andrew McAfee: The Industrial Revolution was when humans overcame the limitations of our muscle power. We’re now in the early stages of doing the same thing to our mental capacity—infinitely multiplying it by virtue of digital technologies. There are two discontinuous changes that will stick in historians’ minds. The first is the development of artificial intelligence, and the kinds of things we’ve seen so far are the warm-up act for what’s to come. The second big deal is the global interconnection of the world’s population, billions of people who are not only becoming consumers but also joining the global pool of innovative talent.

Erik Brynjolfsson: The First Machine Age was about power systems and the ability to move large amounts of mass. The Second Machine Age is much more about automating and augmenting mental power and cognitive work. Humans were largely complements for the machines of the First Machine Age. In the Second Machine Age, it’s not so clear whether humans will be complements or machines will largely substitute for humans; we see examples of both. That potentially has some very different effects on employment, on incomes, on wages, and on the types of companies that are going to be successful.

Video: Putting artificial intelligence to work: An interview with Anthony Goldbloom and Jeremy Howard

Machine-learning experts Anthony Goldbloom and Jeremy Howard tell McKinsey’s Rik Kirkland how smart machines will impact employment.

Jeremy Howard: Today, machine-learning algorithms are actually as good as or better than humans at many things that we think of as being uniquely human capabilities. People whose job is to take boxes of legal documents and figure out which ones are discoverable— that job is rapidly disappearing because computers are much faster and better than people at it.

In 2012, a team of four expert pathologists looked through thousands of breast-cancer screening images, and identified the areas of what’s called mitosis, the areas which were the most active parts of a tumor. It takes four pathologists to do that because any two only agree with each other 50 percent of the time. It’s that hard to look at these images; there’s so much complexity. So they then took this kind of consensus of experts and fed those breast-cancer images with those tags to a machine-learning algorithm. The algorithm came back with something that agreed with the pathologists 60 percent of the time, so it is more accurate at identifying the very thing that these pathologists were trained for years to do. And this machine-learning algorithm was built by people with no background in life sciences at all. These are total domain newbies.

Andrew McAfee: We thought we knew, after a few decades of experience with computers and information technology, the comparative advantages of human and digital labor. But just in the past few years, we have seen astonishing progress. A digital brain can now drive a car down a street and not hit anything or hurt anyone—that’s a high-stakes exercise in pattern matching involving lots of different kinds of data and a constantly changing environment.

Why now?

Computers have been around for more than 50 years. Why is machine learning suddenly so important?

Erik Brynjolfsson: It’s been said that the greatest failing of the human mind is the inability to understand the exponential function. Daniela Rus—the chair of the Computer Science and Artificial Intelligence Lab at MIT—thinks that, if anything, our projections about how rapidly machine learning will become mainstream are too pessimistic. It’ll happen even faster. And that’s the way it works with exponential trends: they’re slower than we expect, then they catch us off guard and soar ahead.

Andrew McAfee: There’s a passage from a Hemingway novel about a man going broke in two ways: “gradually and then suddenly.” And that characterizes the progress of digital technologies. It was really slow and gradual and then, boom—suddenly, it’s right now.

Jeremy Howard: The difference here is each thing builds on each other thing. The data and the computational capability are increasing exponentially, and the more data you give these deep-learning networks and the more computational capability you give them, the better the result becomes because the results of previous machine-learning exercises can be fed back into the algorithms. That means each layer becomes a foundation for the next layer of machine learning, and the whole thing scales in a multiplicative way every year. There’s no reason to believe that has a limit.

Erik Brynjolfsson: With the foundational layers we now have in place, you can take a prior innovation and augment it to create something new. This is very different from the common idea that innovations get used up like low-hanging fruit. Now each innovation actually adds to our stock of building blocks and allows us to do new things.

One of my students, for example, built an app on Facebook. It took him about three weeks to build, and within a few months the app had reached 1.3 million users. He was able to do that with no particularly special skills and no company infrastructure, because he was building it on top of an existing platform, Facebook, which of course is built on the web, which is built on the Internet. Each of the prior innovations provided building blocks for new innovations. I think it’s no accident that so many of today’s innovators are younger than innovators were a generation ago; it’s so much easier to build on things that are preexisting.

Jeremy Howard: I think people are massively underestimating the impact, on both their organizations and on society, of the combination of data plus modern analytical techniques. The reason for that is very clear: these techniques are growing exponentially in capability, and the human brain just can’t conceive of that.

There is no organization that shouldn’t be thinking about leveraging these approaches, because either you do—in which case you’ll probably surpass the competition—or somebody else will. And by the time the competition has learned to leverage data really effectively, it’s probably going to be too late for you to try to catch up. Your competitors will be on the exponential path, and you’ll still be on that linear path.
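The gap Howard describes compounds quickly. A tiny numeric illustration of linear versus exponential improvement (the growth rates are invented; only the shape of the comparison matters):

```python
# Linear vs exponential improvement over a decade: a capability that gains
# 10% of its starting value per year versus one that doubles every year.
# The rates are illustrative, not measured.
linear = [1 + 0.1 * year for year in range(11)]
exponential = [2 ** year for year in range(11)]
print(f"year 10: linear {linear[10]:.1f}x, exponential {exponential[10]}x")
```

After ten years the linear path has doubled while the doubling path is three orders of magnitude ahead, which is why catching up late is so hard.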

Let me give you an example. Google announced last month that it had just completed mapping the exact location of every business, every household, and every street number in the entirety of France. You’d think it would have needed to send a team of 100 people out to each suburb and district to go around with a GPS and that the whole thing would take maybe a year, right? In fact, it took Google one hour.

Now, how did the company do that? Rather than programming a computer yourself to do something, with machine learning you give it some examples and it kind of figures out the rest. So Google took its street-view database—hundreds of millions of images—and had somebody manually go through a few hundred and circle the street numbers in them. Then Google fed that to a machine-learning algorithm and said, “You figure out what’s unique about those circled things, find them in the other 100 million images, and then read the numbers that you find.” That’s what took one hour. So when you switch from a traditional to a machine-learning way of doing things, you increase productivity and scalability by so many orders of magnitude that the nature of the challenges your organization faces totally changes.
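The workflow Howard describes — hand-label a small sample, let the model generalize to the unlabeled bulk — can be sketched with a toy classifier. Here a 1-nearest-neighbor rule on made-up two-dimensional feature vectors stands in for the real image model; all data and labels are invented for illustration.

```python
# Hedged sketch of "label a few, let the model do the rest": a handful of
# hand-labeled examples classify the unlabeled bulk via nearest neighbor.
def nearest_label(example, labeled):
    """Return the label of the closest labeled example (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: dist(item[0], example))[1]

# The "few hundred circled examples": feature vector -> street-number digit
labeled = [
    ((0.9, 0.1), "1"),
    ((0.8, 0.2), "1"),
    ((0.1, 0.9), "7"),
    ((0.2, 0.8), "7"),
]

# The "hundred million" unlabeled images, reduced to three toy vectors
unlabeled = [(0.85, 0.15), (0.15, 0.85), (0.7, 0.3)]
predictions = [nearest_label(x, labeled) for x in unlabeled]
print(predictions)
```

The economics in the anecdote come from this asymmetry: the human effort is proportional to the labeled sample, while the machine effort scales to the whole corpus.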

The senior-executive role

How will top managers go about their day-to-day jobs?

Andrew McAfee: The First Machine Age really led to the art and science and practice of management—to management as a discipline. As we expanded these big organizations, factories, and railways, we had to create organizations to oversee that very complicated infrastructure. We had to invent what management was.

In the Second Machine Age, there are going to be equally big changes to the art of running an organization.

I can’t think of a corner of the business world (or a discipline within it) that is immune to the astonishing technological progress we’re seeing. That clearly includes being at the top of a large global enterprise.

I don’t think this means that everything those leaders do right now becomes irrelevant. I’ve still never seen a piece of technology that could negotiate effectively. Or motivate and lead a team. Or figure out what’s going on in a rich social situation or what motivates people and how you get them to move in the direction you want.

These are human abilities. They’re going to stick around. But if the people currently running large enterprises think there’s nothing about the technology revolution that’s going to affect them, I think they would be naïve.

So the role of a senior manager in a deeply data-driven world is going to shift. I think the job is going to be to figure out, “Where do I actually add value and where should I get out of the way and go where the data take me?” That’s going to mean a very deep rethinking of the idea of the managerial “gut,” or intuition.

It’s striking how little data you need before you would want to switch over and start being data driven instead of intuition driven. Right now, there are a lot of leaders of organizations who say, “Of course I’m data driven. I take the data and I use that as an input to my final decision-making process.” But there’s a lot of research showing that, in general, this leads to a worse outcome than if you rely purely on the data. Now, there are a ton of wrinkles here. But on average, if you second-guess what the data tell you, you tend to have worse results. And it’s very painful—especially for experienced, successful people—to walk away quickly from the idea that there’s something inherently magical or unsurpassable about our particular intuition.

Jeremy Howard: Top executives get where they are because they are really, really good at what they do. And these executives trust the people around them because they are also good at what they do and because of their domain expertise. Unfortunately, this now saddles executives with a real difficulty, which is how to become data driven when your entire culture is built, by definition, on domain expertise. Everybody who is a domain expert, everybody who is running an organization or serves on a senior-executive team, really believes in their capability and for good reason—it got them there. But in a sense, you are suffering from survivor bias, right?

You got there because you’re successful, and you’re successful because you got there. You are going to underestimate, fundamentally, the importance of data. The only way to understand data is to look at these data-driven companies like Facebook and Netflix and Amazon and Google and say, “OK, you know, I can see that’s a different way of running an organization.” It is certainly not the case that domain expertise is suddenly redundant. But data expertise is at least as important and will become exponentially more important. So this is the trick. Data will tell you what’s really going on, whereas domain expertise will always bias you toward the status quo, and that makes it very hard to keep up with these disruptions.

Erik Brynjolfsson: Pablo Picasso once made a great observation. He said, “Computers are useless. They can only give you answers.” I think he was half right. It’s true they give you answers—but that’s not useless; that has some value. What he was stressing was the importance of being able to ask the right questions, and that skill is going to be very important going forward and will require not just technical skills but also some domain knowledge of what your customers are demanding, even if they don’t know it. This combination of technical skills and domain knowledge is the sweet spot going forward.

Anthony Goldbloom: Two pieces are required to be able to do a really good job in solving a machine-learning problem. The first is somebody who knows what problem to solve and can identify the data sets that might be useful in solving it. Once you get to that point, the best thing you can possibly do is to get rid of the domain expert who comes with preconceptions about what are the interesting correlations or relationships in the data and to bring in somebody who’s really good at drawing signals out of data.

The oil-and-gas industry, for instance, has incredibly rich data sources. As they’re drilling, a lot of their drill bits have sensors that follow the drill bit. And somewhere between every 2 and 15 inches, they’re collecting data on the rock that the drill bit is passing through. They also have seismic data, where they shoot sound waves down into the rock and, based on the time it takes for those sound waves to be captured by a recorder, they can get a sense for what’s under the earth. Now these are incredibly rich and complex data sets and, at the moment, they’ve been mostly manually interpreted. And when you manually interpret what comes off a sensor on a drill bit or a seismic survey, you miss a lot of the richness that a machine-learning algorithm can pick up.

Andrew McAfee: The better you get at doing lots of iterations and lots of experimentation—each perhaps pretty small, each perhaps pretty low-risk and incremental—the more it all adds up over time. But the pilot programs in big enterprises seem to be very precisely engineered never to fail—and to demonstrate the brilliance of the person who had the idea in the first place.

That makes for very shaky edifices, even though they’re designed to not fall apart. By contrast, when you look at what truly innovative companies are doing, they’re asking, “How do I falsify my hypothesis? How do I bang on this idea really hard and actually see if it’s any good?” When you look at a lot of the brilliant web companies, they do hundreds or thousands of experiments a day. It’s easy because they’ve got this test platform called the website. And they can do subtle changes and watch them add up over time.

So one of the implications of the manifested brilliance of the crowd applies to that ancient head-scratcher in economics: what the boundary of the firm should be. What should I be doing myself versus what should I be outsourcing? And, now, what should I be crowdsourcing?

Implications for talent and hiring

It’s important to make sure that the organization has the right skills.

Jeremy Howard: Here’s how Google does HR. It has a unit called the human performance analytics group, which takes data about the performance of all of its employees: what interview questions they were asked, where their office was located, how that part of the organization was structured, and so forth. Then it runs data analytics to figure out what interview methods work best and what career paths are the most successful.

Anthony Goldbloom: One huge limitation that we see with traditional Fortune 500 companies—and maybe this seems like a facile example, but I think it’s more profound than it seems at first glance—is that they have very rigid pay scales.

And they’re competing with Google, which is willing to pay $5 million a year to somebody who’s really great at building algorithms. The more rigid pay scales at traditional companies don’t allow them to do that, and that’s irrational because the return on investment on a $5 million, incredibly capable data scientist is huge. The traditional Fortune 500 companies are always saying they can’t hire anyone. Well, one reason is they’re not willing to pay what a great data scientist can be paid elsewhere. Not that it’s just about money; the best data scientists are also motivated by interesting problems and, probably most important, by the idea of working with other brilliant people.

Machine learning and computers aren’t terribly good at creative thinking, so the idea that the rewards of most jobs and people will be based on their ability to think creatively is probably right.

About the author

This edited roundtable is adapted from interviews conducted by Rik Kirkland, senior managing editor of McKinsey Publishing, who is based in McKinsey’s New York office.