FT: Big data: are we making a big mistake?

A very good article on the ins and outs of big data.

http://www.ft.com/intl/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html#axzz3AVZ1Wv00

March 28, 2014 11:38 am

Big data: are we making a big mistake?

Big data is a vague term for a massive phenomenon that has rapidly become an obsession with entrepreneurs, scientists, governments and the media
Illustration by Ed Nacional © Ed Nacional

Five years ago, a team of researchers from Google announced a remarkable achievement in one of the world’s top scientific journals, Nature. Without needing the results of a single medical check-up, they were nevertheless able to track the spread of influenza across the US. What’s more, they could do it more quickly than the Centers for Disease Control and Prevention (CDC). Google’s tracking had only a day’s delay, compared with the week or more it took for the CDC to assemble a picture based on reports from doctors’ surgeries. Google was faster because it was tracking the outbreak by finding a correlation between what people searched for online and whether they had flu symptoms.

Not only was “Google Flu Trends” quick, accurate and cheap, it was theory-free. Google’s engineers didn’t bother to develop a hypothesis about what search terms – “flu symptoms” or “pharmacies near me” – might be correlated with the spread of the disease itself. The Google team just took their top 50 million search terms and let the algorithms do the work.

The success of Google Flu Trends became emblematic of the hot new trend in business, technology and science: “Big Data”. What, excited journalists asked, can science learn from Google?

As with so many buzzwords, “big data” is a vague term, often thrown around by people with something to sell. Some emphasise the sheer scale of the data sets that now exist – the Large Hadron Collider’s computers, for example, store 15 petabytes a year of data, equivalent to about 15,000 years’ worth of your favourite music.

But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast. Google Flu Trends was built on found data and it’s this sort of data that interests me here. Such data sets can be even bigger than the LHC data – Facebook’s is – but just as noteworthy is the fact that they are cheap to collect relative to their size, they are a messy collage of data points collected for disparate purposes and they can be updated in real time. As our communication, leisure and commerce have moved to the internet and the internet has moved into our phones, our cars and even our glasses, life can be recorded and quantified in a way that would have been hard to imagine just a decade ago.

Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.


Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.”

Found data underpin the new internet economy as companies such as Google, Facebook and Amazon seek new ways to understand our lives through our data exhaust. Since Edward Snowden’s leaks about the scale and scope of US electronic surveillance, it has become apparent that the security services are just as fascinated with what they might learn from our data exhaust.

Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes.

But while big data promise much to scientists, entrepreneurs and governments, they are doomed to disappoint us if we ignore some very familiar statistical lessons.

“There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.”

. . .

Four years after the original Nature paper was published, Nature News had sad tidings to convey: the latest flu outbreak had claimed an unexpected victim: Google Flu Trends. After reliably providing a swift and accurate account of flu outbreaks for several winters, the theory-free, data-rich model had lost its nose for where flu was going. Google’s model pointed to a severe outbreak but when the slow-and-steady data from the CDC arrived, they showed that Google’s estimates of the spread of flu-like illnesses were overstated by almost a factor of two.

The problem was that Google did not know – could not begin to know – what linked the search terms with the spread of flu. Google’s engineers weren’t trying to figure out what caused what. They were merely finding statistical patterns in the data. They cared about correlation rather than causation. This is common in big data analysis. Figuring out what causes what is hard (impossible, some say). Figuring out what is correlated with what is much cheaper and easier. That is why, according to Viktor Mayer-Schönberger and Kenneth Cukier’s book, Big Data, “causality won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning”.

But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down. One explanation of the Flu Trends failure is that the news was full of scary stories about flu in December 2012 and that these stories provoked internet searches by people who were healthy. Another possible explanation is that Google’s own search algorithm moved the goalposts when it began automatically suggesting diagnoses when people entered medical symptoms.


Google Flu Trends will bounce back, recalibrated with fresh data – and rightly so. There are many reasons to be excited about the broader opportunities offered to us by the ease with which we can gather and analyse vast data sets. But unless we learn the lessons of this episode, we will find ourselves repeating it.

Statisticians have spent the past 200 years figuring out what traps lie in wait when we try to understand the world through data. The data are bigger, faster and cheaper these days – but we must not pretend that the traps have all been made safe. They have not.

. . .

In 1936, the Republican Alfred Landon stood for election against President Franklin Delano Roosevelt. The respected magazine, The Literary Digest, shouldered the responsibility of forecasting the result. It conducted a postal opinion poll of astonishing ambition, with the aim of reaching 10 million people, a quarter of the electorate. The deluge of mailed-in replies can hardly be imagined but the Digest seemed to be relishing the scale of the task. In late August it reported, “Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totalled.”

After tabulating an astonishing 2.4 million returns as they flowed in over two months, The Literary Digest announced its conclusions: Landon would win by a convincing 55 per cent to 41 per cent, with a few voters favouring a third candidate.

The election delivered a very different result: Roosevelt crushed Landon by 61 per cent to 37 per cent. To add to The Literary Digest’s agony, a far smaller survey conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt. Mr Gallup understood something that The Literary Digest did not. When it comes to data, size isn’t everything.

Opinion polls are based on samples of the voting population at large. This means that opinion pollsters need to deal with two issues: sample error and sample bias.

Sample error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The “margin of error” reported in opinion polls reflects this risk and the larger the sample, the smaller the margin of error. A thousand interviews is a large enough sample for many purposes and Mr Gallup is reported to have conducted 3,000 interviews.
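The arithmetic behind that margin of error is worth seeing. For a proportion estimated from a simple random sample, the 95 per cent margin is roughly 1.96·√(p(1−p)/n), so it shrinks with the square root of the sample size. A minimal sketch (sample sizes chosen to match the story):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 3000, 2_400_000):
    print(f"n = {n:>9,}: ±{margin_of_error(n):.2%}")
```

A thousand interviews already gives a margin of about three percentage points; 2.4 million replies shrinks it to a few hundredths of a point – vastly more precision than any poll needs, and no protection at all against bias.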

But if 3,000 interviews were good, why weren’t 2.4 million far better? The answer is that sampling error has a far more dangerous friend: sampling bias. Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all. George Gallup took pains to find an unbiased sample because he knew that was far more important than finding a big one.

The Literary Digest, in its quest for a bigger data set, fumbled the question of a biased sample. It mailed out forms to people on a list it had compiled from automobile registrations and telephone directories – a sample that, at least in 1936, was disproportionately prosperous. To compound the problem, Landon supporters turned out to be more likely to mail back their answers. The combination of those two biases was enough to doom The Literary Digest’s poll. For each person George Gallup’s pollsters interviewed, The Literary Digest received 800 responses. All that gave them for their pains was a very precise estimate of the wrong answer.
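The Digest’s failure is easy to reproduce in simulation. The sketch below invents a stylised electorate – every number in it is made up for illustration, loosely echoing the 1936 result – and compares a small random sample with a huge sample drawn only from wealthier voters:

```python
import random

random.seed(1936)

# Stylised electorate: about 61% back Roosevelt overall, but the wealthy
# minority (the car and phone owners on the Digest's mailing list) mostly
# back Landon. All proportions are invented for illustration.
electorate = []
for _ in range(200_000):
    wealthy = random.random() < 0.30
    backs_fdr = random.random() < (0.40 if wealthy else 0.70)
    electorate.append((wealthy, backs_fdr))

def fdr_share(sample):
    return 100 * sum(b for _, b in sample) / len(sample)

gallup = random.sample(electorate, 3000)          # small but random
digest = [(w, b) for w, b in electorate if w]     # enormous but biased

print(f"true share:          {fdr_share(electorate):.1f}% Roosevelt")
print(f"random n=3,000:      {fdr_share(gallup):.1f}% Roosevelt")
print(f"biased n={len(digest):,}:    {fdr_share(digest):.1f}% Roosevelt")
```

The enormous biased sample lands confidently, and precisely, near the wrong answer; the small random one lands near the right one.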

The big data craze threatens to be The Literary Digest all over again. Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is.

Professor Viktor Mayer-Schönberger of Oxford’s Internet Institute, co-author of Big Data, told me that his favoured definition of a big data set is one where “N = All” – where we no longer have to sample, but we have the entire background population. Returning officers do not estimate an election result with a representative tally: they count the votes – all the votes. And when “N = All” there is indeed no issue of sampling bias because the sample includes everyone.

But is “N = All” really a good description of most of the found data sets we are considering? Probably not. “I would challenge the notion that one could ever have all the data,” says Patrick Wolfe, a computer scientist and professor of statistics at University College London.

An example is Twitter. It is in principle possible to record and analyse every message on Twitter and use it to draw conclusions about the public mood. (In practice, most researchers use a subset of that vast “fire hose” of data.) But while we can look at all the tweets, Twitter users are not representative of the population as a whole. (According to the Pew Research Internet Project, in 2013, US-based Twitter users were disproportionately young, urban or suburban, and black.)

There must always be a question about who and what is missing, especially with a messy pile of found data. Kaiser Fung, a data analyst and author of Numbersense, warns against simply assuming we have everything that matters. “N = All is often an assumption rather than a fact about the data,” he says.

Consider Boston’s Street Bump smartphone app, which uses a phone’s accelerometer to detect potholes without the need for city workers to patrol the streets. As citizens of Boston download the app and drive around, their phones automatically notify City Hall of the need to repair the road surface. Solving the technical challenges involved has produced, rather beautifully, an informative data exhaust that addresses a problem in a way that would have been inconceivable a few years ago. The City of Boston proudly proclaims that the “data provides the City with real-time information it uses to fix problems and plan long term investments.”

Yet what Street Bump really produces, left to its own devices, is a map of potholes that systematically favours young, affluent areas where more people own smartphones. Street Bump offers us “N = All” in the sense that every bump from every enabled phone can be recorded. That is not the same thing as recording every pothole. As Microsoft researcher Kate Crawford points out, found data contain systematic biases and it takes careful thought to spot and correct for those biases. Big data sets can seem comprehensive but the “N = All” is often a seductive illusion.

. . .

Who cares about causation or sampling bias, though, when there is money to be made? Corporations around the world must be salivating as they contemplate the uncanny success of the US discount department store Target, as famously reported by Charles Duhigg in The New York Times in 2012. Duhigg explained that Target has collected so much data on its customers, and is so skilled at analysing that data, that its insight into consumers can seem like magic.

Duhigg’s killer anecdote was of the man who stormed into a Target near Minneapolis and complained to the manager that the company was sending coupons for baby clothes and maternity wear to his teenage daughter. The manager apologised profusely and later called to apologise again – only to be told that the teenager was indeed pregnant. Her father hadn’t realised. Target, after analysing her purchases of unscented wipes and magnesium supplements, had.

Statistical sorcery? There is a more mundane explanation.

“There’s a huge false positive issue,” says Kaiser Fung, who has spent years developing similar approaches for retailers and advertisers. What Fung means is that we didn’t get to hear the countless stories about all the women who received coupons for babywear but who weren’t pregnant.


Hearing the anecdote, it’s easy to assume that Target’s algorithms are infallible – that everybody receiving coupons for onesies and wet wipes is pregnant. This is vanishingly unlikely. Indeed, it could be that pregnant women receive such offers merely because everybody on Target’s mailing list receives such offers. We should not buy the idea that Target employs mind-readers before considering how many misses attend each hit.
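Fung’s point is ordinary base-rate arithmetic. With invented numbers – say 4 per cent of customers on the mailing list are pregnant, the model catches 80 per cent of them, and it also mis-flags 10 per cent of everyone else – most coupon recipients still aren’t pregnant:

```python
# All three rates below are invented, purely to illustrate the arithmetic.
base_rate = 0.04            # P(pregnant) among mailing-list customers
sensitivity = 0.80          # P(flagged | pregnant)
false_positive_rate = 0.10  # P(flagged | not pregnant)

flagged_pregnant = base_rate * sensitivity
flagged_not = (1 - base_rate) * false_positive_rate
precision = flagged_pregnant / (flagged_pregnant + flagged_not)
print(f"share of coupon recipients actually pregnant: {precision:.0%}")  # → 25%
```

Even a model that catches four out of five pregnancies would, on these numbers, send three of every four baby-coupon books to women who aren’t pregnant – and only the hits make for anecdotes.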

In Charles Duhigg’s account, Target mixes in random offers, such as coupons for wine glasses, because pregnant customers would feel spooked if they realised how intimately the company’s computers understood them.

Fung has another explanation: Target mixes up its offers not because it would be weird to send an all-baby coupon-book to a woman who was pregnant but because the company knows that many of those coupon books will be sent to women who aren’t pregnant after all.

None of this suggests that such data analysis is worthless: it may be highly profitable. Even a modest increase in the accuracy of targeted special offers would be a prize worth winning. But profitability should not be conflated with omniscience.

. . .

In 2005, John Ioannidis, an epidemiologist, published a research paper with the self-explanatory title, “Why Most Published Research Findings Are False”. The paper became famous as a provocative diagnosis of a serious issue. One of the key ideas behind Ioannidis’s work is what statisticians call the “multiple-comparisons problem”.

It is routine, when examining a pattern in data, to ask whether such a pattern might have emerged by chance. If it is unlikely that the observed pattern could have emerged at random, we call that pattern “statistically significant”.

The multiple-comparisons problem arises when a researcher looks at many possible patterns. Consider a randomised trial in which vitamins are given to some primary schoolchildren and placebos are given to others. Do the vitamins work? That all depends on what we mean by “work”. The researchers could look at the children’s height, weight, prevalence of tooth decay, classroom behaviour, test scores, even (after waiting) prison record or earnings at the age of 25. Then there are combinations to check: do the vitamins have an effect on the poorer kids, the richer kids, the boys, the girls? Test enough different correlations and fluke results will drown out the real discoveries.

There are various ways to deal with this but the problem is more serious in large data sets, because there are vastly more possible comparisons than there are data points to compare. Without careful analysis, the ratio of genuine patterns to spurious patterns – of signal to noise – quickly tends to zero.

Worse still, one of the antidotes to the multiple-comparisons problem is transparency, allowing other researchers to figure out how many hypotheses were tested and how many contrary results are languishing in desk drawers because they just didn’t seem interesting enough to publish. Yet found data sets are rarely transparent. Amazon and Google, Facebook and Twitter, Target and Tesco – these companies aren’t about to share their data with you or anyone else.

New, large, cheap data sets and powerful analytical tools will pay dividends – nobody doubts that. And there are a few cases in which analysis of very large data sets has worked miracles. David Spiegelhalter of Cambridge points to Google Translate, which operates by statistically analysing hundreds of millions of documents that have been translated by humans and looking for patterns it can copy. This is an example of what computer scientists call “machine learning”, and it can deliver astonishing results with no preprogrammed grammatical rules. Google Translate is as close to a theory-free, data-driven algorithmic black box as we have – and it is, says Spiegelhalter, “an amazing achievement”. That achievement is built on the clever processing of enormous data sets.

But big data do not solve the problem that has obsessed statisticians and scientists for centuries: the problem of insight, of inferring what is going on, and figuring out how we might intervene to change a system for the better.

“We have a new resource here,” says Professor David Hand of Imperial College London. “But nobody wants ‘data’. What they want are the answers.”

To use big data to produce such answers will require large strides in statistical methods.

“It’s the wild west right now,” says Patrick Wolfe of UCL. “People who are clever and driven will twist and turn and use every tool to get sense out of these data sets, and that’s cool. But we’re flying a little bit blind at the moment.”

Statisticians are scrambling to develop new methods to seize the opportunity of big data. Such new methods are essential but they will work by building on the old statistical lessons, not by ignoring them.

Recall big data’s four articles of faith. Uncanny accuracy is easy to overrate if we simply ignore false positives, as with Target’s pregnancy predictor. The claim that causation has been “knocked off its pedestal” is fine if we are making predictions in a stable environment but not if the world is changing (as with Flu Trends) or if we ourselves hope to change it. The promise that “N = All”, and therefore that sampling bias does not matter, is simply not true in most cases that count. As for the idea that “with enough data, the numbers speak for themselves” – that seems hopelessly naive in data sets where spurious patterns vastly outnumber genuine discoveries.

“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever.

——————————————-

Tim Harford’s latest book is ‘The Undercover Economist Strikes Back’. To comment on this article please post below, or email magazineletters@ft.com

 

HBR Blog: Preventive Health Care Markets

 

https://hbr.org/2014/11/what-the-u-s-can-learn-from-india-and-brazil-about-preventive-health-care

What the U.S. Can Learn From India and Brazil About Preventive Health Care

NOVEMBER 14, 2014

Media companies, automakers, clothing retailers, and other industries have for decades looked abroad to find ideas and innovations they can adapt for the US market. But in one of America’s largest, fastest growing, and sometimes most confounding sectors — healthcare — the situation is different.

Imports like aspirin (Germany) and the heart transplant (South Africa) have become almost as American as apple pie. But in preventive health — keeping people from getting sick, or helping them manage the conditions they do have — we adapt too few of the best foreign innovations and models that have proven to be effective and sustainable at scale.

The U.S. spends far more per capita on healthcare than any other nation. Clearly we need to adopt cost-effective prevention efforts where we can. And we have to do so in a way that fits our health care infrastructure, including reliance on the private sector — a mix of for-profit and non-profit payers and providers — as the bedrock of our system. Two tactics that do fit, and can both lower costs and improve patient care, are more expansive use of mobile technology and of lay health workers. Both can be supported by non-profit intermediaries. Scalable models for these interventions are in use and successful in emerging economies, and are particularly germane when it comes to preventing illness and disease in low-income or geographically or linguistically hard-to-reach patient populations.

India’s Telemedicine

Take telemedicine, for example, an approach to getting information to remote populations at a fraction of the cost of circuit-riding physicians. In India, 70% of the population lives in rural areas, but only 3% of the country’s specialist physicians practice in those areas. A nonprofit called World Health Partners (WHP) is working to bridge the gap by identifying informal health providers at the village level and using live streaming over the internet to connect them to highly qualified specialists far away. These lay workers, compensated through consultation fees and a reasonable markup on drugs sold, measure blood pressure, temperature, heart rate, and respiratory rate, and can assess EKGs and transmit the results directly to the specialist physicians.

The University of California at Berkeley has studied the program and reported a dramatic increase in access to reproductive health services among six million villagers at a cost of $5.84 per adult for a couple of years’ protection from pregnancy. Perhaps the most important lesson for the U.S. in WHP’s telemedicine initiatives in India is its approach to scale. Rather than implementing a program and figuring out later how it might be brought to very large numbers of people, WHP is building scalability into the design through low-cost approaches and a reliance on for-profit rural practitioners — effectively working with the private sector to build a new market for preventive health.


Brazil’s Integration of Lay Health Workers

More deeply integrating lay workers into our health system offers another path to lowering costs and broadening the reach of preventative health care. Most nations, including the U.S., make some use of lay or community health workers, but Brazil is notable for the scale at which it does this, and its success in integrating such workers into its larger healthcare system. A recent Johns Hopkins study notes that Brazil now deploys over 220,000 Community Health Agents (CHAs) to reach more than half of its 200 million residents. They work as members of health teams, including at least one doctor, one nurse, an assistant nurse and six CHAs to serve approximately 1,000 families. All the team members are salaried, full-time employees, and the CHAs must live in the communities they serve, promoting and delivering preventative health practices such as breastfeeding, prenatal care, immunizations, and screening for diseases including HIV and tuberculosis. In tandem with this approach, Brazil now has one of the most rapidly declining childhood mortality rates in the world, and has made striking gains in immunization coverage and other measures of preventive health addressed by the CHAs.

While the U.S., too, has some promising community health worker models, such as “health coaches” at AtlantiCare in Atlantic City, N.J., and “promotoras” at Latino Health Access in Santa Ana, CA, Brazil’s experience offers us a path to scale, one that no longer views community health workers as “non-traditional,” but integrates them into the healthcare system, and, ultimately, pays for them in the same way that care in clinical settings is remunerated.

Mindset Before Model

The “market” for preventive services is almost nothing like the market for automobiles; we can’t rely on market forces alone to increase the flow of global preventive health innovations into the U.S. But we should recall that Japanese automakers had been innovating for a long time before American automakers got serious about exploring and adapting these innovations. The first change may need to be mindset: expanding our view of where we might find powerful models for improving preventive health in the U.S., expanding our idea of who should be involved in identifying, prototyping, and scaling these models, and thinking big — designing for scale — from the outset.


Nidhi Sahni is a Manager in the public health and global development practice with The Bridgespan Group, a nonprofit advisor to other nonprofits and philanthropy.


Michael Myers is Managing Director at The Rockefeller Foundation and leads its global health work.

RWJF: Making Sense of the Medicare Physician Payment Data Release: Uses, Limitations, and Potential – The Commonwealth Fund


PDF: 1789_Patel_making_sense_Medicare_phys_payment_data_release_ib

Overview

In April 2014, the Centers for Medicare and Medicaid Services released a data file containing information on Medicare payments made to physicians and other providers. Though an important achievement in promoting greater health system transparency, limitations in the data have hindered key users, including consumers, payers, and providers, from discerning meaningful information from the file. This brief outlines the significance of the data release, the limitations of the dataset, the current uses of the information, and proposals for rendering the file more meaningful for public use.

Fat adults, fat kids, fat pets: how we’re driving the obesity pandemic

 

http://www.smh.com.au/national/health/fat-adults-fat-kids-fat-pets-how-were-driving-the-obesity-pandemic-20141205-120cbb.html

Fat adults, fat kids, fat pets: how we’re driving the obesity pandemic

Nicky Phillips, Science Editor

New research finds that obesity has become a major pandemic and looks set to get worse – in animals as well as humans, writes Nicky Phillips.

Bulging issue: Processed foods and climate change are hastening the obesity pandemic. Photo: iStock

In the year 2000 when Cathy Freeman smashed the women’s 400-metre record at the Sydney Olympics, showcasing the best of our species’ physical abilities, the physique of many others crossed another, less auspicious line.

In that year the number of overweight people surpassed the number of people who were underweight.

While malnutrition remains a scourge in many parts of the third world, obesity elsewhere is now considered a pandemic – a global epidemic that has emerged in recent decades and costs Australia about $21 billion a year.

But it’s not just people battling the bulge.

The data is showing that pets and companion animals – such as cats, dogs and horses – have also dramatically increased in girth.

“There is something about our shared environment that is generating obesity in both humans and our companion animals,” says Professor David Raubenheimer, a nutritional ecologist at the University of Sydney’s Charles Perkins Centre.

A chief driver is economics, he says. Since the 1980s ultra-processed foods, which are cheaper than whole foods but far less nutritious, have flooded supermarkets and fast-food stores.

Climate change is also a factor, as higher concentrations of carbon dioxide diminish the nutrient quality of plants and crops, which are the basis of human and many animals’ diets.

The idea that environment plays a major role in the obesity epidemic is not new. But Raubenheimer’s work is trying to unravel the complex mechanisms that make the modern world we’ve created for ourselves an uneasy fit for our bodies, which evolved in a very different landscape more than 100,000 years ago.

“It’s only by properly understanding problems that we can hope to predict, avert or manage them,” says Raubenheimer, who notes that not a single country has yet reversed its obesity epidemic.

To understand the role of the environment on obesity, there are a few things to note about our internal workings.

All animals, including humans, have sophisticated internal appetite systems that influence food intake to ensure the body receives the correct balance of each major nutrient group: protein, fat and carbohydrates.

Research by Raubenheimer and his colleagues found protein to be the most dominant of these nutrient “appetites”. Their studies in animals and people consistently show individuals will overeat fats and carbohydrates in order to meet their protein requirement.

Given that early humans evolved in an environment where meals likely consisted of lean game and root plants, both of which contained little fat or sugars, a strong protein appetite makes sense.

But now think of the modern world, where sugary, fatty and highly-processed foods – such as pizza, muesli bars, cereals, burgers and biscuits – are cheap and plentiful. Eating a greater quantity of those foodstuffs will not satisfy the protein appetite, but they are often consumed in place of protein because “protein costs more”, says Raubenheimer.

Studies in middle- and high-income countries consistently find that people living in poorer communities are more likely to be overweight or obese.

“The global rise of ultra-processed products, largely driven by powerful trans-national corporations, began in the 1980s and thus coincides closely with the period in which there has been a doubling in the rates of obesity,” wrote Raubenheimer, in his study published in the British Journal of Nutrition in November.

This may also help explain why dogs have beefed up by a whopping 33 per cent, on average, and cats by 25 per cent over the past few decades.

“If it’s more expensive to buy protein balanced foods for ourselves, imagine economically stressed families’ response when they feed their pets,” he says.

But it’s not just multinationals affecting food quality.

Climate change is diluting the nutrients in plants, because when exposed to high temperatures and a carbon dioxide-rich environment, the percentage of protein and fibre in plant leaves drops, while the concentration of sugars and starches increases.

“There is an immense amount of research showing that one consistent impact of climate change is the nutritional composition of plants,” Raubenheimer says.

Given that plants make up 80 per cent of the human diet, Raubenheimer predicts that protein-diluted vegetables and crops will become another factor encouraging humans to overeat fats and carbohydrates to satiate their protein requirements.

Raubenheimer’s analysis suggests the impact of global warming on obesity rates will reach beyond plants to livestock animals that eat the plants, which people in turn consume as a major protein source.

If cattle and sheep graze on grasses with lower concentrations of the nutrient, they in turn will overeat to satiate their protein appetite, increasing their body weight, he says.

And while protein is a major driver of appetite, exposure to too much early in life may do more harm than good.

Numerous studies have found that babies fed formula, which has a higher concentration of protein than breast milk, are more likely to become obese later in life than breast-fed infants.

Raubenheimer says one explanation for this trend is that feeding infants high protein foods may be conditioning them to have a higher protein appetite for life.

“[This] is potentially causing those infants to overeat fats and carbs to a greater extent to satisfy their protein requirements,” he says.

While Raubenheimer and his collaborators at the Charles Perkins Centre know obesity emerges from a complex set of interactions between the environment, genetics and lifestyle factors, new approaches are desperately needed to tackle the problem, he says.

“We need interdisciplinary research, where approaches and concepts from multiple areas are applied to this major global crisis.”

Creating a Market for Disease Prevention

 

http://thevitalityinstitute.org/news/focus-on-pharma-creating-a-market-for-disease-prevention/

Focus on Pharma: Creating a Market for Disease Prevention

SustainAbility Newsletter “Radar” | Oct 30, 2014

Should pharmaceutical companies be in the business of producing pills, or of making people well? The answer is both. Elvira Thissen argues that with diminishing returns in medicines it is time for pharma companies to move away from philosophical discussions on prevention and adapt to new realities instead.

[…]

The Business Case for Prevention

A recent report by The Vitality Institute – founded by South Africa’s largest health insurance company – estimates potential annual savings in the US of $217–303 billion on healthcare costs by 2023 if evidence-based approaches to NCD prevention are rolled out.

At an estimated global cost of illness of nearly US$1.4 trillion in 2010 for cardiovascular disease and diabetes alone, there is a market for prevention. In the UK, the NHS spends 10% of its budget on treating diabetes, 80% of which goes to managing (partly preventable) complications. Reducing disease incidence represents a considerable value to governments, insurance companies and employers.
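The NHS figures above imply a striking share of total spend. A back-of-envelope sketch in Python, using only the percentages quoted in the article (illustrative arithmetic, not from the report itself):

```python
# Figures quoted above: the NHS spends 10% of its budget on treating
# diabetes, and 80% of that diabetes spend goes to managing (partly
# preventable) complications.
diabetes_share = 0.10
complications_within_diabetes = 0.80

# So diabetes complications alone absorb roughly 8% of the entire
# NHS budget - a rough upper bound on what prevention could free up.
complications_share_of_budget = diabetes_share * complications_within_diabetes
print(f"{complications_share_of_budget:.0%}")  # 8%
```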

Some sectors are already eyeing the value of this market.

[…]


On PSA Testing

http://www.australiandoctor.com.au/opinions/guest-view/why-do-doctors-keep-silent-about-their-own-prostat

Simon Chapman’s ebook: Let Sleeping Dogs Lie

http://www.australiandoctor.com.au/news/latest-news/nhrmc-finally-releases-its-psa-advice

Not sure what to say about PSA testing?

The NHMRC has finalised its PSA testing advice for doctors, in what is claimed to be the best summary of the evidence to date.

Released Tuesday, the document provides a backgrounder for GPs to discuss both the benefits and harms of PSA testing with asymptomatic men.

Following an extensive literature review, with input spanning general practice, urology and oncology, the guide provides a list of statistics to use in conversation with patients (see box below).

Professor Ian Olver, a member of the NHMRC’s expert advisory group, said the group was “as confident as we can be” in the figures.

“We’re trying to say that the reason this can’t be promoted as a population test for everyone is that there are benefits and risks that have to be balanced. Every man has to decide where that balance lies for him,” said Professor Olver, CEO of the Cancer Council Australia.

“We’re providing an evidence-based tool for practitioners to be able to have that discussion.”

For every 1000 low-risk, 60-year-old men tested annually over a decade:

  • Two will avoid dying of prostate cancer before age 85
  • Two will avoid metastatic prostate cancer before age 85
  • 87 will have a false-positive test leading to an unnecessary biopsy, and 28 will suffer significant side effects as a result
  • 28 will be “overdiagnosed” with a prostate cancer that would likely otherwise have remained asymptomatic
  • 25 will be “overtreated”, 7-10 of whom will be left impotent or incontinent as a result
  • PSA testing has “no discernible effect” on overall mortality
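The per-1000 figures above can be re-expressed as rates, and as harms per death avoided. A small Python sketch of that arithmetic (illustrative only; the NHMRC point estimates carry wide uncertainty):

```python
# NHMRC per-1000 figures quoted above, for low-risk 60-year-old men
# tested annually over a decade.
tested = 1000
deaths_avoided = 2
false_positives = 87
overdiagnosed = 28
overtreated = 25

# As rates: 8.7% of men tested get a false positive; 2.8% are
# overdiagnosed with a cancer that would likely have stayed silent.
false_positive_rate = false_positives / tested
overdiagnosis_rate = overdiagnosed / tested

# Harms per benefit: for each prostate-cancer death avoided, roughly
# 14 men are overdiagnosed and 12-13 overtreated.
overdiagnosed_per_death_avoided = overdiagnosed / deaths_avoided
overtreated_per_death_avoided = overtreated / deaths_avoided

print(false_positive_rate, overdiagnosis_rate,
      overdiagnosed_per_death_avoided, overtreated_per_death_avoided)
```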

The figures are largely unchanged from a draft version released last year, although the NHMRC has now stressed that the document “is not a substitute for relevant clinical practice guidelines and therefore does not contain recommendations”.

Meanwhile, GPs will have to wait until December for full consensus clinical practice guidelines, which are currently being developed by the Cancer Council Australia and Prostate Cancer Foundation of Australia.

These guidelines also have broad, multidisciplinary representation, and it is hoped they will provide some resolution to a debate that has divided Australia’s medical colleges in recent years.

731/8 Point Street

 

PDF: 731_8 Point Street, Pyrmont _ Feldi Property Agents – Pyrmont

http://www.feldiproperty.com/6304865

731/8 Point Street, Pyrmont

Offers over $4,500,000

AN OPPORTUNITY NOT TO BE MISSED

  • 3 bedrooms
  • 2 bathrooms
  • 2 car spaces

Those looking for one of the most dramatic Harbour Bridge and city views available, combined with a huge living and entertaining balcony, must inspect this property.

Boasting Mirvac quality interiors framed in floor-to-ceiling glass and surrounded by a wraparound balcony, this stunning penthouse captures harbour district and Bridge views from the prestigious Promontory building.

Close to restaurants and cafes in the waterfront precinct.

Open-plan living and dining flooded in all-day north-east sunshine.

  • Marble kitchen with Miele appliances, feature bathroom and deluxe ensuite
  • Three bedrooms, all with built-ins, and high ceilings throughout
  • Huge is the only way to describe the vast entertainers’ terrace, whose backdrop will ensure you and your guests gaze across one of the world’s most dynamic harbour panoramas
  • Internal laundry and pet-friendly building
  • Occupies the entire top floor, giving direct lift access to your own private foyer and entry
  • Video intercom and double security parking plus storage

Facilities include pool, gymnasium, spa and sauna.

A stunning location with amazing views, in every sense of the word.

INSPECT BY APPOINTMENT ONLY

*Whilst we make every endeavour to ensure the information provided is correct, we do not guarantee or give any warranty as to the accuracy of the details provided; interested parties must rely on their own enquiries.

Property Features
Property ID 26413
Bedrooms 3
Bathrooms 2
Garage 2
Building Size 350 sqm approx.

Robert Alfeldi

Mobile: 0418982688

Office: 02 8259 3333

Dr Atul Gawande – 2014 Reith Lectures

Lecture 1: Why Do Doctors Fail?

Lecture 2: The Century of the System

Lecture 3: The Problem of Hubris

Lecture 4: The Idea of Wellbeing

http://www.bbc.co.uk/programmes/articles/6F2X8TpsxrJpnsq82hggHW/dr-atul-gawande-2014-reith-lectures

Atul Gawande, MD, MPH is a practicing surgeon at Brigham and Women’s Hospital and Professor at both the Harvard School of Public Health and Harvard Medical School.

In his lecture series, The Future of Medicine, Dr Atul Gawande will examine the nature of progress and failure in medicine, a field defined by what he calls ‘the messy intersection of science and human fallibility’.

Known for both his clear analysis and vivid storytelling, he will explore the growing importance of systems in medicine and argue that the future role of the medical profession in our lives should be bigger than simply assuring health and survival.

The 2014 Reith Lectures

The first lecture, Why do Doctors Fail?, will explore the nature of imperfection in medicine. In particular, Gawande will examine how much of failure in medicine remains due to ignorance (lack of knowledge) and how much is due to ineptitude (failure to use existing knowledge) and what that means for where medical progress will come from in the future.

In the second lecture, The Century of the System, Gawande will focus on the impact that the development of systems has had – and should have in the future – on medicine and overcoming failures of ineptitude. He will dissect systems of all kinds, from simple checklists to complex mechanisms of many parts. And he will argue for how they can be better designed to transform care from the richest parts of the world to the poorest.

The third lecture, The Problem of Hubris, will examine the great unfixable problems in life and healthcare – aging and death. Gawande will argue that the reluctance of society and medical institutions to recognise the limits of what professionals can do is producing widespread suffering. But research is revealing how this can change.

The fourth and final lecture, The Idea of Wellbeing, will argue that medicine must shift from a focus on health and survival to a focus on wellbeing – on protecting, insofar as possible, people’s abilities to pursue their highest priorities in life. And, as he will suggest from the story of his father’s life and death from cancer, those priorities are nearly always more complex than simply to live longer.

Five things to know about Dr Atul Gawande

Find out about Atul Gawande ahead of his 2014 Reith Lectures…

1. In 2010, Time Magazine named him as one of the world’s most influential thinkers.

2. His 2009 New Yorker article – The Cost Conundrum – made waves when it compared the health care of two towns in Texas and suggested that more expensive care is often worse care. Barack Obama cited the article during his attempt to get Obamacare passed by the US Congress.

3. Atul Gawande’s 2012 TED talk – How do we heal medicine? – has been watched over 1m times.

4. Atul Gawande has written three bestselling books: Complications, Better and The Checklist Manifesto.

The Checklist Manifesto is about the importance of having a process for whatever you are doing. Better focuses on the drive for better medicine and health care systems. Complications was based on his training as a surgeon.

5. In 2013, Atul launched Ariadne Labs – a new health care innovation lab aiming ‘to provide scalable solutions that produce better care at the most critical moments in people’s lives everywhere’.

 

Professor Guy Maddern’s tips on protecting yourself in surgery

1. If you are away from a major hospital, get yourself to one. A particular problem, Professor Maddern says, exists when rural patients resist transfers to major hospitals because they don’t want to leave their families.

2. Lose weight and don’t smoke. The proportion of deaths where obesity was a factor increased slightly this year. “An operation done on a thin person relative to a fat person can have a completely different outcome,” Professor Maddern says. This is particularly important for older people, who have the most operations.

3. Go to a hospital that performs a lot of the type of surgery you are going to have, particularly if it is complex. Remember, practice makes perfect.

http://www.canberratimes.com.au/national/health/one-in-10-surgery-deaths-due-to-flawed-care-or-injury-caused-by-treatment-20141203-11z5y1.html

One in 10 surgery deaths due to flawed care or injury caused by treatment

Date December 3, 2014

Amy Corderoy, Health Editor, Sydney Morning Herald

Dangerous: Surgery risks can outweigh benefits. Photo: Nic Walker

More than one in 10 deaths during or after surgery involved flawed care or serious injury caused by the treatment, a national audit has found.

The Australian and New Zealand Audits of Surgical Mortality shows delays in treatment or decisions by surgeons to perform futile surgeries are still the most common problems linked to surgical deaths.

But surgery also appears to be getting a little safer, with the audit, which covers almost every surgery death in Australia, finding fewer faults with the medical care provided to patients than it has in the past.

Audit chair Guy Maddern said of the deaths where there were concerns, about 5 per cent involved serious adverse events that were likely to have contributed to the person’s death.

In about 8 per cent of cases, the audit found some area of care could have been delivered better.
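Those two figures together appear to be what drives the headline. A hedged reconstruction in Python, assuming the two categories do not overlap (the article does not say either way):

```python
# Figures quoted above: ~5% of audited deaths involved a serious
# adverse event likely to have contributed to the death, and in a
# further ~8% some area of care could have been delivered better.
serious_adverse_events = 0.05
care_could_be_better = 0.08

# If the categories are non-overlapping, they combine to about 13% -
# consistent with the "more than one in 10" headline.
combined = serious_adverse_events + care_could_be_better
print(f"{combined:.0%}")  # 13%
```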

“These are the sorts of deaths where it was a difficult surgery, and instead of going straight to an operation, maybe additional X-rays and imaging should have been pursued, or maybe the skill set of the team that was operating could have been more appropriate,” he said.

“Sometimes, of course, the result would have been exactly the same.”

Professor Maddern said some surgeons, particularly in general surgery, orthopaedics, and, to a lesser extent, neurosurgery, still needed to work on deciding not to proceed with surgeries where the risks outweighed the benefits.

“People are thinking a little bit longer and harder about whether an operation is really going to alter the outcome,” he said. “These are the types of cases where you know before you begin that it is not going to end well.”

However, in some areas with many patients with complex conditions, things were just more likely to go wrong.

The report, which includes data from nearly 18,600 deaths over five years, found that in 2013 the decision to operate was the most common reason a death was reviewed.

Overall, delays in treatment, linked to issues such as patients needing to be transferred or surgeons delaying the decision to operate, were still the most common problem, and in about 26 per cent of the deaths no surgery was performed.

Between 2009 and 2013, the report shows a decrease in the proportion of patients who died with serious infection causing sepsis from 12 per cent to 9 per cent, while significant post-operative bleeding decreased from 12 per cent to 11 per cent. Serious adverse events halved from 6 per cent of deaths in 2009 to 3 per cent in 2013.

Every public hospital now participates in the audit, along with all private hospitals in every state except NSW. However, Professor Maddern said he was pleased NSW private hospitals had agreed to participate in future.

Doctors are now provided with regular case studies from the audit, in which de-identified information about the death is provided, so they can learn from any mistakes.

“What we are seeing is an overall decrease in deaths associated with surgical care, which may be due to many things, and we think the audit is helping,” he said. “It’s making people think twice.”