All posts by blackfriar

Machines put half of US work at risk

Great tip from Michael Griffith, on the back of last night’s terrific dinner conversation at the Nicholas Gruen organised feast at Hellenic Republic…

http://www.bloomberg.com/news/2014-03-12/your-job-taught-to-machines-puts-half-u-s-work-at-risk.html

Paper (PDF): The_Future_of_Employment

Your Job Taught to Machines Puts Half U.S. Work at Risk

By Aki Ito  Mar 12, 2014 3:01 PM ET

Who needs an army of lawyers when you have a computer?

When Minneapolis attorney William Greene faced the task of combing through 1.3 million electronic documents in a recent case, he turned to a so-called smart computer program. Three associates selected relevant documents from a smaller sample, “teaching” their reasoning to the computer. The software’s algorithms then sorted the remaining material by importance.

“We were able to get the information we needed after reviewing only 2.3 percent of the documents,” said Greene, a Minneapolis-based partner at law firm Stinson Leonard Street LLP.


Artificial intelligence has arrived in the American workplace, spawning tools that replicate human judgments that were too complicated and subtle to distill into instructions for a computer. Algorithms that “learn” from past examples relieve engineers of the need to write out every command.
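The learning-from-examples idea can be sketched in a few lines of code. The fragment below is a deliberately simplified illustration, not any vendor’s actual system: it learns word weights from documents a reviewer has labeled relevant or irrelevant, then uses those weights to rank unread documents.

```python
from collections import Counter

def train(labeled_docs):
    """Learn word weights from (text, is_relevant) examples."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in labeled_docs:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    # A word's weight is how much more often it appears in relevant documents.
    vocab = set(relevant) | set(irrelevant)
    return {w: relevant[w] - irrelevant[w] for w in vocab}

def score(weights, text):
    """Higher score means more likely relevant; used to rank unread documents."""
    return sum(weights.get(w, 0) for w in text.lower().split())

# Hypothetical training examples marked up by the associates
examples = [
    ("merger agreement draft attached", True),
    ("merger terms and payment schedule", True),
    ("lunch order for friday", False),
]
weights = train(examples)
ranked = sorted(["team lunch friday", "revised merger agreement"],
                key=lambda d: score(weights, d), reverse=True)
```

Real predictive-coding systems use far richer statistical models, but the workflow is the same: humans label a sample, the software generalizes the labels to the rest of the collection.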

The advances, coupled with mobile robots wired with this intelligence, make it likely that occupations employing almost half of today’s U.S. workers, ranging from loan officers to cab drivers and real estate agents, will become possible to automate in the next decade or two, according to a study done at the University of Oxford in the U.K.


“These transitions have happened before,” said Carl Benedikt Frey, co-author of the study and a research fellow at the Oxford Martin Programme on the Impacts of Future Technology. “What’s different this time is that technological change is happening even faster, and it may affect a greater variety of jobs.”

Profound Imprint

It’s a transition on the heels of an information-technology revolution that’s already left a profound imprint on employment across the globe. For both physical and mental labor, computers and robots replaced tasks that could be specified in step-by-step instructions — jobs that involved routine responsibilities that were fully understood.

That eliminated work for typists, travel agents and a whole array of middle-class earners over a single generation.

Yet even increasingly powerful computers faced a mammoth obstacle: they could execute only what they were explicitly told. It was a nightmare for engineers trying to anticipate every command necessary to get software to operate vehicles or accurately recognize speech. That kept many jobs in the exclusive province of human labor — until recently.

Oxford’s Frey is convinced of the broader reach of technology now because of advances in machine learning, a branch of artificial intelligence in which software “learns” how to make decisions by detecting patterns in the decisions humans have made.


702 Occupations

The approach has powered leapfrog improvements in making self-driving cars and voice search a reality in the past few years. To estimate the impact that will have on 702 U.S. occupations, Frey and colleague Michael Osborne applied some of their own machine learning.

They first looked at detailed descriptions for 70 of those jobs and classified them as either possible or impossible to computerize. Frey and Osborne then fed that data to an algorithm that analyzed what kinds of jobs lend themselves to automation and predicted probabilities for the remaining 632 professions.
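Their approach, hand-labeling a subset and letting a classifier generalize to the rest, can be sketched as follows. This is a toy nearest-neighbour version with invented feature scores, not the Gaussian process classifier the paper actually used; it only illustrates the shape of the method.

```python
def automation_probability(job, labeled_jobs, k=3):
    """Estimate P(computerisable) for an unlabeled job from the k most
    similar hand-labeled jobs (a nearest-neighbour sketch)."""
    def distance(a, b):
        # Jobs described by numeric task scores (hypothetical features)
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labeled_jobs, key=lambda j: distance(job, j[0]))[:k]
    return sum(label for _, label in nearest) / k

# Hand-labeled examples: (routine, social_intelligence, creativity) -> computerisable?
labeled = [((0.9, 0.1, 0.1), 1), ((0.8, 0.2, 0.1), 1),
           ((0.2, 0.9, 0.7), 0), ((0.1, 0.8, 0.9), 0)]

# A routine-heavy job lands near the computerisable examples
p = automation_probability((0.85, 0.15, 0.1), labeled)
```

The real study scored occupations on dimensions like social intelligence, creativity and perception/manipulation; the numbers above are made up purely for illustration.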

The higher that percentage, the sooner computers and robots will be capable of stepping in for human workers. Occupations that employed about 47 percent of Americans in 2010 scored high enough to rank in the risky category, meaning they could be possible to automate “perhaps over the next decade or two,” their analysis, released in September, showed.

Safe Havens

“My initial reaction was, wow, can this really be accurate?” said Frey, who’s a Ph.D. economist. “Some of these occupations that used to be safe havens for human labor are disappearing one by one.”

Loan officers are among the most susceptible professions, at a 98 percent probability, according to Frey’s estimates. Inroads are already being made by Daric Inc., an online peer-to-peer lender partially funded by former Wells Fargo & Co. Chairman Richard Kovacevich. Begun in November, it doesn’t employ a single loan officer. It probably never will.

The startup’s weapon: an algorithm that not only learned what kind of person made for a safe borrower in the past, but is also constantly updating its understanding of who is creditworthy as more customers repay or default on their debt.

It’s this computerized “experience,” not a loan officer or a committee, that calls the shots, dictating which small businesses and individuals get financing and at what interest rate. It doesn’t need teams of analysts devising hypotheses and running calculations because the software does that on massive streams of data on its own.
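The kind of continuously updating model described here can be illustrated with a toy running estimate. Everything below (the borrower segments, the prior, the pricing rule) is hypothetical; Daric’s actual algorithm is not public.

```python
class CreditModel:
    """Toy running estimate of default risk per borrower segment,
    updated as loans are repaid or defaulted."""
    def __init__(self):
        self.counts = {}  # segment -> [defaults, loans]

    def observe(self, segment, defaulted):
        # Start from a weak prior of 1 default in 2 loans (an assumption)
        d = self.counts.setdefault(segment, [1, 2])
        d[0] += defaulted
        d[1] += 1

    def default_rate(self, segment):
        d = self.counts.get(segment, [1, 2])
        return d[0] / d[1]

    def rate(self, segment, base=0.04, spread=0.20):
        # Price the loan: riskier segments pay a higher interest rate
        return base + spread * self.default_rate(segment)

m = CreditModel()
for _ in range(8):
    m.observe("steady-income", defaulted=False)  # eight repaid loans
m.observe("thin-file", defaulted=True)           # one default
```

As repayments accumulate, the estimated default rate for the “steady-income” segment falls and the quoted interest rate falls with it; no committee is involved.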

Lower Rates

The result: An interest rate that’s typically 8.8 percentage points lower than from a credit card, according to Daric. “The algorithm is the loan officer,” said Greg Ryan, the 29-year-old chief executive officer of the Redwood City, California, company that consists of him and five programmers. “We don’t have overhead, and that means we can pass the savings on to our customers.”

Similar technology is transforming what is often the most expensive part of litigation, during which attorneys pore over e-mails, spreadsheets, social media posts and other records to build their arguments.

Each lawsuit was too nuanced for a standard set of sorting rules, and the string of keywords lawyers suggested before every case still missed too many smoking guns. The reading got so costly that many law firms farmed out the initial sorting to lower-paid contractors.

Training Software

The key to automating some of this was the old adage to show, not tell: have trained attorneys illustrate to the software the kind of documents that make for gold. Programs developed by companies such as San Francisco-based Recommind Inc. then run massive statistical analyses to predict which files expensive lawyers shouldn’t waste their time reading. It took Greene’s team of lawyers 600 hours to get through the 1.3 million documents with the help of Recommind’s software. That task, assuming a speed of 100 documents per hour, could have taken 13,000 hours if humans had to read all of them.
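The time savings quoted above are easy to verify arithmetically (the 100-documents-per-hour pace is the article’s own assumption):

```python
docs = 1_300_000
docs_per_hour = 100

# Hours if humans read every document at the assumed pace
manual_hours = docs / docs_per_hour   # 13,000 hours

# Hours actually spent with the software's ranking
assisted_hours = 600

# Documents the lawyers actually read (2.3% of the collection)
docs_read = round(docs * 0.023)
```

At those numbers the assisted review took roughly one twentieth of the manual estimate, while reading under 30,000 of the 1.3 million documents.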

“It doesn’t mean you need zero people, but it’s fewer people than you used to need,” said Daniel Martin Katz, a professor at Michigan State University’s College of Law in East Lansing who teaches legal analytics. “It’s definitely a transformation for getting people that first job while they’re trying to gain additional skills as lawyers.”

Robot Transporters

Smart software is transforming the world of manual labor as well, propelling improvements in autonomous cars that make it likely machines can replace taxi drivers and heavy truck drivers in the next two decades, according to Frey’s study.

One application already here: Aethon Inc.’s self-navigating TUG robots, which transport soiled linens, drugs and meals in more than 140 hospitals, predominantly in the U.S. When Pittsburgh-based Aethon first installs its robots in a new facility, humans walk the machines around. It would have been impossible to have engineers pre-program all the necessary steps, according to Chief Executive Officer Aldo Zini.

“Every building we encounter is different,” said Zini. “It’s an infinite number” of potential contingencies and “you could never ahead of time try to program everything in. That would be a massive effort. We had to be able to adapt and learn as we go.”

Human-level Cognition

To be sure, employers won’t necessarily replace their staff with computers just because it becomes technically feasible to do so, Frey said. It could remain cheaper for some time to employ low-wage workers than invest in expensive robots. Consumers may prefer interacting with people rather than with self-service kiosks, while government regulators could choose to require human supervision of high-stakes decisions.

Even more, recent advances still don’t mean computers are nearing human-level cognition that would enable them to replicate most jobs. That’s at least “many decades” away, according to Andrew Ng, director of the Stanford Artificial Intelligence Laboratory near Palo Alto, California.

Machine-learning programs are best at specific routines with lots of data to train on and whose answers can be gleaned from the past. Try getting a computer to do something that’s unlike anything it’s seen before, and it just can’t improvise. Neither can machines come up with novel and creative solutions, or learn from a couple of examples the way people can, said Ng.

Employment Impact

“This stuff works best on fairly structured problems,” said Frank Levy, a professor emeritus at the Massachusetts Institute of Technology in Cambridge who has extensively researched technology’s impact on employment. “Where there’s more flexibility needed and you don’t have all the information in advance, it’s a problem.”

That means the positions of Greene and other senior attorneys, whose responsibilities range from synthesizing persuasive narratives to earning the trust of their clients, won’t disappear for some time. Less certain are prospects for those specializing in lower-paid legal work like document reading, or in jobs that involve other relatively repetitive tasks.

As more of the world gets digitized and the cost to store and process that information continues to decline, artificial intelligence will become even more pervasive in everyday life, says Stanford’s Ng.

“There will always be work for people who can synthesize information, think critically, and be flexible in how they act in different situations,” said Ng, also co-founder of online education provider Coursera Inc. Still, he said, “the jobs of yesterday won’t be the same as the jobs of tomorrow.”

Workers will likely need to find vocations involving more cognitively complex tasks that machines can’t touch. Those positions also typically require more schooling, said Frey. “It’s a race between technology and education.”

To contact the reporter on this story: Aki Ito in San Francisco at aito16@bloomberg.net

To contact the editors responsible for this story: Chris Wellisz at cwellisz@bloomberg.net Gail DeGeorge, Mark Rohner

Reflections on trackers…

It’s about healthy living, not quantifying oneself…

http://www.medgadget.com/2014/03/an-interview-with-the-monitored-man-albert-sun.html

An Interview with “The Monitored Man”: Albert Sun

Posted: 13 Mar 2014 12:04 PM PDT

Albert Sun, a young journalist at the New York Times, recently authored an article entitled “The Monitored Man” chronicling his experience using a multitude of health fitness trackers over the last few months. I wanted to ask him about his fitness tracking adventure and gain further insight into this booming sector from a “super user” who at times was simultaneously wearing up to four fitness tracking devices.

Tom Fowler, Medgadget: Albert, tell me about why you decided to put fitness tracking devices to the test.

Albert Sun: I think it started with a really simple graphic that my colleague Alastair put together last year, listing a few interesting wearable health monitors and what things they measured. For that he put together a Google spreadsheet, and we sort of tried to keep it up to date with all the different gadgets as we heard about them. I was constantly adding things to it, and at a point I felt that if I was having this much trouble keeping track of all of them, other people probably were as well. My original idea was actually to put them all to the test on accuracy and chart which ones were the most accurate. I had plans to reverse engineer their drivers and access the raw data they were recording. But once I actually started wearing them I realized that, yes, there was a lot of data, but it was actually the idea of motivation and behavior change, and how you understand the data, that was much more interesting.

 

Medgadget: You mentioned that many trackers were lacking in detecting exertion and activities like biking and fidgeting. Are the device makers missing the point, or are these merely due to current technical limitations?

Albert Sun: It’s definitely due to current technical limitations. If companies could make devices that could track everything perfectly, I think they absolutely would. And I think a lot of people see that kind of tracking as a kind of holy grail and are trying very hard to reach that goal. I’m not so sure that’s a good idea. No tracker is going to be able to fully track everything about you, and we’ve all already got a perfectly good “tracker” that’s wired into every part of our body: our brain. My colleague Gretchen Reynolds writes about that in her article on why she decides to remain a “tech nudie.”

Yes, an objective measure of your activity level is useful, but it’s just one view, and it has to be integrated into the broader subjective view of how you feel.

 

Medgadget: If every CEO of a fitness-tracker company were reading this interview, what tips would you give them?

Albert Sun: I think many of these CEOs are already thinking about the things and experiences I wrote about. From talking to their users they know what experiences people are having, and they’re definitely improving rapidly. Just in the time I’ve been using them they’ve improved a lot.

There are two things that I think they could do that would improve people’s experiences though. First is they could be a little more upfront in their marketing of these devices about what they can and can’t do instead of presenting them as magic.

The other thing that I think would be really helpful would be for them to put some error bars on the data they show and indicate that they are estimates and the true values lie somewhere in a range. I think that would go a long way towards helping people interpret their data in the proper context.

I might be sounding overly pessimistic about activity tracking, but I actually really like these devices and think they’re very cool and useful. But to be very cool and useful I think people have to approach them the right way and that means having realistic expectations of how they work. Otherwise people will be disappointed.

 

Medgadget: Would you say your conclusion “I don’t need a monitor anymore. I’m tracking me.” is a reflection of a large part of the market, in that many will initially use but then no longer have a need for trackers?

Albert Sun: Yes, absolutely. It’s maybe not a permanent thing, but it could be a now and again thing. I mean, are we really expecting people to start now and wear something that tracks their movement continually until they’re in the grave?

The goal here is to be healthier and happier — to live well — not to be perfectly quantified. Once an activity tracker has helped you do that, it should ideally fade into the background to the point where you can almost forget about it. I obviously haven’t been able to do that while I’ve been working on this story; I’ve been juggling a lot of different gadgets and apps and chargers, trying to keep everything straight. It’s quite taxing, and it takes a toll on all the other things that life is about.

Anne Wojcicki lays out 23andMe’s vision…

 

http://www.engadget.com/2014/03/09/future-of-preventative-medicine/

Anne Wojcicki and her genetic sequencing company 23andMe are locked in a battle with the FDA. Even though it can’t report results to customers right now, Wojcicki isn’t letting herself get bogged down in the present. At SXSW 2014 she laid out her vision of the future of preventative medicine, one where affordable genome sequencing comes together with “big data.” In addition to simply harvesting your genetic code, the company is doing research into how particular genes affect your susceptibility to disease or your reaction to treatments. And 23andMe isn’t keeping this information locked down. It has been building APIs that allow it to share the results of its research, as well as the results of your genetic tests, should you wish to.

It’s when that data is combined with other information, say data harvested from a fitness tracker, and put in the hands of engineers and doctors that things get really interesting. In the future she hopes you’ll see companies putting the same effort into identifying and addressing health risks as they do into tracking your shopping habits. Target famously was able to deduce that a woman was pregnant, before she had told her father, based purely on her purchase history. One day that same sort of predictive power could be harnessed to prevent diabetes or lessen the risk of a heart attack. Whether that future is five, 10 or 15 years off is unclear. But if Wojcicki has her way, you’ll be able to pull up health and lifestyle decisions recommended for you with the same ease that you pull up suggested titles on Netflix.

On bureaucracies

The American economist William A. Niskanen considered the organisation of bureaucracies and proposed a budget maximising model now influential in public choice theory. It stated that rational bureaucrats will “always and everywhere seek to increase their budgets in order to increase their own power.”

An unfettered bureaucracy was predicted to grow to twice the size of a comparable firm that faces market discipline, incurring twice the cost.

http://theconversation.com/reform-australian-universities-by-cutting-their-bureaucracies-12781

Reform Australian universities by cutting their bureaucracies

Australian universities need to trim down their bureaucracies.

Universities drive a knowledge economy, generate new ideas and teach people how to think critically. Anything other than strong investment in them will likely harm Australia.

But as Australian politicians are preparing to reform the university sector, there is an opportunity to take a closer look at the large and powerful university bureaucracy.

Adam Smith argued it would be preferable for students to directly pay academics for their tuition, rather than involve university bureaucrats. In earlier times, Oxford dons received all tuition revenue from their students and it’s been suggested that they paid between 15% and 20% for their rooms and administration. Subsequent central collection of tuition fees removed incentives for teachers to teach and led to the rise of the university bureaucracy.

Today, the bureaucracy is very large in Australian universities and only one third of university spending is allocated to academic salaries.

 

The money (in billions) spent by the top ten Australian research universities from 2003 to 2010 (taken from published financial statements). Source: the authors.

 

Across all the universities in Australia, the average proportion of full-time non-academic staff is 55%. This figure is relatively consistent over time and by university grouping (see graph below).

Australia is not alone: data for the United Kingdom shows a similar staffing profile, with 48% classed as academics. A recent analysis of US universities’ spending argues:

Boards of trustees and presidents need to put their collective foot down on the growth of support and administrative costs. Those costs have grown faster than the cost of instruction across most campuses. In no other industry would overhead costs be allowed to grow at this rate – executives would lose their jobs.

We know universities employ more non-academics than academics. But, of course, “non-academic” is a heterogeneous grouping. Many of those classified as “non-academic” directly produce academic outputs, but this cuts both ways, with academics often required to produce bureaucratic outputs.

An explanation for this strange spending allocation is that academics desire a large bureaucracy to support their research efforts and for coping with external regulatory requirements such as the Excellence in Research for Australia (ERA) initiative, the Australian Qualifications Framework (AQF) and the Tertiary Education Quality and Standards Agency (TEQSA).

 

Staffing profile (% of total FTE classed as academic) of Australian universities, 2001-2010, overall and by university groupings/alliances. Source: the authors.

 

Another explanation is that university bureaucracies enjoy being big and engage in many non-academic transactions to perpetuate their large budget and influence.

The theory to support the latter view came from Cyril Northcote Parkinson, a naval historian who studied the workings of the British civil service. While not an economist, he had great insight into bureaucracy and suggested:

There need be little or no relationship between the work to be done and the size of the staff to which it may be assigned.

Parkinson’s Law rests on two ideas: an official wants to multiply subordinates, not rivals; and officials make work for each other. Inefficient bureaucracy is likely not restricted to universities but pervades government and non-government organisations that escape traditional market forces.

Using Admiralty Statistics for the period between 1934 and 1955, Parkinson calculated a mean annual growth rate of spending on bureaucrats to be 5.9%. The top ten Australian research universities between 2003 and 2010 report mean annual growth in spending on non-academic salary costs of 8.8%. After adjusting for inflation the annual growth rate is 5.9%.
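The relationship between the nominal and inflation-adjusted figures can be checked directly. The implied inflation rate below is an inference from the two numbers in the article, not a figure the authors state:

```python
nominal_growth = 0.088  # reported annual growth in non-academic salary spending
real_growth = 0.059     # the article's inflation-adjusted annual growth rate

# Real growth satisfies (1 + nominal) = (1 + real) * (1 + inflation),
# so the implied average inflation rate over the period is:
implied_inflation = (1 + nominal_growth) / (1 + real_growth) - 1
```

That works out to roughly 2.7% average annual inflation, a plausible figure for Australia over 2003-2010, which is consistent with the authors’ adjustment.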

The American economist William A. Niskanen considered the organisation of bureaucracies and proposed a budget maximising model now influential in public choice theory. It stated that rational bureaucrats will “always and everywhere seek to increase their budgets in order to increase their own power.”

An unfettered bureaucracy was predicted to grow to twice the size of a comparable firm that faces market discipline, incurring twice the cost. Some insight and anecdotal evidence to support this comes from a recent analysis of the paperwork required for doctoral students to progress from admission to graduation at an Australian university.

In that analysis, the two authors of this article (Clarke and Graves) found that 270 unique data items were requested, on average, 2.27 times across 13 different forms. This implies the bureaucracy was operating at more than twice the size it needs to be. The university we studied has since slimmed down the process.

Further costs from a large bureaucracy arise because academics are expected to participate in activities initiated by the bureaucracy. These tend to generate low or zero academic output. Some academics also adopt the behaviour of bureaucrats and stop or dramatically scale back their academic work.

The irony is that those in leadership positions, such as heads of departments, are most vulnerable, yet they must have been academically successful to achieve their position.

Evidence of this can be seen from the publication statistics of the professors who are heads of schools among nine of the top ten Australian research universities. Between 2006 and 2011, these senior academics published an average of 1.22 papers per year per person as first author.

This level of output would not be acceptable for an active health researcher at a professor, associate professor or even lecturer level.

The nine heads of school are likely tied up with administrative tasks, and hence their potential academic outputs are lost to signing forms, attending meetings and pushing bits of paper round their university.

If spending on the costs of employing non-academics could be reduced by 50% in line with a Niskanen level of over-supply, universities could employ additional academic staff. A further boost to productivity could be expected as old and new staff benefit from a decrease in the amount of time they must dedicate to bureaucratic transactions.

If all Australian universities adopted the staffing profile of the “Group of 8” institutions, which have the highest percentage of academics (at 51.6%), there would have been up to nearly 6,500 extra academics in 2010.

While no economist would question the need for some administration, there needs to be a focus on incentives to ensure efficient operation. It’s possible to run a tight ship in academic research as shown by Alan Trounson, president of the California Institute for Regenerative Medicine (CIRM).

In 2009, Trounson pledged to spend less than 6% of revenues on administration costs, a figure that is better than most firms competing in markets. So far, this commitment has been met.

It’s clear then that finding solutions to problems in modern Australian universities calls for a better understanding of economics and a reduction in bureaucracy.

Dream Food Label

Nice sounding food labelling system. As Bittman suggests, at least a decade away…

 

http://www.nytimes.com/2012/10/14/opinion/sunday/bittman-my-dream-food-label.html

My Dream Food Label
By Mark Bittman

Published: October 13, 2012

WHAT would an ideal food label look like? By “ideal,” I mean from the perspective of consumers, not marketers.

Right now, the labels required on food give us loads of information, much of it useful. What they don’t do is tell us whether something is really beneficial, in every sense of the word. With a different set of criteria and some clear graphics, food packages could tell us much more.

Even the simplest information — a red, yellow or green “traffic light,” for example — would encourage consumers to make healthier choices. That might help counter obesity, a problem all but the most cynical agree is closely related to the consumption of junk food.

Of course, labeling changes like this would bring cries of hysteria from the food producers who argue that all foods are fine, although some should be eaten in moderation. To them, a red traffic-light symbol on chips and soda might as well be a skull and crossbones. But traffic lights could work: indeed, in one study, sales of red-lighted soda fell by 16.5 percent in three months.

A mandate to improve compulsory food labels is unlikely any time soon. Front-of-package labeling is sacred to big food companies, a marketing tool of the highest order, a way to encourage purchasing decisions based not on the truth but on what manufacturers would have consumers believe.

So think of the creation of a new food label as an exercise. Even if some might call it a fantasy, the world is moving this way. Traffic-light labeling came close to passing in Britain, and our own Institute of Medicine is proposing something similar. The basic question is, how might we augment current food labeling (which, in its arcane detail, serves many uses, including alerting allergic people to every specific ingredient) to best serve not only consumers but all contributors to the food cycle?

As desirable as the traffic light might be, it’s merely a first step toward allowing consumers to make truly enlightened decisions about foods. Choices based on dietary guidelines are all well and good — our health is certainly an important consideration — but they don’t go nearly far enough. We need to consider the well-being of the earth (and all that that means, like climate, and soil, water and air quality), the people who grow and prepare our food, the animals we eat, the overall wholesomeness of the food — what you might call its “foodness” (once the word “natural” might have served, but that’s been completely co-opted), as opposed to its fakeness. (“Foodness” is a tricky, perhaps even silly word, but it expresses what it should. Think about the spectrum from fruit to Froot Loops or from chicken to Chicken McNuggets and you understand it.) These are considerations that even the organic label fails to take into account.

Beyond honest and accurate nutrition and ingredient information, it would serve us well to know at a glance whether food contains trans fats; residues from hormones, antibiotics, pesticides or other chemicals; genetically modified ingredients; or indeed any ingredients not naturally occurring in the food. It would also be nice to be able to quickly discern how the production of the food affected the welfare of the workers and the animals involved and the environment. Even better, it could tell us about its carbon footprint and its origins.

A little of this is covered by the label required for organic food. Some information is voluntarily being provided by producers — though they’re most often small ones — and retailers like Whole Foods. But only when this kind of information is required will consumers be able to express preferences for health, sustainability and fairness through our buying patterns.

Still, one can hardly propose covering the front of packages with 500-word treatises about the product’s provenance. On the other hand, allowing junk food to be marketed as healthy is unacceptable, or at least would be in a society that valued the rights of consumers over those of the corporation. (The “low-fat” claim is the most egregious — plenty of high-calorie, nutritionally worthless foods are in fact fat-free — but it’s not alone.)

All of this may sound like it’s asking a lot from a label, but creating a model wasn’t that difficult. Over the last few months, I’ve worked with Werner Design Werks of St. Paul to devise a food label that, at perhaps little more than a glance (certainly in less than 10 seconds), can tell a story about three key elements of any packaged food and can provide an overall traffic-light-style recommendation or warning.

How such a labeling system could be improved, which agency would administer it (it’s now the domain of the F.D.A.), which producers would be required to use it, whether foods should carry quick-response codes that let your phone read the package and link to a Web site — all of those questions can be debated freely. Suffice it to say we went through numerous iterations to arrive at the label we are proposing. We put it out here not as an end but as a beginning.

Every packaged food label would feature a color-coded bar with a 15-point scale so that almost instantly the consumer could determine whether the product’s overall rating fell between 11 and 15 (green), 6 and 10 (yellow) or 0 and 5 (red). This alone could be enough for a fair snap decision. (We’ve also got a box to indicate the presence or absence of G.M.O.’s.)

We arrive at the score by rating three key factors, each of which comprises numerous subfactors. The first is the obvious “Nutrition,” about which little needs to be said. High sugar, trans fats, the presence of micronutrients and fiber, and so on would all be taken into account. Thus soda would rate a zero and frozen broccoli might rate a five. (It’s hard to imagine labeling fresh vegetables.)

The second is “Foodness.” This assesses just how close the product is to real food. White bread made with bleached flour, yeast conditioners and preservatives would get a zero or one; so would soda; a candy bar high in sugar but made with real ingredients would presumably score low on nutrition but could get a higher score on “foodness”; here, frozen broccoli would rate a four.

The third is the broadest (and trickiest); we’re calling it “Welfare.” This would include the treatment of workers, animals and the earth. Are workers treated like animals? Are animals produced like widgets? Is environmental damage significant? If the answer to those three questions is “yes” — as it might be, for example, with industrially produced chickens — then the score would be zero, or close to it. If the labor force is treated fairly and animals well, and waste is insignificant or recycled, the score would be higher.
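The scoring scheme described over the last few paragraphs reduces to a small function: three 0-5 subscores summed onto the 15-point scale, then banded by the traffic-light thresholds. The subscores passed in below are illustrative (the welfare values in particular are hypothetical):

```python
def label_colour(nutrition, foodness, welfare):
    """Combine the three 0-5 subscores into the 15-point scale and the
    traffic-light band the article describes (0-5 red, 6-10 yellow,
    11-15 green)."""
    for s in (nutrition, foodness, welfare):
        assert 0 <= s <= 5, "each subscore runs from 0 to 5"
    total = nutrition + foodness + welfare
    if total >= 11:
        return total, "green"
    if total >= 6:
        return total, "yellow"
    return total, "red"

# Frozen broccoli rates five on nutrition and four on foodness per the
# article; a welfare score of four here is an invented example.
print(label_colour(5, 4, 4))
```

The hard part, as the op-ed concedes, is not the arithmetic but producing defensible subscores in the first place.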

These are not simple calculations, but neither can one honestly say that they’re impossible to perform. It may well be that there are wiser ways to sort through this information and get it across. The main point here is: let’s get started.


A version of this op-ed appeared in print on October 14, 2012, on page SR6 of the New York edition with the headline: My Dream Food Label.

 

Published: October 13, 2012

The Proposed Nutrition Label: A Quick Read, Out Front

MAMA C’S ORGANIC TOMATO SAUCE This contains organic tomatoes, extra virgin olive oil, and fresh herbs; it’s even refrigerated, so it contains no preservatives.

 

Since Mama C runs an organic operation with a full-time labor force receiving benefits, the score here is superhigh all around, and the label is green.

(Label scale: 0-5 red, 6-10 yellow, 11-15 green.)

CHOCOLATE FROSTED SUPER KRISPY KRUNCHIES Fifty percent sugar; almost all nutrients come from additives. But it does contain 10 percent of the daily allowance of fiber.

 

It’s barely recognizable as food in any near-natural form, and it’s made from hyper-processed commodity crops. However, workers in the plant are full time and receive benefits (and no animals are harmed), so a couple of points there (environmentally, however, the welfare is negative, so these points are mitigated): 2. Thus, red.

 

With US food labeling, the times, they are a-changin'…

Impressive changes in US food labeling.

Introducing the label, Mrs. Obama said, “Our guiding principle here is very simple: that you as a parent and a consumer should be able to walk into your local grocery store, pick up an item off the shelf, and be able to tell whether it’s good for your family.”

http://www.nytimes.com/2014/03/05/opinion/bittman-some-progress-on-eating-and-health.html

The Opinion Pages|CONTRIBUTING OP-ED WRITER

Some Progress on Eating and Health

For those concerned about eating and health, the glass was more than half full last week; some activists were actually exuberant. First, there was evidence that obesity rates among pre-school children had fallen significantly. Then Michelle Obama announced plans to further reduce junk food marketing in public schools. Finally, she unveiled the Food and Drug Administration’s proposed revision of the nutrition label that appears on (literally, incredibly) something like 700,000 packaged foods (many of which only pretend to be foods); the new label will include a line for “added sugars” and makes other important changes, too.

If the 43 percent plunge in obesity in young children holds true, it’s fantastic news, a tribute to the improved Special Supplemental Nutrition Program for Women, Infants and Children (WIC), which encourages the consumption of fruits and vegetables; to improved nutrition guidelines; to a slight reduction in the marketing of junk to children; and probably to the encouragement of breast-feeding. Practically everyone in this country who speaks English or Spanish has heard or read the message that junk food is bad for you, and that patterns set in childhood mostly determine eating habits for a lifetime.

None of this happened by accident, and the lesson is that policy works.

The further limitations on marketing junk are more complicated. Essentially, producers won’t be able to promote what they already can’t sell (per new Department of Agriculture regulations), meaning that vending machines or scoreboards cannot encourage the consumption of sugar-sweetened beverages. (Promotion of increasingly beleaguered diet sodas would be allowed.)

Mrs. Obama’s tendency to see the reformulation of packaged foods as an important goal is on display here: Snacks sold in schools (both in vending machines and out) will have to meet one of four requirements, like containing at least 50 percent whole grain or a quarter-cup of fruits or vegetables.

These proposed rules are better than nothing but filled with loopholes. Manufacturers will quickly figure out how to meet the new standards, and the improvements, though not insignificant, will not go far in teaching kids that the best snack is an apple or a handful of nuts. (One way to really clobber junk food would be to prevent companies from taking tax deductions on the marketing of unhealthy foods, a move that’s in a bill sponsored by Congresswoman Rosa DeLauro of Connecticut.)

Still. It beats calling ketchup a vegetable.

The label change is huge. Yes: It could be huge-er. Yes: It’s long overdue. Yes: It may be fought by industry and won’t be in place for a long time. And yes: The real key is to be eating whole foods that don’t need to be labeled.

But by including “added sugars” on the label, the F.D.A. is siding with those who recognize that science shows that added sugars are dangerous. “This is an acknowledgment by the agency that sugar is a big problem,” says the former F.D.A. commissioner David Kessler, who presided over the development of the last label change, 20 years ago. “It will allow the next generation to grow up with far more awareness.”

Big Food has long maintained that it doesn’t matter where sugar or indeed calories come from — that they’re all the same. But “added sugars” exposes the industry’s strategy of pumping up the volume on “palatability,” making ketchup, yogurt and granola bars, for example, as sweet and high-calorie as jam, ice cream and Snickers. Added sugar turns sparkling water into soda and food-like objects into candy. Added sugar, if you can forgive the hyperbole, is the enemy. This is not to say you shouldn’t eat a granola bar, but if you know what’s in it you’re less likely to think of it as “health food.”

There are a couple of other significant changes, including more realistic “serving sizes” (a serving of ice cream will now be a more realistic cup instead of a half-cup, for example), the deletion of the “calories from fat” line, which recognizes that not all fats are “bad,” and some changes in daily recommended values for various nutrients.

Mrs. Obama, who is sometimes seen (by me among many others) as overly industry-friendly, was behind the push for these changes, or at least highly supportive of them. And she deserves credit: It’s a victory, and no one on the progressive side of this struggle should see it as otherwise.

The label is hardly messianic. In fact, the F.D.A. tacitly acknowledges this by offering an alternative, stronger label, which approaches the kind of “traffic light” labeling I’ve advocated for, and which there’s evidence to support. The alternative has four sections, including “Avoid Too Much” and “Get Enough”; the first includes added sugars and trans fat, for example, and the second, fiber and vitamin D.

Michael Taylor, the F.D.A.’s deputy commissioner for foods and veterinary medicine — and the guy who supervised the new label’s development — told me that the alternative label is essentially a way to further “stimulate comments.” It may be that it’s also a demonstration of the agency’s will, designed to show industry how threatening things could get so Big Food will swallow the primary label without much complaint.

Although the ultimate decision is the F.D.A.’s, the Grocery Manufacturers’ Association statement last week said in part, “It is critical that any changes are based on the most current and reliable science.” These are, and marketers are going to have a tough time claiming otherwise. In other words, we’re going to see some form of new and stronger label, period.

Introducing the label, Mrs. Obama said, “Our guiding principle here is very simple: that you as a parent and a consumer should be able to walk into your local grocery store, pick up an item off the shelf, and be able to tell whether it’s good for your family.”

This label moves in that direction, but it could be much more powerful. Kessler would like to see a pie chart on the front of the package: “That would help people know what’s real food and what’s not.” Michael Pollan also suggests front-of-the-box labeling: “I think the U.K. has the right idea with their stoplight panel on the front of packages; only a small percentage of shoppers get to the nutritional panel on the back.” And the N.Y.U. nutrition professor Marion Nestle (who called this label change “courageous”) says that “A recommended upper limit for added sugars would help put them in context; I’d like to see that set at 10 percent of calories or 50 grams (200 calories) in a 2,000-calorie diet.” (I wrote about my own dream label, which includes categories that probably won’t be considered for another 10 years — if ever — back in 2012.)

What else is wrong? The label covers a lot of food, but it has no effect on restaurant food, takeout, most prepared food sold in bulk (do you have any idea what’s in that fried chicken at the supermarket deli counter, for example?) or alcohol.

The Obama administration and the F.D.A. have made a couple of moves here that might be categorized as bold, but they could have done so three or four years ago; these are regulations that can be built upon, and do not require Congressional approval. But by the time they’re in effect it may be too late for this administration to take them to the next level.

In short, it’s not a case of too-little-too-late but one of “it could’ve been more and happened sooner.”

But that’s looking backward instead of forward. If we see a decline in obesity rates, more curbs on food marketing and greater transparency in packaged food, that’s progress. Let’s be thankful for it, then get back to work pushing for more.

A couple of terrific safety quality presentations

 

Rene Amalberti to a Geneva Quality Conference:

b13-rene-amalberti

http://www.isqua.org/docs/geneva-presentations/b13-rene-amalberti.pdf?sfvrsn=2

 

A somewhat random set of 80 slides, but often good

Clapper_ReliabilitySlides

http://net.acpe.org/interact/highReliability/References/powerpoints/Clapper_ReliabilitySlides.pdf

Big data in healthcare

A decent sweep through the available technologies and techniques with practical examples of their applications.


Some healthcare practitioners smirk when you tell them that you used some alternative medication such as homeopathy or naturopathy to cure some illness. However, in the longer run it sometimes really is a much better solution, even if it takes longer, because it encourages and enables the body to fight the disease naturally, and in the process build up the necessary long term defence mechanisms. Likewise, some IT practitioners question it when you don’t use the “mainstream” technologies…  So, in this post, I cover the “alternative” big data technologies. I explore the different types of big data datatypes and the NoSQL databases that cater for them. I illustrate the types of applications and analyses that they are suitable for using healthcare examples.

 

Big data in healthcare

Healthcare organisations have become very interested in big data, no doubt fired on by the hype around Hadoop and the ongoing promises that big data really adds big value.

However, big data really means different things to different people. For example, for a clinical researcher it is unstructured text on a prescription, for a radiologist it is the image of an x-ray, for an insurer it may be the network of geographical coordinates of the hospitals they have agreements with, and for a doctor it may refer to the fine print on the schedule of some newly released drug. For the CMO of a large hospital group, it may even constitute the commentary that patients are tweeting or posting on Facebook about their experiences in the group’s various hospitals. So, big data is a very generic term for a wide variety of data, including unstructured text, audio, images, geospatial data and other complex data formats, which previously were not analysed or even processed.

There is no doubt that big data can add value in the healthcare field. In fact, it can add a lot of value, partly because of the many different types of big data that are available in healthcare. However, for big data to contribute significant value, we need to be able to apply analytics to it in order to derive new and meaningful insights. And in order to apply those analytics, the big data must be in a processable and analysable format.

Hadoop

Enter yellow elephant, stage left. Hadoop, in particular, is touted as the ultimate big data storage platform, with very efficient parallelised processing through the MapReduce distributed “divide and conquer” programming model. However, in many cases it is very cumbersome to store a particular healthcare dataset in Hadoop and then extract analytical insights from it using MapReduce. So even though Hadoop is an efficient storage medium for very large data sets, it is not necessarily the most useful storage structure when applying complex analytical algorithms to healthcare data. Quick cameo appearance. Exit yellow elephant, stage right.

There are other “alternative” storage technologies available for big data as well – namely the so-called NoSQL (not only SQL) databases. These specialised databases each support a specialised data structure, and are used to store and analyse data that fits that particular data structure. For specific applications, these data structures are therefore more appropriate to store, process and extract insights from data that suit that storage structure.

Unstructured text

A very large portion of big data is unstructured text, and this definitely applies to healthcare too. Even audio eventually gets transformed to unstructured text. The NoSQL document databases are very good for storing, processing and analysing documents consisting of unstructured text of varying complexity, typically contained in XML, JSON or even Microsoft Word or Adobe PDF files. Examples of the document databases are Apache CouchDB and MongoDB. The document databases are good for storing and analysing prescriptions, drug schedules, patient records, and the contracts written up between healthcare insurers and providers.
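The essence of the document model is schema-free records whose fields can vary per document, queried by content. A toy sketch in plain Python dictionaries illustrates the idea (a real deployment would use CouchDB or MongoDB; the field names here are invented examples):

```python
# Toy illustration of the document model: schema-free records with
# varying fields, queried by content rather than by a fixed schema.
prescriptions = [
    {"patient": "A-1001", "drug": "metformin", "dose_mg": 500,
     "notes": "take with food"},
    {"patient": "A-1002", "drug": "warfarin", "dose_mg": 5,
     "interactions": ["aspirin", "ibuprofen"]},  # fields vary per document
]

def find(docs, **criteria):
    """Return documents whose fields match all the given criteria."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in criteria.items())]

print(find(prescriptions, drug="warfarin"))
```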

On textual data you can perform lexical analytics such as word frequency distributions, co-occurrence analysis (counting how often particular words appear together in the same sentence, paragraph or even document), searches for sentences or paragraphs containing particular words within a given distance of each other, and other text analytics operations such as link and association analysis. The overarching goal is, essentially, to turn unstructured text into structured data by applying natural language processing (NLP) and analytical methods.

For example, if a co-occurrence analysis found that BRCA1 and breast cancer regularly occurred in the same sentence, one might infer a relationship between breast cancer and the BRCA1 gene. Nowadays co-occurrence in text is often used as a simple baseline when evaluating more sophisticated systems.
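Sentence-level co-occurrence counting of this kind needs nothing more than tokenisation and pair counting. A minimal sketch, using a three-sentence toy corpus in place of real biomedical literature:

```python
# Sentence-level co-occurrence counts: how often do two terms appear
# in the same sentence? A toy corpus stands in for real literature.
import re
from collections import Counter
from itertools import combinations

corpus = ("Mutations in BRCA1 raise breast cancer risk. "
          "BRCA1 screening is advised for some families. "
          "Breast cancer treatment varies by subtype.")

pairs = Counter()
for sentence in re.split(r"\.\s*", corpus.lower()):
    words = set(re.findall(r"[a-z0-9]+", sentence))
    for a, b in combinations(sorted(words), 2):
        pairs[(a, b)] += 1

print(pairs[("brca1", "cancer")])   # sentences mentioning both terms
print(pairs[("breast", "cancer")])
```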

Rule-based analyses make use of some a priori information, such as language structure, language rules, specific knowledge about how biologically relevant facts are stated in the biomedical literature, the kinds of relationships or variant forms that they can have with one another, or subsets or combinations of these. Of course the accuracy of a rule-based system depends on the quality of the rules that it operates on.
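A hand-written extraction pattern is the simplest form of such a rule. The sketch below pulls drug-dose statements out of free text with a single regular expression; real systems layer many such rules, and the accuracy is only as good as the rules themselves (the note text is invented):

```python
# A minimal rule-based extractor: one hand-written pattern that pulls
# drug-dose statements out of free clinical text.
import re

DOSE_RULE = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+(?P<amount>\d+)\s*(?P<unit>mg|ml)\b",
    re.IGNORECASE)

note = "Started metformin 500 mg twice daily; stop ibuprofen 200 mg."
matches = [m.groupdict() for m in DOSE_RULE.finditer(note)]
print(matches)
```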

Statistical or machine-learning–based systems operate by building classifications, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. These are very useful to turn unstructured text into an analysable dataset. However, these systems normally require a substantial amount of already labelled training data. This is often time-consuming to create or expensive to acquire.
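A miniature example in the spirit described: a naive Bayes classifier over word counts, trained on a few hand-labelled sentences (the labels and sentences are invented for illustration; real systems need far more training data):

```python
# Naive Bayes text classification from a tiny hand-labelled training set.
import math
from collections import Counter, defaultdict

train = [("refer patient to cardiology for chest pain", "referral"),
         ("dispense 500 mg amoxicillin capsules", "prescription"),
         ("urgent referral to oncology requested", "referral"),
         ("prescribe insulin 10 units nightly", "prescription")]

word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label with the highest (Laplace-smoothed) log-probability."""
    vocab = {w for c in word_counts.values() for w in c}
    best = None
    for label in label_counts:
        total = sum(word_counts[label].values())
        logp = math.log(label_counts[label] / len(train))
        for w in text.split():
            logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if best is None or logp > best[0]:
            best = (logp, label)
    return best[1]

print(classify("referral to cardiology"))
```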

However, it’s important to keep in mind that much of the textual data requires disambiguation before you can process, make sense of, and apply analytics to it. Ambiguity, such as multiple mappings between words and their meanings or categories, makes it very difficult to interpret and analyse textual data accurately. Acronym, slang and shorthand resolution, interpretation, standardisation, homographic resolution, taxonomies and ontologies, textual proximity, cluster analysis and various other inferences and translations all form part of textual disambiguation. Establishing and capturing context is also crucial for unstructured text analytics – the same text can have radically different meanings and interpretations, depending on the context in which it is used.

As an example of the ambiguities found in healthcare, “fat” is the official symbol of Entrez Gene entry 2195 and an alternate symbol for Entrez Gene entry 948. The distinction is not trivial – the first is associated with tumour suppression and with bipolar disorder, while the second is associated with insulin resistance and quite a few other unrelated phenotypes. If you get the interpretation wrong, you can miss or erroneously extract the wrong information.

Graph structures

An interesting class of big data is graph structures, where entities are related to each other in complex relationships like trees, networks or graphs. This type of data is typically neither large, nor unstructured, but graph structures of undetermined depth are very complex to store in relational or key-value pair structures, and even more complex to process using standard SQL. For this reason this type of data can be stored in a graph-oriented NoSQL database such as Neo4J, InfoGrid, InfiniteGraph, uRiKa, OrientDB or FlockDB.

Examples of graph structures include the networks of people that know each other, as you find on LinkedIn or Facebook. In healthcare a similar example is the network of providers linked to a group of practices or a hospital group. Referral patterns can be analysed to determine how specific doctors and hospitals team together to deliver improved healthcare outcomes. Graph-based analyses of referral patterns can also point out fraudulent behaviour, such as whether a particular doctor is a conservative or a liberal prescriber, and whether he refers patients to a hospital that charges more than double than the one just across the street.
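At its simplest, a referral network is an adjacency list from doctors to the hospitals they refer to. A hedged sketch of the kind of analysis described (the doctor names, counts and tariffs are all invented):

```python
# Referral patterns as an adjacency list: what share of each doctor's
# referrals goes to hospitals charging more than double the cheapest tariff?
referrals = {               # doctor -> {hospital: referral count}
    "dr_adams": {"St Mary": 40, "City General": 2},
    "dr_baker": {"St Mary": 18, "City General": 21},
}
tariff = {"St Mary": 9500, "City General": 4200}   # average charge per case

shares = {}
threshold = 2 * min(tariff.values())
for doctor, targets in referrals.items():
    total = sum(targets.values())
    costly = sum(n for h, n in targets.items() if tariff[h] > threshold)
    shares[doctor] = round(costly / total, 2)

print(shares)   # share of referrals going to the expensive hospital
```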

Another useful graph-based analysis is tracing the spread of a highly contagious disease through groups of people who were in contact with each other. An infectious disease clinic, for instance, should strive to trace and test as much of such a contact network as possible, while keeping the actual infection rate across it low.
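Contact tracing of this sort is a breadth-first traversal: everyone reachable from an index case, grouped by the number of contact "hops". A minimal sketch over an invented contact network:

```python
# Breadth-first traversal of a contact network: group everyone reachable
# from an index case by contact distance ("hops").
from collections import deque

contacts = {"p0": ["p1", "p2"], "p1": ["p3"], "p2": ["p3", "p4"],
            "p3": [], "p4": ["p5"], "p5": []}

def exposure_rings(start):
    seen, rings = {start}, {0: [start]}
    queue = deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        for nxt in contacts.get(person, []):
            if nxt not in seen:
                seen.add(nxt)
                rings.setdefault(hops + 1, []).append(nxt)
                queue.append((nxt, hops + 1))
    return rings

print(exposure_rings("p0"))
```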

A more deep-dive application of graph-based analytics is to study network models of genetic inheritance.

Geospatial data

Like other graph-structured data, geospatial data itself is pretty structured – locations can simply be represented as coordinate pairs. However, when analysing and optimising ambulance routes of different lengths, for example, the data is best stored and processed using a graph structure.
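Route optimisation on such a graph is a shortest-path problem. A sketch using Dijkstra's algorithm over road-segment travel times (the nodes and times are invented):

```python
# Ambulance routing as a weighted graph: Dijkstra's shortest path over
# road-segment travel times (minutes).
import heapq

roads = {"base": {"A": 4, "B": 2}, "A": {"hospital": 5},
         "B": {"A": 1, "hospital": 8}, "hospital": {}}

def fastest(start, goal):
    """Return the minimum travel time from start to goal, or None."""
    queue, best = [(0, start)], {start: 0}
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        for nxt, cost in roads[node].items():
            if t + cost < best.get(nxt, float("inf")):
                best[nxt] = t + cost
                heapq.heappush(queue, (best[nxt], nxt))
    return None

print(fastest("base", "hospital"))   # base -> B -> A -> hospital
```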

Geospatial analyses are also useful for hospital and practice location planning. For example, Epworth HealthCare group teamed up with geospatial group MapData Services to conduct an extensive analysis of demographic and medical services across Victoria. The analysis involved sourcing a range of data including Australian Bureau of Statistics figures around population growth and demographics, details of currently available health services, and the geographical distribution of particular types of conditions. The outcome was that the ideal location and services mix for a new $447m private teaching hospital should be in the much smaller city of Geelong, instead of in the much larger but services-rich city of Melbourne.

Sensor data

Sensor data is also normally quite structured, with an aspect being measured, a measurement value and a unit of measure. The complexity comes in because for each patient or each blood sample test you often have a variable record structure, with widely different aspects being measured and recorded. Some sources of sensor data also produce large volumes of data at high rates. Sensor data is often best stored in key-value databases, such as Riak, DynamoDB, Redis, Voldemort and, sure, Hadoop.

Biosensors are now used to enable better and more efficient patient care across a wide range of healthcare operations, including telemedicine, telehealth, and mobile health. Typical analyses compare related sets of measurements for cause and effect, reaction predictions, antagonistic interactions, dependencies and correlations.

For example, biometric data, which includes data such as diet, sleep, weight, exercise, and blood sugar levels, can be collected from mobile apps and sensors. Outcome-oriented analytics applied to this biometric data, when combined with other healthcare data, can help patients with controllable conditions improve their health by providing them with insights on their behaviours that can lead to increases or decreases in the occurrences of diseases. Data-wise healthcare organisations can similarly use analytics to understand and measure wellness, apply patient and disease segmentation, and track health setbacks and improvements. Predictive analytics can be used to inform and drive multichannel patient interaction that can help shape lifestyle choices, and so avoid poor health and costly medical care.
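The variable record structure described above can be sketched directly: each reading carries whatever fields its sensor produced, and analytics filter for the aspect of interest. A toy example with invented values, computing one patient's average glucose reading:

```python
# Sensor readings as records with variable fields, plus a simple
# per-patient aggregate: mean glucose over the available readings.
from statistics import mean

readings = [
    {"patient": "A", "glucose_mmol": 5.4, "weight_kg": 81},
    {"patient": "A", "glucose_mmol": 6.1},            # fields vary per reading
    {"patient": "B", "sleep_hours": 7.5},
    {"patient": "A", "glucose_mmol": 5.9, "steps": 10400},
]

glucose_a = [r["glucose_mmol"] for r in readings
             if r["patient"] == "A" and "glucose_mmol" in r]
print(round(mean(glucose_a), 2))
```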

Concluding remarks

Although there are merits in storing and processing complex big data, we need to ensure that the type of analytical processing possible on the big data sets leads to valuable enough new insights. The way in which the big data is structured often determines the type of analytics that can be applied to it. Often, too, if the analytics are not properly applied to big data integrated with existing structured data, the results are not as meaningful and valuable as expected.

We need to be cognisant of the fact that there are many storage and analytics technologies available. We need to apply the correct storage structure that matches the data structure and thereby ensure that the correct analytics can be efficiently and correctly applied, which in turn will deliver new and valuable insights.