Category Archives: complex adaptive systems

How we forget – dentate gyrus neurogenesis is the key!

 

http://theconversation.com/neuron-study-helps-explain-why-we-forget-26367

Neuron study helps explain why we forget

Childhood memories seem few and far between – if they still exist at all. So why can’t we dig them up as adults? Rob./Flickr, CC BY-NC

Memories from early childhood are notoriously elusive but why can’t we recall our most formative experiences? New research suggests it could be a case of the old making way for the new – neurons, that is.

A study, published today in Science, has found that neurogenesis – the generation of new neurons – regulates forgetting in adulthood and infancy and could significantly contribute to the phenomenon of “infantile amnesia”.

Throughout life, new neurons are continually generated in the dentate gyrus, part of the brain’s hippocampus. This is one of only two areas in the mammalian brain that consistently generates neurons after infancy, aiding the formation of new memories of places and events.

These new neurons compete for established neuronal connections, altering pre-existing ones. By squeezing their way into these networks, new neurons disrupt old memories, leading to their degradation and thus contributing to forgetting.

Neurogenesis is particularly rampant in humans during infancy but declines dramatically with age. So researchers hypothesised that this increased disruption to hippocampal memories during childhood renders them inaccessible in adulthood.

Rodent recollections

To investigate the correlation between neurogenesis and forgetting, a team from the University of Toronto conducted a series of tests on mice, guinea pigs and a type of small rodent called degus.

First, a group of infant and adult mice were trained to fear a certain environment through the use of mild electric foot shocks.

Some of the adult mice were then provided access to running wheels, an activity that has been shown to boost neurogenesis. When returned to the initial environment, the adult mice who used the running wheels had largely forgotten their fear of the electric shocks, while those without the wheels maintained an association between the space and fear.

From the group of infant mice a number were given drugs to slow the rate of neurogenesis to see if decreasing the generation of new neurons mitigated the forgetting normally observed in infant mice. In accordance with the researchers’ hypothesis, the ability of these animals to retain memories improved in comparison to their untreated counterparts.

The study then moved to rodents whose infancy differs distinctly from that of mice – and humans: guinea pigs and degus. These species have a shorter period of postnatal hippocampal neurogenesis because they are more neurologically mature at birth, which gives them extended memory retention as infants. Those animals were therefore given drugs to artificially increase neurogenesis – which resulted in forgetting.

Psychologist Dr Amy Reichelt, from the University of New South Wales, said it was good the study used infant guinea pigs and degus.

“These animals are born in a ‘precocious’ way – they are basically miniature adults – able to run about independently, as opposed to mice, rats and humans who are vulnerable and dependent at birth,” she said.

“In young animals where neurogenesis is at a high level, memory circuits are constantly changing, so this supports that certain memories are ‘pruned’ out and thus forgotten – supporting the notion of infantile amnesia.”

How could you forget?

Previous studies have examined the relationship between hippocampal neurogenesis and memory, with a focus on its importance in the consolidation of memories in adult animals. But they have not considered how neurogenesis can also jeopardise memory retention.

Behavioural psychologist Dr Jee Hyun Kim, Head of the Developmental Psychobiology Lab at Melbourne’s Florey Institute of Neuroscience and Mental Health, said: “It has long been speculated that the ‘immaturity’ of the hippocampus may be responsible for infantile amnesia. Back in the days ‘immaturity’ was interpreted as dysfunctional, or low in function.

“However, recent studies speculated that immaturity can also occur in the form of hyper functionality. This study shows that the extreme plastic nature of our brains early in life can be the reason why we forget quickly episodic memories happening early in life.”

Infantile amnesia is not restricted to hippocampus-dependent memories in humans and animals. Dr Kim said it was likely that neurogenesis formed only a part of the story.

“I wouldn’t be surprised if we find undiscovered neurogenesis in other parts of the brain,” she said.

A spotless mind

But does this research hint at ways of improving memory retention in the future?

“It would not be feasible to discourage neurogenesis and reduce forgetting of existing memories,” Dr Kim said, “as adult neurogenesis has a well-established link to depression (low neurogenesis means high depression)”.

Surprisingly, it’s the other side of the coin that promises more potential opportunities. Harnessing neurogenesis to destabilise pre-existent memories could have its own benefits. Dr Kim said depressed or anxious people may want to forget and focus on creating better memories and/or thought patterns.

This can be especially constructive for children who experience trauma in early life, Dr Reichelt said.

“Increasing neurogenesis could be a useful therapy to treat or prevent the onset of post-traumatic stress disorder,” she said.

LNL: The Reading Brain – Proust and the Squid

 

 

http://www.abc.net.au/radionational/programs/latenightlive/the-reading-brain/3276794

The reading brain

Wednesday 2 April 2008 10:40PM

The development of reading brought radical changes to the functioning of the human brain, as well as to the evolution of human society.

What does our move into a digital and visual culture mean for the brain and its capacity for transformation?

Guests

Maryanne Wolf
Professor of Child Development and Director of the Center for Reading and Language Development at Tufts University, Boston.

Publications

Title
Proust and the Squid: The Story and Science of the Reading Brain
Author
Maryanne Wolf
Publisher
HarperCollins

Credits

Researcher
Sarah Kanowski

Geraldine and Piketty

Geraldine completes a tight interview with Piketty to pull out the main themes of his work… capital accrues wealth faster than the waged, so tax capital (vs income) and pay a lot more attention to inequality.

Implementation of the ideas hinges on a lot of international cooperation, peace, love and mungbeans, but it remains a compelling and disruptive concept…

Check this awesome Chomsky comment:

Ne Obliviscaris :

12 Apr 2014 8:14:59am

People read snippets of Adam Smith, the few phrases they teach in school. Everybody reads the first paragraph of The Wealth of Nations where he talks about how wonderful the division of labor is. But not many people get to the point hundreds of pages later, where he says that division of labor will destroy human beings and turn people into creatures as stupid and ignorant as it is possible for a human being to be. And therefore in any civilized society the government is going to have to take some measures to prevent division of labor from proceeding to its limits.

– Noam Chomsky on Adam Smith and the Wealth of Nations

http://www.abc.net.au/radionational/programs/saturdayextra/capital-in-the-21st-century/5362266

21st century capital

Saturday 12 April 2014 8:05AM

French economist Thomas Piketty has spent fifteen years collecting and analysing incomes reported on tax returns over the last 100 years to predict that the world is heading towards inequality rates not seen since the 19th century, unless there is global action to narrow the divide.

His book, Capital in the twenty-first century, has been described by a former World Bank senior economist as “one of the watershed books in economic thinking” and The Economist magazine wrote it could change the way people think about the past two centuries of economic history.

Guests

Thomas Piketty
Professor at the Paris School of Economics

Publications

Title
Capital in the twenty-first century
Author
Thomas Piketty (translated by Arthur Goldhammer)
Publisher
Belknap Press of Harvard University Press

Credits

Presenter
Geraldine Doogue
Producer
Kate MacDonald

Comments (10)



  • Ne Obliviscaris :

    12 Apr 2014 8:14:59am

    People read snippets of Adam Smith, the few phrases they teach in school. Everybody reads the first paragraph of The Wealth of Nations where he talks about how wonderful the division of labor is. But not many people get to the point hundreds of pages later, where he says that division of labor will destroy human beings and turn people into creatures as stupid and ignorant as it is possible for a human being to be. And therefore in any civilized society the government is going to have to take some measures to prevent division of labor from proceeding to its limits.

    – Noam Chomsky on Adam Smith and the Wealth of Nations


  • david hawcroft :

    12 Apr 2014 10:04:24am

    It took 20 years of study to conclude that the rich get richer and the poor get poorer?

    In case there’s anyone doesn’t know it’s an axiom – amongst the poor, who know from bitter experience going back generations.

    Have you ever played Monopoly? The poor simply don’t exist, really. It’s a game between landowners, capitalists. And what happens? The capital finally concentrates in one player.

    Economics 101 : ‘the rational investor will always seek to maximise profit’. Whereas the rational human being seeks to maximise humanity and love of friends and family at the expense of profit.

    So in the end who gets most of which?

    Which calls into question the definition of ‘profit’ which is a concept that should not be confined to money.

    And calls into question the concept of ‘humanity’ which calls into question the concept of ‘civilisation’.

    Clearly our civilisation is being run as though it were a business with economic rationalism the guiding force and monetary profit the great light in the sky.

    All wrong. Needs rethinking. Needs philosophy. Political parties – as said somewhere I think this morning in this segment or somewhere – currently without any philosophy whatever.

    At bottom what’s been lost is humanity.

    You don’t run humanity as a business.

    You don’t measure what profits humans in dollar terms.

    We’re building a machine and populating it with robots – us. Daleks. We’re all becoming Daleks in a Dalek world.

    Are the rich the only ones who can escape this and live human lives free of the economic bondage and the madness of a robot world? No. They are the ones that lost and went under first.

    It is our blindness and stupid belief that they are ‘rich’, ‘succesful’, ‘powerful’, ‘safe’ etc.. etc.. that leads us to wish to emulate them, follow them, be them…

    So we bend to our tasks and forsake our humanity and strive, strive, strive to become like those sorry creatures..

    bloody shame, eh?


    • seyre :

      16 Apr 2014 1:59:42pm

      wonderful. YES! well said


    • Bob Elliston :

      18 Apr 2014 1:52:35am

      Thanks David.
      You are quite right.
      I’m reminded of Matthew 16:26:
      “For what is a man profited, if he shall gain the whole world, and lose his own soul? Or what shall a man give in exchange for his soul?”
      This dichotomy between the rich and the poor has troubled us for at LEAST two thousand years.
      Time for a new economic system, one that is centred on fairness, justice and sustainability.


  • Mike Ballard :

    12 Apr 2014 10:57:48am

    I see no political will on the part of those who appropriate the wealth which the bottom 90% produce to allow their gains to be redistributed through a tax on their accumulated wealth. Furthermore, history has demonstrated that as soon as politicians suggest such a tax change, they are hounded out of office through a flurry of public relations propaganda directed at workers anxious about their job security, just as Kevin Rudd was after he introduced the mining tax.

    Julia Gillard had Rudd’s tax renegotiated by a Labor right-winger, Martin Ferguson, and what was agreed to by the mining capitalists was the toothless tax we still have today; but which shall be axed after the new Tory dominated Senate convenes in July.


  • Cedric Beidatsch :

    13 Apr 2014 10:14:59am

    I stress these comments come from the radio interview, not the book, which I have not yet read (or even seen in the stores!). Piketty shows that inequality increases under capitalism as the owners of capital accrue wealth at a greater rate than wage earners. This did not occur in the period 1945–1973, when high growth rates were experienced and inequality decreased. Piketty concludes that there is no “logical reason” why inequality should increase like it does and that what “we” need to do is find institutions on a global scale that can, for example, progressively tax capital to reduce this inequality gap.

    I have no argument with the statistics that illustrate the growth in inequality, but would suggest that rather than seeing this as a return to some mythical nineteenth century “hierarchical society”, this phenomenon is in fact about 500 years old and is inherent in the structure of capitalism itself. Piketty simply has had too short a time horizon for his research. If we view capital in a proper historical perspective, the 25 years post WWII stand out as an anomaly, not a normal to which we can easily return.

    The explanation for the post-war social democratic consensus should then be sought in specific historical circumstances. I would suggest there are the following: 1) the massive destruction of capital in the period 1914–1945; 2) the strength and power of working class struggle from 1917 on that put capital on the defensive; 3) the absence of any real competitors to American capital after 1945 until European and Japanese capital rebuilt by ca. 1965; 4) the hyper-exploitation of the Third World, which does not even get a look-in in Piketty’s analysis (as far as I can determine anyway).

    What Piketty overlooks totally is the issue of class and the power of classes. Post 1945 the working class were strong and were able to wring a reformist economic agenda from the capital owning class via the state, which could be granted because the specific global economic conditions supported a high rate of profit that compensated for progressive taxation in the developed world. The moment that particular combination of historical circumstances came to an end, between 1965 and 1973, the capital owning class went on the offensive to restructure the game.

    The capitalist class are in the present conjuncture simply far more powerful than the working class, and there is no neutral way to impose the kind of institutions that Piketty suggests. Politics is not the realm of dispassionate reason but of class conflict, and winner takes all. Piketty’s research and stats will be useful; his proposed remedies a chimera. Without a really strong working class offensive, or the kind of destruction of capital produced by the Great Depression and two world wars, the rich just keep getting richer and the rest of us work to make them richer.


  • Pat :

    13 Apr 2014 4:12:53pm

    US ideologue economists are ‘revered’, unlike in France because they are serving to retail and legitimize cultivated triumphalist neoliberal economic rationalism (engendered via Hayek & in Friedman’s Chicago School lab) now become the only economics, the lingua franca under the global empire of conglomerated corporate capitalism. An elitist and rogue ideology, intentionally dissociated from and privileged above other social sciences. It is a purposefully designed system of exploitation for syphoning real wealth into fewer and fewer hands…..the cultivated “vampire squid” feeding the 1%. It is the functioning machine producing deliberate and massive inequality which runs the corporate empire (“the old industrial military complex”) and which occupies governments of the European “democratic” model via the paradigm of the revolving door between the various Wall Streets and Whitehouses. And globally via the architecture of the World Bank, IMF etc and a dysfunctional UN. The US CEO of this market empire has the NSA and the world’s biggest nuclear arsenal at his disposal. Why would this emperor supreme of crypto-fascism willingly, magnanimously (considering his late 20th century history of covert and overt operations, wars of aggression, assassinations/exercises of soft power etc) hand over this power and share his wealth without a fight after all the trouble he’s gone to in securing it? Koombyeya it won’t be.


  • Bryan Kavanagh :

    14 Apr 2014 2:45:39pm

    Good on you, Thomas Piketty! Now we’re getting to the nub of things about how wealth disparities have risen! In your own way, you’re coming to the same conclusion the American philosopher and economist, Henry George, came to in his “Progress and Poverty” – that the returns to labour and capital will always be diminished if rentiers are permitted to steal our publicly-generated rents via untaxed rent-seeking.

    All we need in Australia is an all-in, single rate land tax, as suggested by the Henry Tax Review, because the wealthy own the more valuable land, and it can’t flee overseas. The first country to bring in a serious land tax will be the first country to reward workers and businesses with their fair due, and to redress the problem of economic rents flowing mainly to the 1%.


  • Geoff Saunders :

    15 Apr 2014 7:45:17am

    “…precious few solutions, it must be said…”

    Gee. Let me think…oh, how about this one? Rich folks and corporations should pay a bit more tax back to the societies upon whose security, stability, infrastructure and amenity they base their wealth.

    Call me Trotsky…


  • Groucho or Karl :

    18 Apr 2014 9:26:40pm

    Wonderful to have such a prominent (and modest) thinker on Aunty.

    Thanks Geraldine.

Healthy Ageing Japan-style

 

http://www.abc.net.au/radionational/programs/saturdayextra/japan27s-aging-population/5397864

Japan’s ageing population

Saturday 26 April 2014 8:30AM

A quarter of Japanese people are now aged over 65, with predictions that nearly half the population will reach that age by the end of the century.

In Japan people don’t just live longer, they work longer, stay healthier and approach old age in some interesting and innovative ways.

One policy initiative is old-age day care, which is well used and well organised in Japan.

Guests

Professor John Creighton Campbell
Visiting scholar, Institute of Gerontology at Tokyo University

Credits

Presenter
Dr Norman Swan
Producer
Kate Pearcy

an idea of earth shattering significance

ok.

been looking for alignment between a significant industry sector and human health. it’s a surprisingly difficult alignment to find… go figure?

but I had lunch with joran laird from nab health today, and something amazing dawned on me, on the back of the AIA Vitality launch.

Life (not health) insurance is the vehicle. The longer you pay premiums, the more money they make.

AMAZING… AN ALIGNMENT!!!

This puts the pressure on prevention advocates to put their money where their mouth is.

If they can extend healthy life by a second, how many billions of dollars does that make for life insurers?

imagine, a health intervention that doesn’t actually involve the blundering health system!!?? PERFECT!!!

And Australia’s the perfect test bed given the opt out status of life insurance and superannuation.

Joran wants to introduce me to the MLC guys.

What could possibly go wrong??????

Flu Trends fails…

  • “automated arrogance”
  • big data hubris
  • At its best, science is an open, cooperative and cumulative effort. If companies like Google keep their big data to themselves, they’ll miss out on the chance to improve their models, and make big data worthy of the hype. “To harness the research community, they need to be more transparent,” says Lazer. “The models for collaboration around big data haven’t been built.” It’s scary enough to think that private companies are gathering endless amounts of data on us. It’d be even worse if the conclusions they reach from that data aren’t even right.

But then this:
http://www.theatlantic.com/technology/archive/2014/03/in-defense-of-google-flu-trends/359688/

 

http://time.com/23782/google-flu-trends-big-data-problems/

Google’s Flu Project Shows the Failings of Big Data


A new study shows that using big data to predict the future isn’t as easy as it looks—and that raises questions about how Internet companies gather and use information

Big data: as buzzwords go, it’s inescapable. Gigantic corporations like SAS and IBM tout their big data analytics, while experts promise that big data—our exponentially growing ability to collect and analyze information about anything at all—will transform everything from business to sports to cooking. Big data was—no surprise—one of the major themes coming out of this month’s SXSW Interactive conference. It’s inescapable.

One of the most conspicuous examples of big data in action is Google’s data-aggregating tool Google Flu Trends (GFT). The program is designed to provide real-time monitoring of flu cases around the world based on Google searches that match terms for flu-related activity. Here’s how Google explains it:

We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for “flu” is actually sick, but a pattern emerges when all the flu-related search queries are added together. We compared our query counts with traditional flu surveillance systems and found that many search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in different countries and regions around the world.
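As an aside, the mechanics in that quote boil down to fitting observed flu prevalence against aggregated query volumes and then using the fit to “nowcast” new weeks. Below is a minimal sketch of that idea in Python; every number is a synthetic placeholder, since Google’s actual term selection and model were never published.

```python
# Minimal sketch of the GFT idea: regress flu prevalence on flu-related
# search-query fractions. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = 52

# Hypothetical weekly query fractions for a handful of flu-related terms (columns).
query_fractions = rng.random((weeks, 5)) * 0.01

# Hypothetical surveillance "ground truth" (e.g. an ILI percentage), loosely
# tied to the query signal so the regression has something to find.
true_weights = np.array([2.0, 0.5, 1.5, 0.0, 0.8])
ili_percent = query_fractions @ true_weights + rng.normal(0, 0.002, weeks)

# Fit on the first 40 weeks, then "nowcast" the remaining 12.
model = LinearRegression().fit(query_fractions[:40], ili_percent[:40])
nowcast = model.predict(query_fractions[40:])
print("estimated ILI % for held-out weeks:", np.round(nowcast, 4))
```

The point of the sketch is only to show how thin the link is: the model knows nothing about illness, just about whichever query columns happened to correlate with the surveillance series during training.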

Seems like a perfect use of the 500 million plus Google searches made each day. There’s a reason GFT became the symbol of big data in action, in books like Kenneth Cukier and Viktor Mayer-Schonberger’s Big Data: A Revolution That Will Transform How We Live, Work and Think. But there’s just one problem: as a new article in Science shows, when you compare its results to the real world, GFT doesn’t really work.

GFT overestimated the prevalence of flu in the 2012-2013 and 2011-2012 seasons by more than 50%. From August 2011 to September 2013, GFT over-predicted the prevalence of the flu in 100 out of 108 weeks. During the peak flu season last winter, GFT would have had us believe that 11% of the U.S. had influenza, nearly double the CDC numbers of 6%. If you wanted to project current flu prevalence, you would have done much better basing your models off of 3-week-old data on cases from the CDC than you would have using GFT’s sophisticated big data methods. “It’s a Dewey beats Truman moment for big data,” says David Lazer, a professor of computer science and politics at Northeastern University and one of the authors of the Science article.

Just as the editors of the Chicago Tribune believed it could predict the winner of the close 1948 Presidential election—they were wrong—Google believed that its big data methods alone were capable of producing a more accurate picture of real-time flu trends than old methods of prediction from past data. That’s a form of “automated arrogance,” or big data hubris, and it can be seen in a lot of the hype around big data today. Just because companies like Google can amass an astounding amount of information about the world doesn’t mean they’re always capable of processing that information to produce an accurate picture of what’s going on—especially if it turns out they’re gathering the wrong information. Not only did the search terms picked by GFT often not reflect incidences of actual illness—thus repeatedly overestimating just how sick the American public was—it also completely missed unexpected events like the nonseasonal 2009 H1N1-A flu pandemic. “A number of associations in the model were really problematic,” says Lazer. “It was doomed to fail.”

Nor did it help that GFT was dependent on Google’s top-secret and always changing search algorithm. Google modifies its search algorithm to provide more accurate results, but also to increase advertising revenue. Recommended searches, based on what other users have searched, can throw off the results for flu trends. While GFT assumes that the relative search volume for different flu terms is based in reality—the more of us are sick, the more of us will search for info about flu as we sniffle above our keyboards—in fact Google itself alters search behavior through that ever-shifting algorithm. If the data isn’t reflecting the world, how can it predict what will happen?

GFT and other big data methods can be useful, but only if they’re paired with what the Science researchers call “small data”—traditional forms of information collection. Put the two together, and you can get an excellent model of the world as it actually is. Of course, if big data is really just one tool of many, not an all-purpose path to omniscience, that would puncture the hype just a bit. You won’t get a SXSW panel with that kind of modesty.
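That “pair it with small data” suggestion can be sketched the same way as above: let the lagged surveillance series sit alongside the search signal as a second predictor and compare out-of-sample fit. Again, everything below is synthetic and purely illustrative, not the Science authors’ actual model.

```python
# Sketch of pairing "big data" (search signal) with "small data" (lagged
# surveillance counts). Synthetic data; the 3-week lag mirrors the article's
# point that CDC figures arrive a few weeks late.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 104
search_signal = rng.random(weeks)  # hypothetical aggregated flu-query volume

# Hypothetical ILI series with week-to-week persistence plus a search-linked term.
ili = np.zeros(weeks)
for t in range(1, weeks):
    ili[t] = 0.7 * ili[t - 1] + 0.3 * search_signal[t] + rng.normal(0, 0.05)

lag = 3
X = np.column_stack([search_signal[lag:], ili[:-lag]])  # [current searches, 3-week-old ILI]
y = ili[lag:]

hybrid = LinearRegression().fit(X[:80], y[:80])
search_only = LinearRegression().fit(X[:80, :1], y[:80])
print("hybrid R^2     :", round(hybrid.score(X[80:], y[80:]), 3))
print("search-only R^2:", round(search_only.score(X[80:, :1], y[80:]), 3))
```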

A bigger concern, though, is that much of the data being gathered in “big data”—and the formulas used to analyze it—is controlled by private companies that can be positively opaque. Google has never made the search terms used in GFT public, and there’s no way for researchers to replicate how GFT works. There’s Google Correlate, which allows anyone to find search patterns that purport to map real-life trends, but as the Science researchers wryly note: “Clicking the link titled ‘match the pattern of actual flu activity (this is how we built Google Flu Trends!)’ will not, ironically, produce a replication of the GFT search terms.” Even in the academic papers on GFT written by Google researchers, there’s no clear contact information, other than a generic Google email address. (Academic papers almost always contain direct contact information for lead authors.)

At its best, science is an open, cooperative and cumulative effort. If companies like Google keep their big data to themselves, they’ll miss out on the chance to improve their models, and make big data worthy of the hype. “To harness the research community, they need to be more transparent,” says Lazer. “The models for collaboration around big data haven’t been built.” It’s scary enough to think that private companies are gathering endless amounts of data on us. It’d be even worse if the conclusions they reach from that data aren’t even right.

Machines put half of US work at risk

Great tip from Michael Griffith on the back of last night’s terrific dinner conversation at the Nicholas Gruen-organised feast at Hellenic Republic…

http://www.bloomberg.com/news/2014-03-12/your-job-taught-to-machines-puts-half-u-s-work-at-risk.html

Paper (PDF): The_Future_of_Employment

Your Job Taught to Machines Puts Half U.S. Work at Risk

By Aki Ito  Mar 12, 2014 3:01 PM ET

Who needs an army of lawyers when you have a computer?

When Minneapolis attorney William Greene faced the task of combing through 1.3 million electronic documents in a recent case, he turned to a so-called smart computer program. Three associates selected relevant documents from a smaller sample, “teaching” their reasoning to the computer. The software’s algorithms then sorted the remaining material by importance.

“We were able to get the information we needed after reviewing only 2.3 percent of the documents,” said Greene, a Minneapolis-based partner at law firm Stinson Leonard Street LLP.
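The workflow Greene describes, lawyers label a small sample and software ranks the rest by predicted relevance, looks like standard supervised text classification. Here is a minimal sketch of that pattern with toy documents and scikit-learn; it is not the actual tool used in the case, just the general shape of “predictive coding”.

```python
# Minimal sketch of "predictive coding" for document review: a hand-labelled
# sample trains a text classifier that ranks the unreviewed documents.
# Toy documents only; not the software described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_docs = [
    "payment schedule attached for the disputed contract",
    "lunch menu for the office party",
    "notes on the contract breach and damages estimate",
    "reminder to renew parking permits",
]
labels = [1, 0, 1, 0]  # 1 = relevant to the matter, 0 = not relevant

unreviewed_docs = [
    "draft amendment to the contract payment terms",
    "holiday rota for reception staff",
]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(labelled_docs), labels)

scores = clf.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for score, doc in sorted(zip(scores, unreviewed_docs), reverse=True):
    print(f"{score:.2f}  {doc}")  # lawyers read from the top of this ranking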


Artificial intelligence has arrived in the American workplace, spawning tools that replicate human judgments that were too complicated and subtle to distill into instructions for a computer. Algorithms that “learn” from past examples relieve engineers of the need to write out every command.

The advances, coupled with mobile robots wired with this intelligence, make it likely that occupations employing almost half of today’s U.S. workers, ranging from loan officers to cab drivers and real estate agents, become possible to automate in the next decade or two, according to a study done at the University of Oxford in the U.K.


“These transitions have happened before,” said Carl Benedikt Frey, co-author of the study and a research fellow at the Oxford Martin Programme on the Impacts of Future Technology. “What’s different this time is that technological change is happening even faster, and it may affect a greater variety of jobs.”

Profound Imprint

It’s a transition on the heels of an information-technology revolution that’s already left a profound imprint on employment across the globe. For both physical and mental labor, computers and robots replaced tasks that could be specified in step-by-step instructions — jobs that involved routine responsibilities that were fully understood.

That eliminated work for typists, travel agents and a whole array of middle-class earners over a single generation.

Yet even increasingly powerful computers faced a mammoth obstacle: they could execute only what they’re explicitly told. It was a nightmare for engineers trying to anticipate every command necessary to get software to operate vehicles or accurately recognize speech. That kept many jobs in the exclusive province of human labor — until recently.

Oxford’s Frey is convinced of the broader reach of technology now because of advances in machine learning, a branch of artificial intelligence that has software “learn” how to make decisions by detecting patterns in those that humans have made.


702 Occupations

The approach has powered leapfrog improvements in making self-driving cars and voice search a reality in the past few years. To estimate the impact that will have on 702 U.S. occupations, Frey and colleague Michael Osborne applied some of their own machine learning.

They first looked at detailed descriptions for 70 of those jobs and classified them as either possible or impossible to computerize. Frey and Osborne then fed that data to an algorithm that analyzed what kind of jobs lend themselves to automation and predicted probabilities for the remaining 632 professions.

The higher that percentage, the sooner computers and robots will be capable of stepping in for human workers. Occupations that employed about 47 percent of Americans in 2010 scored high enough to rank in the risky category, meaning they could be possible to automate “perhaps over the next decade or two,” their analysis, released in September, showed.
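That two-step procedure, hand-label 70 occupations and let a probabilistic classifier score the remaining 632, can be sketched generically. The features, labelling rule and model below are invented stand-ins, not the study’s actual occupation descriptors or classifier.

```python
# Sketch of the label-then-predict procedure described above.
# Features, labels and the classifier are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical per-occupation features, e.g. scores for
# [routine-ness, manual dexterity, creative intelligence, social intelligence].
all_jobs = rng.random((702, 4))

labelled_idx = rng.choice(702, size=70, replace=False)  # the 70 hand-labelled jobs
# Invented labelling rule: routine work needing little creativity -> automatable.
labels = (all_jobs[labelled_idx, 0] - all_jobs[labelled_idx, 2] > 0).astype(int)

clf = LogisticRegression().fit(all_jobs[labelled_idx], labels)
p_automatable = clf.predict_proba(all_jobs)[:, 1]  # a probability for all 702 jobs

print("share of occupations scored above 0.7:", round((p_automatable > 0.7).mean(), 2))
```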

Safe Havens

“My initial reaction was, wow, can this really be accurate?” said Frey, who’s a Ph.D. economist. “Some of these occupations that used to be safe havens for human labor are disappearing one by one.”

Loan officers are among the most susceptible professions, at a 98 percent probability, according to Frey’s estimates. Inroads are already being made by Daric Inc., an online peer-to-peer lender partially funded by former Wells Fargo & Co. Chairman Richard Kovacevich. Begun in November, it doesn’t employ a single loan officer. It probably never will.

The startup’s weapon: an algorithm that not only learned what kind of person made for a safe borrower in the past, but is also constantly updating its understanding of who is creditworthy as more customers repay or default on their debt.

It’s this computerized “experience,” not a loan officer or a committee, that calls the shots, dictating which small businesses and individuals get financing and at what interest rate. It doesn’t need teams of analysts devising hypotheses and running calculations because the software does that on massive streams of data on its own.
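That “constantly updating its understanding” behaviour reads like online learning: fold each batch of repayments and defaults into the model as they arrive. A hedged sketch of the pattern follows, with invented borrower features and outcomes; it is not Daric’s system, and it assumes scikit-learn 1.1+ for the log_loss option.

```python
# Sketch of an incrementally updated credit model. Borrower features and
# outcomes are simulated; this is an illustration of online learning only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)
model = SGDClassifier(loss="log_loss")  # logistic regression trained online (sklearn >= 1.1)

def new_outcomes(n):
    """Simulate a batch of loans: [income, debt ratio, years of history] -> repaid?"""
    X = rng.random((n, 3))
    repaid = (0.6 * X[:, 0] - 0.8 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)) > 0
    return X, repaid.astype(int)

# Each month, fold the newly observed repayments/defaults into the model.
for month in range(12):
    X_batch, y_batch = new_outcomes(200)
    model.partial_fit(X_batch, y_batch, classes=[0, 1])

applicant = np.array([[0.5, 0.3, 0.4]])
print("estimated probability of repayment:", round(model.predict_proba(applicant)[0, 1], 2))
```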

Lower Rates

The result: An interest rate that’s typically 8.8 percentage points lower than from a credit card, according to Daric. “The algorithm is the loan officer,” said Greg Ryan, the 29-year-old chief executive officer of the Redwood City, California, company that consists of him and five programmers. “We don’t have overhead, and that means we can pass the savings on to our customers.”

Similar technology is transforming what is often the most expensive part of litigation, during which attorneys pore over e-mails, spreadsheets, social media posts and other records to build their arguments.

Each lawsuit was too nuanced for a standard set of sorting rules, and the string of keywords lawyers suggested before every case still missed too many smoking guns. The reading got so costly that many law firms farmed out the initial sorting to lower-paid contractors.

Training Software

The key to automate some of this was the old adage to show not tell — to have trained attorneys illustrate to the software the kind of documents that make for gold. Programs developed by companies such as San Francisco-based Recommind Inc. then run massive statistics to predict which files expensive lawyers shouldn’t waste their time reading. It took Greene’s team of lawyers 600 hours to get through the 1.3 million documents with the help of Recommind’s software. That task, assuming a speed of 100 documents per hour, could take 13,000 hours if humans had to read all of them.

“It doesn’t mean you need zero people, but it’s fewer people than you used to need,” said Daniel Martin Katz, a professor at Michigan State University’s College of Law in East Lansing who teaches legal analytics. “It’s definitely a transformation for getting people that first job while they’re trying to gain additional skills as lawyers.”

Robot Transporters

Smart software is transforming the world of manual labor as well, propelling improvements in autonomous cars that make it likely machines can replace taxi drivers and heavy truck drivers in the next two decades, according to Frey’s study.

One application already here: Aethon Inc.’s self-navigating TUG robots that transport soiled linens, drugs and meals, now in more than 140 hospitals, predominantly in the U.S. When Pittsburgh-based Aethon first installs its robots in new facilities, humans walk the machines around. It would have been impossible to have engineers pre-program all the necessary steps, according to Chief Executive Officer Aldo Zini.

“Every building we encounter is different,” said Zini. “It’s an infinite number” of potential contingencies and “you could never ahead of time try to program everything in. That would be a massive effort. We had to be able to adapt and learn as we go.”

Human-level Cognition

To be sure, employers won’t necessarily replace their staff with computers just because it becomes technically feasible to do so, Frey said. It could remain cheaper for some time to employ low-wage workers than invest in expensive robots. Consumers may prefer interacting with people rather than with self-service kiosks, while government regulators could choose to require human supervision of high-stakes decisions.

Even more, recent advances still don’t mean computers are nearing human-level cognition that would enable them to replicate most jobs. That’s at least “many decades” away, according to Andrew Ng, director of the Stanford Artificial Intelligence Laboratory near Palo Alto, California.

Machine-learning programs are best at specific routines with lots of data to train on and whose answers can be gleaned from the past. Try getting a computer to do something that’s unlike anything it’s seen before, and it just can’t improvise. Neither can machines come up with novel and creative solutions or learn from a couple examples the way people can, said Ng.

Employment Impact

“This stuff works best on fairly structured problems,” said Frank Levy, a professor emeritus at the Massachusetts Institute of Technology in Cambridge who has extensively researched technology’s impact on employment. “Where there’s more flexibility needed and you don’t have all the information in advance, it’s a problem.”

That means the positions of Greene and other senior attorneys, whose responsibilities range from synthesizing persuasive narratives to earning the trust of their clients, won’t disappear for some time. Less certain are prospects for those specializing in lower-paid legal work like document reading, or in jobs that involve other relatively repetitive tasks.

As more of the world gets digitized and the cost to store and process that information continues to decline, artificial intelligence will become even more pervasive in everyday life, says Stanford’s Ng.

“There will always be work for people who can synthesize information, think critically, and be flexible in how they act in different situations,” said Ng, also co-founder of online education provider Coursera Inc. Still, he said, “the jobs of yesterday won’t be the same as the jobs of tomorrow.”

Workers will likely need to find vocations involving more cognitively complex tasks that machines can’t touch. Those positions also typically require more schooling, said Frey. “It’s a race between technology and education.”

To contact the reporter on this story: Aki Ito in San Francisco at aito16@bloomberg.net

To contact the editors responsible for this story: Chris Wellisz at cwellisz@bloomberg.net Gail DeGeorge, Mark Rohner

A couple of terrific safety quality presentations

 

Rene Amalberti to a Geneva Quality Conference:

b13-rene-amalberti

http://www.isqua.org/docs/geneva-presentations/b13-rene-amalberti.pdf?sfvrsn=2

 

Some random, but 80 slides, often good

Clapper_ReliabilitySlides

http://net.acpe.org/interact/highReliability/References/powerpoints/Clapper_ReliabilitySlides.pdf

In defense of sugar

Interesting, detailed, slick presentation on the biochemistry and epidemiology of fructose on health

He discloses significant industry engagements (Coca-Cola, Dr Pepper, etc.).

Does present the view (shared by Katz) that it shouldn’t be about single nutrients, but diet and activity overall.

This seems to be industry-backed smoke to confuse the discussion.

http://media.soph.uab.edu/PresenterPlus/norc-sievenpiper-20140214/main.htm#

Title: Sugars and cardiometabolic health: A story lost in translation?
Presenter: John L. Sievenpiper, MD, PhD
Date: February 11, 2014
Description: NORC Seminar
SugarsOK