Category Archives: cool

Mindful habits for a lighter life…

Terrific tips…

http://www.apartmenttherapy.com/10-mindful-habits-for-a-lighter-life-201180

10 Mindful Habits for a Lighter Life

APARTMENT THERAPY’S HOME REMEDIES


A disorganized and cluttered home can quickly become a huge source of stress that can rob us of the ability to live life to the fullest. Of course we all want a lighter life, but what is the key to truly achieving it? Streamlining your home and developing the rituals needed to keep it that way can allow you to come home to a peaceful and relaxing environment instead of feeling overwhelmed the instant you turn the key in the door. Here are ten mindful habits that you can follow to help you achieve the life you deserve.

1. Have a vision for your space.
The first step toward a lighter life and less cluttered home is to have a vision for your life and your space. Picture it in your mind and use that image as motivation to make the tough decisions.

2. Have a plan.
Go on the offense and plan ahead. For example, have a blueprint for your wardrobe: choosing simple basics that coordinate with one another will ensure that you have an open, airy, unstuffed closet no matter its size.

3. Before buying, ask yourself, “Is it useful? Is it beautiful?”
If the answer is no, don’t allow the item to enter your home, even if it may be free or was a gift.

4. Don’t let it get past the front door – shred, trash, donate, or recycle it.
Immediately shred personal papers, recycle junk mail, and trash garbage – don’t let it pile up on your landing strip or counter. Keep a donation box in the back of your vehicle and drive it to your favorite charity as soon as it becomes full.

5. Follow the “one in, one out” rule.
If you buy a new dress, donate one to charity. When your child gets a new stuffed animal, have them choose one to give away. Make sure it’s done immediately and don’t be afraid to make substitutions. It’s okay to get rid of a blazer when you purchase a pair of pants – as long as it’s making space in your closet.

6. Utilize an outbox.
If you’re finding it difficult to get rid of something due to an emotional or financial connection, or you’re just afraid that you might need it “someday”, an outbox is a no-pressure way to get started on decluttering.

7. Have a place for everything.
If you know where something belongs it’s much more likely that you will actually put it away. Items that don’t have a home are the ones that pile up in the corners, cabinets, closets and on the horizontal surfaces of your home. Eventually, if left unchecked, those piles can become overwhelming sources of stress.

8. Follow routines and complete the cycle.
Those with clean, organized homes have a very simple secret – they follow small routines throughout the day. After dinner the dishes are washed and put away. When they get the mail it is opened and action is taken. The coat is hung up immediately upon entering the house. Completing the cycle is all part of this – putting clothing in the washer but failing to switch it to the dryer will leave you with moldy, musty-smelling clothing that needs to be rewashed and the dirty laundry piles up. Failing to unload the dishwasher in the morning leads to a pile of dirty dishes in the sink.

9. Be a problem solver.
Too many bulky CD’s and DVD’s? Toss the plastic cases and use paper instead. Overwhelming piles of old family photos? Digitize them. Keep losing your keys? Create a landing strip. When something threatens to overwhelm you or becomes a source of stress, set aside time to research or think about a solution for the problem and take action.

10. Be content with what you have.
As the old saying goes, “less is more”. Being content with what you have is truly the key to a lighter life.

What habits do you follow in order to maintain a lighter life and an uncluttered home? We’d love to hear about them!

Machines put half of US work at risk

Great tip from Michael Griffith, on the back of a terrific conversation at last night’s dinner, the Nicholas Gruen organised feast at Hellenic Republic…

http://www.bloomberg.com/news/2014-03-12/your-job-taught-to-machines-puts-half-u-s-work-at-risk.html

Paper (PDF): The_Future_of_Employment

Your Job Taught to Machines Puts Half U.S. Work at Risk

By Aki Ito  Mar 12, 2014 3:01 PM ET

Who needs an army of lawyers when you have a computer?

When Minneapolis attorney William Greene faced the task of combing through 1.3 million electronic documents in a recent case, he turned to a so-called smart computer program. Three associates selected relevant documents from a smaller sample, “teaching” their reasoning to the computer. The software’s algorithms then sorted the remaining material by importance.

“We were able to get the information we needed after reviewing only 2.3 percent of the documents,” said Greene, a Minneapolis-based partner at law firm Stinson Leonard Street LLP.
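Greene’s workflow is the standard “label a small sample, rank the rest” text-classification loop. The article doesn’t describe the vendor’s algorithm, so the sketch below is only a generic illustration of that idea, with a TF-IDF plus logistic regression classifier standing in for the proprietary model and all document names and labels invented for the example.

```python
# A minimal sketch (not Recommind's or any vendor's actual code) of the
# "label a small sample, rank the rest" review workflow described above.
# A generic TF-IDF + logistic regression classifier stands in for the
# proprietary model; documents and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def rank_by_relevance(sample_texts, sample_labels, remaining_texts):
    """Train on the hand-reviewed sample, then sort the remaining documents
    so attorneys read the likeliest-relevant ones first."""
    vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
    X_sample = vectorizer.fit_transform(sample_texts)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_sample, sample_labels)  # labels: 1 = relevant, 0 = not relevant

    scores = model.predict_proba(vectorizer.transform(remaining_texts))[:, 1]
    return sorted(zip(scores, remaining_texts), reverse=True)


# Hypothetical usage: associates hand-label a few thousand documents,
# and the other ~1.3 million are ranked instead of read end to end.
# ranked_docs = rank_by_relevance(sample_docs, sample_labels, all_other_docs)
```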


Artificial intelligence has arrived in the American workplace, spawning tools that replicate human judgments that were too complicated and subtle to distill into instructions for a computer. Algorithms that “learn” from past examples relieve engineers of the need to write out every command.

The advances, coupled with mobile robots wired with this intelligence, make it likely that occupations employing almost half of today’s U.S. workers, ranging from loan officers to cab drivers and real estate agents, will become possible to automate in the next decade or two, according to a study done at the University of Oxford in the U.K.

Aethon Inc.’s self-navigating TUG robot transports soiled linens, drugs and meals in hospitals. Source: Aethon Inc. via Bloomberg

“These transitions have happened before,” said Carl Benedikt Frey, co-author of the study and a research fellow at the Oxford Martin Programme on the Impacts of Future Technology. “What’s different this time is that technological change is happening even faster, and it may affect a greater variety of jobs.”

Profound Imprint

It’s a transition on the heels of an information-technology revolution that’s already left a profound imprint on employment across the globe. For both physical and mental labor, computers and robots replaced tasks that could be specified in step-by-step instructions — jobs that involved routine responsibilities that were fully understood.

That eliminated work for typists, travel agents and a whole array of middle-class earners over a single generation.

Yet even increasingly powerful computers faced a mammoth obstacle: they could execute only what they were explicitly told. It was a nightmare for engineers trying to anticipate every command necessary to get software to operate vehicles or accurately recognize speech. That kept many jobs in the exclusive province of human labor — until recently.

Oxford’s Frey is convinced of the broader reach of technology now because of advances in machine learning, a branch of artificial intelligence in which software “learns” how to make decisions by detecting patterns in the decisions humans have made.


702 Occupations

The approach has powered leapfrog improvements in making self-driving cars and voice search a reality in the past few years. To estimate the impact that will have on 702 U.S. occupations, Frey and colleague Michael Osborne applied some of their own machine learning.

They first looked at detailed descriptions for 70 of those jobs and classified them as either possible or impossible to computerize. Frey and Osborne then fed that data to an algorithm that analyzed what kind of jobs lend themselves to automation and predicted probabilities for the remaining 632 professions.

The higher that percentage, the sooner computers and robots will be capable of stepping in for human workers. Occupations that employed about 47 percent of Americans in 2010 scored high enough to rank in the risky category, meaning they could be possible to automate “perhaps over the next decade or two,” their analysis, released in September, showed.
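For readers who want the mechanics: the study’s procedure is a small supervised-learning exercise. The paper reports using a Gaussian process classifier over O*NET-derived job features; the sketch below mimics the shape of that procedure with placeholder data. Only the 70/632 split and the 0.7 high-risk cut-off come from the study; the features and labels here are invented, so the printed count is meaningless except as a demo.

```python
# Shape of the Frey-Osborne procedure with made-up data: hand-label 70 of 702
# occupations, fit a probabilistic classifier, predict automation probabilities
# for the other 632, and count how many clear the high-risk threshold.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(42)
features = rng.normal(size=(702, 9))       # 702 occupations x 9 task descriptors (placeholder values)
hand_labels = rng.integers(0, 2, size=70)  # 1 = automatable, 0 = not (placeholder labels)

clf = GaussianProcessClassifier(random_state=0).fit(features[:70], hand_labels)
p_auto = clf.predict_proba(features[70:])[:, 1]  # probability of computerisation for the other 632

print(f"{(p_auto > 0.7).sum()} of 632 occupations fall in the 'high risk' band (>0.7)")
```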

Safe Havens

“My initial reaction was, wow, can this really be accurate?” said Frey, who’s a Ph.D. economist. “Some of these occupations that used to be safe havens for human labor are disappearing one by one.”

Loan officers are among the most susceptible professions, at a 98 percent probability, according to Frey’s estimates. Inroads are already being made by Daric Inc., an online peer-to-peer lender partially funded by former Wells Fargo & Co. Chairman Richard Kovacevich. Begun in November, it doesn’t employ a single loan officer. It probably never will.

The startup’s weapon: an algorithm that not only learned what kind of person made for a safe borrower in the past, but is also constantly updating its understanding of who is creditworthy as more customers repay or default on their debt.

It’s this computerized “experience,” not a loan officer or a committee, that calls the shots, dictating which small businesses and individuals get financing and at what interest rate. It doesn’t need teams of analysts devising hypotheses and running calculations because the software does that on massive streams of data on its own.
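Daric hasn’t published its model, so the sketch below is only a guess at the general pattern the article describes: a classifier fitted to past loan outcomes that keeps updating as new repayments and defaults arrive, with the predicted default probability feeding straight into pricing. Every feature, number and pricing rule here is invented for illustration.

```python
# Illustration only: an online classifier updated as loans close out, in the
# spirit of the "constantly updating" scoring described above. The features
# (credit score, debt-to-income ratio, years of history), rates and pricing
# rule are all made up; features are left unscaled for brevity.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)

# Initial fit on historical outcomes: 1 = defaulted, 0 = repaid in full.
X_hist = np.array([[620, 0.45, 2], [780, 0.10, 9], [700, 0.30, 5], [560, 0.60, 1]], dtype=float)
y_hist = np.array([1, 0, 0, 1])
model.partial_fit(X_hist, y_hist, classes=[0, 1])


def quote_rate(applicant, base_rate=0.06, risk_spread=0.15):
    """Toy pricing rule: base rate plus a spread scaled by predicted default risk."""
    p_default = model.predict_proba(np.asarray([applicant], dtype=float))[0, 1]
    return base_rate + risk_spread * p_default


# Each time a loan is repaid or defaults, fold that outcome back into the model.
model.partial_fit(np.array([[655.0, 0.38, 3.0]]), np.array([0]))
print(f"quoted rate: {quote_rate([690, 0.25, 4]):.2%}")
```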

Lower Rates

The result: An interest rate that’s typically 8.8 percentage points lower than from a credit card, according to Daric. “The algorithm is the loan officer,” said Greg Ryan, the 29-year-old chief executive officer of the Redwood City, California, company that consists of him and five programmers. “We don’t have overhead, and that means we can pass the savings on to our customers.”

Similar technology is transforming what is often the most expensive part of litigation, during which attorneys pore over e-mails, spreadsheets, social media posts and other records to build their arguments.

Each lawsuit was too nuanced for a standard set of sorting rules, and the string of keywords lawyers suggested before every case still missed too many smoking guns. The reading got so costly that many law firms farmed out the initial sorting to lower-paid contractors.

Training Software

The key to automating some of this was the old adage to show, not tell — to have trained attorneys illustrate to the software the kind of documents that make for gold. Programs developed by companies such as San Francisco-based Recommind Inc. then run massive statistics to predict which files expensive lawyers shouldn’t waste their time reading. It took Greene’s team of lawyers 600 hours to get through the 1.3 million documents with the help of Recommind’s software. That task, assuming a speed of 100 documents per hour, would take 13,000 hours if humans had to read all of them.
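The time saving implied by those figures is easy to check with back-of-the-envelope arithmetic (the 100 documents-per-hour reading speed is the article’s assumption):

```python
# Back-of-the-envelope check of the figures quoted above.
total_docs = 1_300_000
docs_per_hour = 100          # reading speed assumed in the article

manual_hours = total_docs / docs_per_hour   # = 13,000 hours to read everything
assisted_hours = 600                        # hours Greene's team actually spent
docs_reviewed = 0.023 * total_docs          # the 2.3% that was read by humans

print(f"unassisted: {manual_hours:,.0f} h; assisted: {assisted_hours} h "
      f"(~{manual_hours / assisted_hours:.0f}x less reviewer time, "
      f"~{docs_reviewed:,.0f} documents actually read)")
```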

“It doesn’t mean you need zero people, but it’s fewer people than you used to need,” said Daniel Martin Katz, a professor at Michigan State University’s College of Law in East Lansing who teaches legal analytics. “It’s definitely a transformation for getting people that first job while they’re trying to gain additional skills as lawyers.”

Robot Transporters

Smart software is transforming the world of manual labor as well, propelling improvements in autonomous cars that make it likely machines can replace taxi drivers and heavy truck drivers in the next two decades, according to Frey’s study.

One application already here: Aethon Inc.’s self-navigating TUG robots, which transport soiled linens, drugs and meals in more than 140 hospitals, predominantly in the U.S. When Pittsburgh-based Aethon first installs its robots in new facilities, humans walk the machines around. It would have been impossible to have engineers pre-program all the necessary steps, according to Chief Executive Officer Aldo Zini.

“Every building we encounter is different,” said Zini. “It’s an infinite number” of potential contingencies and “you could never ahead of time try to program everything in. That would be a massive effort. We had to be able to adapt and learn as we go.”
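As a rough mental model of “walk the robot around once, then let it repeat the route” (this is not Aethon’s software, just a toy teach-and-repeat sketch with invented waypoints):

```python
# Toy teach-and-repeat sketch, not Aethon's actual navigation stack: record the
# positions visited while a human guides the robot, then replay them later.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class TeachAndRepeatRoute:
    waypoints: list[tuple[float, float]] = field(default_factory=list)

    def record(self, x: float, y: float) -> None:
        """Called during the guided walk-through; stores each visited position."""
        self.waypoints.append((x, y))

    def replay(self, start: int = 0):
        """Yield the learned route so a planner can drive it autonomously."""
        yield from self.waypoints[start:]


# Hypothetical teach run past a loading dock, an elevator and the linen room.
route = TeachAndRepeatRoute()
for xy in [(0.0, 0.0), (4.5, 0.0), (4.5, 12.0), (10.0, 12.0)]:
    route.record(*xy)

print(list(route.replay()))
```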

Human-level Cognition

To be sure, employers won’t necessarily replace their staff with computers just because it becomes technically feasible to do so, Frey said. It could remain cheaper for some time to employ low-wage workers than to invest in expensive robots. Consumers may prefer interacting with people rather than with self-service kiosks, while government regulators could choose to require human supervision of high-stakes decisions.

Even more, recent advances still don’t mean computers are nearing human-level cognition that would enable them to replicate most jobs. That’s at least “many decades” away, according to Andrew Ng, director of the Stanford Artificial Intelligence Laboratory near Palo Alto, California.

Machine-learning programs are best at specific routines with lots of data to train on and whose answers can be gleaned from the past. Try getting a computer to do something that’s unlike anything it’s seen before, and it just can’t improvise. Neither can machines come up with novel and creative solutions or learn from a couple of examples the way people can, said Ng.

Employment Impact

“This stuff works best on fairly structured problems,” said Frank Levy, a professor emeritus at the Massachusetts Institute of Technology in Cambridge who has extensively researched technology’s impact on employment. “Where there’s more flexibility needed and you don’t have all the information in advance, it’s a problem.”

That means the positions of Greene and other senior attorneys, whose responsibilities range from synthesizing persuasive narratives to earning the trust of their clients, won’t disappear for some time. Less certain are prospects for those specializing in lower-paid legal work like document reading, or in jobs that involve other relatively repetitive tasks.

As more of the world gets digitized and the cost to store and process that information continues to decline, artificial intelligence will become even more pervasive in everyday life, says Stanford’s Ng.

“There will always be work for people who can synthesize information, think critically, and be flexible in how they act in different situations,” said Ng, also co-founder of online education provider Coursera Inc. Still, he said, “the jobs of yesterday won’t be the same as the jobs of tomorrow.”

Workers will likely need to find vocations involving more cognitively complex tasks that machines can’t touch. Those positions also typically require more schooling, said Frey. “It’s a race between technology and education.”


A couple of terrific safety quality presentations

 

Rene Amalberti’s presentation to a Geneva Quality Conference:

b13-rene-amalberti

http://www.isqua.org/docs/geneva-presentations/b13-rene-amalberti.pdf?sfvrsn=2

 

Somewhat random, but 80 slides and often good:

Clapper_ReliabilitySlides

http://net.acpe.org/interact/highReliability/References/powerpoints/Clapper_ReliabilitySlides.pdf

A clear-headed shot from Jeffrey…

Not one stakeholder group left untrashed…

Great Einstein quote – the original definition of insanity presumably:

‘The significant problems we face cannot be solved at the same level of thinking we were at when we created them’

PDF: Braithwaite Delusions of health care JRSM 2014

The medical miracles delusion

Army ants subscribe to a simple rule: follow the ant in front. If the group gets lost, each ant tracks another, eventually forming a circle. According to crowd theorist James Surowiecki, one circle 400m in circumference marched for two days until they all died.[1]

Humans are not ants, but we often trudge together along the same trail, neglecting to look around for alternatives. Mass delusions involve large groups holding false or exaggerated beliefs for sustained periods. Humanity has a long, sorry list of these shadow-the-leader epidemics of collective consciousness which appear obviously wrong only in hindsight. Some last for centuries: early alchemists intent on transmuting base metals into gold and the Christian Crusades of Europe’s middle ages, for example. Others have correlates which resurface decades or centuries later: McCarthy’s persecution of alleged communists in the 1950s harked back to the Salem witch hunts of 17th century America, just as the 2008 Global Financial Crisis had much in common with the ‘South Sea Bubble’ which slashed 18th century Britain’s GDP.

In the educated 21st century, too, we blithely trust in economic and political systems which are stripping the earth’s resources, altering the climate and facilitating wars. Are we then similarly mistaken, en masse, about the capabilities of the health system?

Most of us believe in the miracles of modern medicine. We like to think that the health system is increasingly effective, that we are implementing better treatments and cures with rapid diffusion of new practices and pharmaceuticals, and that there is always another scientific or technological breakthrough just around the corner promising to save even more lives; all at an affordable price.

We maintain the faith despite multiple contraindications. Modern health systems consistently deliver at least 10% iatrogenic harm.[2] Despite very large investments and intermittent but important interventional successes, such as checklists in theatres[3] and clinical bundles in ICU,[4] there is no study showing a step-change reduction in this rate, systems-wide. Only half of care delivered is in line with guidelines,[5] one-third is thought to be waste,[6] and much is not evidence-based,[7] notwithstanding concerted efforts to optimise that evidence and incorporate it into routine practice.[8]

The reality is that progress is slowing, and medicine seems to be reaching the limits of its capacities. The potentially disastrous problems of antibiotic resistance, for example, are yet to play out, and this is only one point among many. New technologies such as the enormously expensive human genome project have provided only marginal benefits to date. We still do not have the answers to fundamental questions about the causes of common diseases and how to cure them. Many doctors are dissatisfied and increasingly pessimistic.[9,10] It must also be remembered that although death is no longer seen as natural in the modern era, everyone must die; yet we inflict most of our medical ‘miracles’ on people during their last six months of life. Le Fanu describes this levelling off, and now falling away, of health care progress in The Rise and Fall of Modern Medicine.[11]

Every major group of stakeholders has its own specific delusion which acts to augment the meta-level medical miracles delusion. Thus, the overarching delusion is buttressed by a set of related ‘viruses of the mind’, to borrow Richard Dawkins’ evocative phrase.[12]

Although politicians think and act as if they are running things, modern health systems are so complex and encompass so many competing interests that no one is actually in charge. Then, bureaucrats – acting under their own brand of ‘groupthink’ – assume their rules and pronouncements provide top-down stimulus for medical progress and improved clinical performance on the ground. Yet coalface clinicians are relatively autonomous agents, so there can only ever be modest policy trickle-down.[13,14]

Researchers, too, support the medical miracles industrial complex. The electronic database PubMed holds some 23 million articles and is growing rapidly. Every author hopes it will be his or her results that make a difference, yet there is far less take-up than imagined and comparatively little investment in the science of implementation[8] – translating evidence into real-life enhancements.

Nor are clinicians or the patients they serve immune. While frontline clinicians strive to provide good care, many myopically assume their practice is above average; the so-called Dunning-Kruger effect.[15,16] Of course, statistically, half of all care clinicians provide is below average. And notwithstanding decades of public awareness, patients believe modern medicine can repair them after decades of alcohol, drugs, sedentary lives and dietary excesses, despite evidence to the contrary.

Meanwhile, the media’s unremitting propensity to lend credibility to controversial views and to home in on ‘gee whiz’ breakthroughs – while ignoring the incremental and the routine – fuels unrealistic expectations of what modern medicine can deliver.

Throughout history, mass delusions have been aligned with mass desires for favourable outcomes. In the pursuit of medical miracles all of our interests line up in a perfect circle. We seem more like army ants than we think.

Just as the Global Financial Crisis was a wake-up call about the serious consequences of blind fiscal faith, we must begin to manage our expectations of the health system. Progress is always in jeopardy when the real problems are obscured.

The challenge is to harness the tough-minded scepticism needed to tackle this widely held ‘received wisdom’. One realistic way forward is to encourage stakeholders – politicians, policymakers, journalists, researchers, clinicians, patients – to consider, first, that their own and others’ perspectives may simply not be logically sustainable. This may be achieved through genuine inter-group discourse about the health system, where it is at, and its limitations.

As is so often the case, Albert Einstein said it best, in a typically neat aphorism: ‘The significant problems we face cannot be solved at the same level of thinking we were at when we created them’.[17] If we can humbly accept that we need new perspectives for healthcare – and radically different ways of thinking – we will be better placed to free ourselves from the hold of these peculiar viruses of the mind.

Fashionable wearables…

Where tech meets fashion…

Classy photos of integrated wearables in this story.

http://www.wired.com/design/2014/02/can-fashion-tech-work-together-make-wearables-truly-wearble/

What’s the Secret to Making Wearables That People Actually Want?


Misfit Wearables launched the Shine, an activity tracker that can be worn almost anywhere on your body. Image: Misfit Wearables

 

Last September, right around spring/summer Fashion Week, an unexpected group of people gathered for a round table discussion at the main offices of the Council of Fashion Designers of America in New York City. Present were Steven Kolb, the CFDA’s CEO, a few higher-ups from Intel and a handful of CFDA members who also happen to be big names in fashion and accessory design.

Intel had called the meeting to discuss the idea of starting a collaboration between the company and the fashion industry at large, with the ultimate goal of figuring out a way to turn their decidedly unwearable technology into something people—fashionable people—might actually want to put on their bodies.


Earlier in the summer, Intel, like most every other big technology company out there, had started a division to explore the future of wearable technology. Best known for supplying the processor chips you find in your computer’s guts, Intel has the technology to build what could eventually be a very smart device. They did not, however, have the design and fashion expertise to create stylish hardware.

“Technology companies know what is useful, but do we know how to make something desirable?” says Ayse Ildeniz, Intel’s vice president of business development and strategy for new devices. “We have thousands of hardware and software engineers looking at sensors, voice activation and how to build smart devices, but we wanted to create a platform where they can meet with the aesthetic gurus. There needs to be an alignment and discussion, so breakthroughs can actually come about and flourish.”

Enter the Hipsters

During CES this year, Intel announced the formalization of its partnership with the CFDA, Barneys and Opening Ceremony, an ultra-hip fashion company tasked with designing the first wearable product to be born from the collaboration. If that wasn’t proof enough that Intel was taking wearables seriously, the company also announced its Make It Wearable competition, which will award $1.3 million in prize money ($500,000 for the grand prize) to whoever comes up with the most promising design in wearable tech this year. Those are some pretty good incentives.


Netatmo’s June is a UV tracker that takes the form of a jewel designed by French jewelry designer Camille Toupet. It syncs up with your smartphone to help keep track of your skin health. Image: Netatmo

We’ve only recently begun to see technology and fashion take each other seriously. A few months ago, Apple hired Angela Ahrendts, Burberry’s former CEO, and before that they poached Paul Deneve, Yves Saint Laurent’s CEO. Given the optimistic projections for wearable tech’s influence, the union between these two worlds seems inevitable. If wearable technology makers have learned one thing so far, it’s that just because you make something, it doesn’t mean people are actually going to wear it. Adoption of wearable tech depends on striking a delicate balance between style and functionality, and no one has leveled that see-saw quite yet. And the fashion crowd, as progressive as they are, have never been trained to think through the rigors of product design, ranging from use cases to demographics.

“Products are often made with good intentions, but in a vacuum,” says Kolb. “You have programming people thinking about wearable technology but not necessarily, and I don’t mean this with disrespect, thinking about the aesthetic. Then you’ve got fashion people who are very much focused on the overall look but don’t have the technological language or vocabulary.”

Kolb explains that oftentimes, fashion people have a sci-fi understanding of what technology can do. On the flip side, technologists and even industrial designers have a difficult time grasping what it means to create something people feel good wearing. “Fashion designers are always thinking about things like, how does that clasp close, how does this leather feel?” he says. “That element might not necessarily be on the radar of a tech person, but it’s definitely on the radar of a fashion person.”


Up to this point, technology companies have approached wearables with a one-size-fits-all mentality. Even Google Glass’ Titanium Collection, while certainly more stylish than the original, hasn’t gotten it quite right. A choice of frames that say, “I write code and like to shop” is a start, but in order for people to really want to wear Glass, we have to be able to seamlessly integrate them into our own very personal style. We have to feel like we’ve had more of a choice in the matter.

The Missing Link: Modularity

“I think fashion and accessory brands in the near future will make glasses that work with Glass in the same way we have accessories and covers for our mobile phones,” explains Syuzi Pakhchyan, accessories lead at Misfit Wearables. “The key here is to design technology that can be modular and allow others to develop an ecosystem of products that work with your technology.”

Misfit is the maker of the Shine, a pretty, smoothed-over disc that acts as an activity tracker. As far as wearable tech goes, the Shine is actually quite lovely. Misfit’s offering is part of an increasing number of wearables that make an honest effort to look good. There are others like Netatmo’s June, a UV tracker disguised as a sparkling rhinestone that can be worn as a brooch or on a leather band around the wrist, and the collaboration between Cellini and CSR to create a Bluetooth-enabled pendant.

Working Together Earlier

The intentions are good, but they all fall a little short, as though the styling was a last-minute gloss instead of baked into the actual product. In order for wearables to feel authentically cool, fashion and technology need to begin working together from the earliest moments of product development, discussing what current technology enables and having an open-minded conversation about how it could be worn.


As Pakhchyan points out, much like our clothes, not everyone wants or needs to wear the same piece of technology, and we don’t necessarily have to wear it all the time either. Tech companies have been chasing the elusive silver-bullet smartwatch, but maybe it’s not such a bad thing to treat wearables like the other things we wear: separate, individually valuable pieces that can work together to ultimately create the perfect outfit. Staying focused, at least while we’re figuring out what form and functionality work and what doesn’t, might not be such a bad thing.

Right now, the collaboration between Intel and the CFDA is just getting started. How it will shape up depends on what each organization is trying to achieve. But at least by beginning to build a real bridge between the fashion and technology worlds, we’re opening up discussion about how these industries can benefit each other, which hopefully will lead to some great innovations.

For what it’s worth, Pakhchyan figures it’s only a matter of time before the parallel paths of technology and fashion intersect for good. And when they do? We’ll probably be seeing a lot more people actually wearing wearables. “I think we’re going to see a lot more beautiful and interesting wearables coming out in the next few years,” she says. “I have a feeling we’re going to look back at these plastic wrist-worn things and be like, ‘Oh, that was kind of an awkward stage.’”


This pendant prototype, a collaboration between CSR (developers of Bluetooth Smart) and jewelry designers Cellini, communicates phone alerts via a glowing green light. Image: CSR

Liz Stinson


Global solar vs standard time deltas

How cool is this map? OMG. Everything explained… and east coast Australia is on time.

Solar time versus standard time around the world


MARCH 3, 2014  |  MAPPING

How much is time wrong around the world?

After noting Spain’s late dinner times, Stefano Maggiolo pointed to relatively late sunsets, compared to standard time, as one possible reason. He then mapped solar time versus standard time around the world.

Looking for other regions of the world having the same peculiarity of Spain, I edited a world map from Wikipedia to show the difference between solar and standard time. It turns out, there are many places where the sun rises and sets late in the day, like in Spain, but not a lot where it is very early (highlighted in red and green in the map, respectively). Most of Russia is heavily red, but mostly in zones with very scarce population; the exception is St. Petersburg, with a discrepancy of two hours, but the effect on time is mitigated by the high latitude. The most extreme example of Spain-like time is western China: the difference reaches three hours against solar time. For example, today the sun rises there at 10:15 and sets at 19:45, and solar noon is at 15:01.
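The sizes of those gaps follow from simple geometry: the mean sun crosses 15° of longitude per hour, so every degree between a place and its time zone’s reference meridian shifts local solar noon by about four minutes (ignoring the equation of time and daylight saving). A quick sketch, using Kashgar in western China and Madrid as hypothetical check points:

```python
# Rough solar-vs-standard offset from longitude alone (mean sun; the equation
# of time and daylight saving are ignored, so real values drift by some minutes).
def solar_offset_minutes(longitude_deg: float, utc_offset_hours: float) -> float:
    """Minutes by which mean solar time runs behind the standard clock
    (positive = the sun is 'late', as in Spain or western China)."""
    zone_meridian = 15.0 * utc_offset_hours        # the sun covers 15 degrees of longitude per hour
    return (zone_meridian - longitude_deg) * 4.0   # 4 minutes of time per degree

# Kashgar sits near 76 E but runs on China's single UTC+8 zone:
print(solar_offset_minutes(76.0, 8))    # ~176 min, roughly the three-hour gap described
# Madrid, near 3.7 W on UTC+1 in winter, comes out about 75 minutes 'late':
print(solar_offset_minutes(-3.7, 1))
```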