Every day it's the same. One by one, workers at PwC's head office in London trickle through the revolving doors as they prepare for another day at work. But something is different about today.
Curiosity replaces the thousand-yard stare on the faces of many on the way to work this morning, as they peer into a part of the building redesigned for the future, reports the Daily Telegraph.
Inside is a giant screen. Guests are invited to sit at a U-shaped table in the middle of the room, where aesthetics are clearly more important than comfort.
In one corner, a virtual reality experience is ready to immerse chief executives and other officials in the problems and war games of tomorrow.
iPads are scattered like cushions around the room. Interaction is encouraged. But suddenly what looks like another iPad mounted on a Segway charges towards a group of unsuspecting journalists.
"Oops," says the sheepish demonstrator as she puts down the device. The crowd disperses. It is not quite the robot revolution everyone was expecting.
Can computers be creative?
We have been told the future could be bleak. Andy Haldane, the chief economist at the Bank of England, has warned that the rise of the robot will put as many as 15 million UK jobs at risk.
Academics at Oxford University have already published a cheat sheet identifying the most vulnerable jobs.
Authors Carl Benedikt Frey and Michael Osborne have urged library technicians and insurance underwriters to think about a career change. Therapists, social workers and personal trainers have less to worry about.
Being creative helps. Choreographers, musicians and teachers are also at less risk of being left on the scrapheap, the study shows.
After all, robots can draw, but can they design? Machines can follow patterns, but can they predict them?
For Erik Brynjolfsson and Andrew McAfee, the answer is a resounding yes.
The pair, both professors at the MIT Sloan School of Management in Cambridge, Massachusetts, believe the biggest barrier to using technology to generate higher productivity and bigger profits is not the limits of robots, but the hubris of humans.
In their latest book, Machine, Platform, Crowd, Brynjolfsson and McAfee set out to dispel the myth that robots are only suited for "dull, dirty and dangerous" tasks that humans cannot, or do not want to, do. They insist that artificial intelligence and machine learning are not only getting smarter, but also more creative.
Take the US elections. Dan Wagner, Barack Obama's chief analytics officer during the 2012 presidential election campaign, used data and machine learning to score every US voter on how likely he or she was to vote for Obama's re-election.
The analysts used algorithms to judge the probability that each potential voter would actually go out to vote. Floating voters were assessed on the basis of whether they could be persuaded to choose Obama.
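The campaign's actual models are not public, but the two-score approach described above, rating each voter separately on support and turnout and combining the results, can be sketched in a few lines. All weights, feature names and thresholds below are invented for illustration:

```python
import math

def logistic(x):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def score_voter(features, weights):
    """Weighted sum of a voter's features, passed through a logistic link."""
    return logistic(sum(w * features.get(name, 0.0) for name, w in weights.items()))

# Hypothetical model weights: real campaign models were fit to survey
# and voter-file data, not hand-picked like these.
SUPPORT_WEIGHTS = {"registered_dem": 2.0, "age_under_30": 0.5, "donated_2008": 1.5}
TURNOUT_WEIGHTS = {"voted_2008": 2.5, "voted_2010": 1.0, "age_under_30": -0.5}

def segment(features):
    support = score_voter(features, SUPPORT_WEIGHTS)
    turnout = score_voter(features, TURNOUT_WEIGHTS)
    if support > 0.7 and turnout < 0.5:
        return "get out the vote"   # likely supporters who may stay home
    if 0.4 <= support <= 0.6:
        return "persuadable"        # floating voters worth an advert
    return "low priority"

# A young registered Democrat with no voting history scores high on
# support but low on turnout, so the model flags them for mobilisation.
print(segment({"registered_dem": 1, "voted_2008": 0, "age_under_30": 1}))
```

The point of splitting the two scores is that they call for different spending: "persuadables" justify advertising, while "get out the vote" targets justify door-knocking and reminders.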
Wagner had money and he needed to buy TV adverts. But where? The Obama campaign wanted to target 18-24-year-old men in Colorado. Demographic data pointed to the same predictable advertising slots: Tuesday evening Family Guy reruns.
But the right demographic did not necessarily mean the right audience. Wagner's analysis showed something different was needed, and his ability to identify the "persuadables" and "get out the vote" groups within this demographic helped him pick the most cost-effective advertising buys.
Based on the data, the Obama team bought slots in between late-night reruns of Everybody Loves Raymond on TV Land rather than prime-time slots during Family Guy.
The results surprised everyone. "It just kind of popped out," Wagner told Brynjolfsson and McAfee. More importantly, the strategy secured the votes.
More than numbers
Humans must recognise that AI is more than just number-crunching, Brynjolfsson and McAfee say.
IBM's Watson may be known as the supercomputer that beat the smartest humans at Jeopardy, but it has also written a cookbook.
The concept for the structural and aesthetic wonder that is the Shanghai Tower in China was created by a machine, and only then adapted by humans.
Computers have even designed a race car chassis from scratch. A few years ago, researchers at the 3D design specialist Autodesk teamed up with a group of car designers and stunt drivers to take on the task. Project Dreamcatcher was born.
The team took a car out to the Mojave Desert and pushed it to its limits, collecting 20m data points along the way. They used software to create an optimal structure designed to perform on the race track.
What the Autodesk software came up with surprised many. It looked more like a skull than a car chassis, as if Mother Nature had designed it herself. It was strong, slim, durable and, most strikingly, asymmetric.
The software understood that this race car turned in one direction more often than the other, and adapted the design to the forces put on the structure.
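Autodesk's Dreamcatcher software is proprietary, but the core idea behind that asymmetry, sizing each part of the structure to the loads actually measured on it rather than to a symmetric template, can be illustrated with a toy calculation. The material limit, safety factor and load figures below are all invented:

```python
# Toy sketch of load-driven sizing: each side of a chassis member gets
# just enough cross-section to carry the peak load recorded on that side.
YIELD_STRENGTH = 250.0   # hypothetical material limit, N/mm^2
SAFETY_FACTOR = 1.5

def required_area(peak_load_newtons):
    """Minimum cross-section (mm^2) keeping stress below the limit."""
    return peak_load_newtons * SAFETY_FACTOR / YIELD_STRENGTH

# A track with mostly left-hand corners loads the right side harder,
# so the sensor data (invented here) is itself asymmetric.
peak_loads = {"left_member": 12_000.0, "right_member": 20_000.0}

sizes = {name: required_area(load) for name, load in peak_loads.items()}
print(sizes)
```

Run on these numbers, the right member comes out roughly two-thirds larger than the left. A human designer working from a symmetric template would typically size both sides for the worst case; software optimising directly against the measured loads has no such habit, which is why its designs end up deeply asymmetric.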
"Designers have been aware of this fact for a long time, but their creations have rarely, if ever, been as deeply asymmetric as the ones that emerge from generative-design software," Brynjolfsson and McAfee write.
Of course, race tracks are not all the same. Different tracks need different chassis, which implies changes to harnessing systems, engines and gearboxes. Catering for those changes can get expensive.
For now at least, too, these machine-designed cars are still driven by humans, who will have to adapt to differences in the design of the car - and who still care deeply about looks.
This human trait is recognised by Autodesk, which says Dreamcatcher has created the "complex math to make a good structure" and left the human designers to "make a 'cover' that meets whatever aesthetic criteria is important."
Examples like this convince Brynjolfsson and McAfee that "digital creativity is more than mimicry and incrementalism", leaving them with hope that "computers can and will" come up with novel solutions that never would occur to humans.
Accepting the future
But will humans accept these solutions? All too often, the academics argue, decisions are deferred to the "HiPPOs": the "highest paid person's opinions", which rest on intuition, gut feeling and biases rather than evidence.
"The evidence is clear that this approach frequently doesn't work well, and that HiPPOs too often destroy value," Brynjolfsson and McAfee say.
It is an argument that the partners at PwC come across every day.
Aldous Birchall, who leads PwC's AI and machine learning work in financial services, says people still believe they know best when it comes to finding solutions.
"Certainly in my area of financial services, there's a lot of credit analysts out there who say the type of analysis they do could never be done by a machine. Yet I have very good empirical evidence to show that it can often be done by a machine much better than a human," he says.
Jon Andrews, head of technology and investment at PwC, also comes across resistance: "We're still at a point in time when the bar that AI is expected to hurdle is 100 per cent accuracy, when actually it just needs to be better than humans, because fundamentally that's when there is a business case for it."
The evidence that robots are better decision-makers is compelling.
Human bias is everywhere.
It is why judges are more likely to grant prisoners parole just after breakfast than just before lunch, when their stomachs are rumbling.
It is why AI is helping managers to budget better, and ensuring the best candidates are recruited for the job, regardless of their age, gender or race.
HiPPOs "need to become an endangered species within organisations," Brynjolfsson and McAfee say.
Andrews says education will be vital to unlocking the potential.
"At the moment, the majority of the UK education system focuses on this very exam-focused approach which has become centred around knowledge rather than problem solving creativity. We have to start at the beginning of the education system and work all the way through."
All agree that the big decisions will continue to be shaped by the entrepreneurs of the future.
The next hit novel will not be written by a robot, and machine learning will never be able to coordinate large-scale creativity and planning.
Self-employment may be on the rise, but the big companies and the managers that run them still have a vital role to play in driving innovation forward.
"Knowing what people want next usually requires a deep understanding of what it means to be a person, and what it is like to experience the world with all our senses and emotions," Brynjolfsson and McAfee say.
For now, convincing humans to let go of some decision-making remains a slow process.
Birchall says clients still want to know the reasoning behind the AI's decisions. Humans are not quite ready to cede control just yet.
Brynjolfsson and McAfee sympathise, but say letting go is the key to success.
"We appreciate that losing decision-making authority you once had is uncomfortable, and that no one likes feeling like a servant to a computer. But does that mean that the wrong inmates should be let out or kept in prison, just so that judges and parole boards can continue to work as they used to? That companies should hire the wrong people, just to let interviewers keep feeling smart?
"For us, the answer to these questions is no."
This article first appeared on the Daily Telegraph and is reproduced here with permission.