Thursday, November 1, 2018

AI Arms Race and what it means


In the spring of 2016, an artificial intelligence system called AlphaGo defeated a world champion Go player in a match at the Four Seasons hotel in Seoul. In the US, this momentous news required some unpacking. Most Americans were unfamiliar with Go, an ancient Asian game that involves placing black and white stones on a wooden board. And the technology that had emerged victorious was even more foreign: a form of AI called machine learning, which uses large data sets to train a computer to recognize patterns and make its own strategic choices.
Still, the gist of the story was familiar enough. Computers had already mastered checkers and chess; now they had learned to dominate a still more complex game. Geeks cared, but most people didn’t. In the White House, Terah Lyons, one of Barack Obama’s science and technology policy advisers, remembers her team cheering on the fourth floor of the Eisenhower Executive Building. “We saw it as a win for technology,” she says. “The next day the rest of the White House forgot about it.”
In China, by contrast, 280 million people watched AlphaGo win. There, what really mattered was that a machine owned by a California company, Alphabet, the parent of Google, had conquered a game invented more than 2,500 years ago in Asia. Americans don’t even play Go. And yet they had somehow figured out how to vanquish it? Kai-Fu Lee, a pioneer in the field of AI, remembers being asked to comment on the match by nearly every major television station in the country. Until then, he had been quietly investing in Chinese AI companies. But when he saw the attention, he started broadcasting his venture fund’s artificial intelligence investment strategy. “We said, OK, after this match, the whole country is going to know about AI,” he recalls. “So we went big.”
In Beijing, the machine’s victory cracked the air like a warning shot. That impression was only reinforced when, over the next few months, the Obama administration published a series of reports grappling with the benefits and risks of AI. The papers made a series of recommendations for government action, both to stave off potential job losses from automation and to invest in the development of machine learning. A group of senior policy wonks inside China’s science and technology bureaucracy, who had already been working on their own plan for AI, believed they were seeing signs of a focused, emerging US strategy—and they needed to act fast.
In May 2017, AlphaGo triumphed again, this time over Ke Jie, a Chinese Go master, ranked at the top of the world. Two months later, China unveiled its Next Generation Artificial Intelligence Development Plan, a document that laid out the country’s strategy to become the global leader in AI by 2030. And with this clear signal from Beijing, it was as if a giant axle began to turn in the machinery of the industrial state. Other Chinese government ministries soon issued their own plans, based on the strategy sketched out by Beijing’s planners. Expert advisory groups and industry alliances cropped up, and local governments all over China began to fund AI ventures.
China’s tech giants were enlisted as well. Alibaba, the giant online retailer, was tapped to develop a “City Brain” for a new Special Economic Zone being planned about 60 miles southwest of Beijing. Already, in the city of Hangzhou, the company was soaking up data from thousands of street cameras and using it to control traffic lights with AI, optimizing traffic flow in much the way AlphaGo had optimized for winning moves on the Go board; now Alibaba would help design AI into a new megacity’s entire infrastructure from the ground up.
On October 18, 2017, China’s president, Xi Jinping, stood in front of 2,300 of his fellow party members, flanked by enormous red drapes and a giant gold hammer and sickle. As Xi laid out his plans for the party’s future over nearly three and a half hours, he named artificial intelligence, big data, and the internet as core technologies that would help transform China into an advanced industrial economy in the coming decades. It was the first time many of these technologies had explicitly come up in a president’s speech at the Communist Party Congress, a once-in-five-years event.
In the decisive span of a few months, the Chinese government had given its citizens a new vision of the future, and made clear that it would be coming fast. “If AlphaGo was China’s Sputnik moment, the government’s AI plan was like President John F. Kennedy’s landmark speech calling for America to land a man on the moon,” Kai-Fu Lee writes in his new book, AI Superpowers.
Meanwhile, as Beijing began to build up speed, the United States government was slowing to a walk. After President Trump took office, the Obama-era reports on AI were relegated to an archived website. In March 2017, Treasury secretary Steven Mnuchin said that the idea of humans losing jobs because of AI “is not even on our radar screen.” It might be a threat, he added, in “50 to 100 more years.” That same year, China committed itself to building a $150 billion AI industry by 2030.
Only slowly, pushed mainly by the Pentagon, has the Trump administration begun to talk about, and fund, national AI initiatives. In May, secretary of defense James Mattis read an article in The Atlantic by Henry Kissinger, who warned that AI was moving so quickly it could soon subvert human intelligence and creativity. The result, he warned, could be the end of the Enlightenment; he called for a government commission to study the issue.
Many AI experts pooh-poohed Kissinger’s article for extrapolating too broadly and darkly from the field’s narrow accomplishments. Mattis, however, pulled the article into a memo for President Trump. That month, Michael Kratsios, Trump’s top adviser on technology, organized a summit on the subject of AI. In an interview with WIRED this summer, Kratsios said the White House was fully committed to AI research and to figuring out “what the government can do, and how it can do it even more.” In June, Ivanka Trump tweeted out a link to the Kissinger piece, praising its account of “the ongoing technological revolution whose consequences we have failed to fully reckon with.”
But if the Trump White House was relatively slow to grasp the significance and potential of AI, it was quick to rivalry. By midsummer, talk of a “new cold war arms race” over artificial intelligence was pervasive in the US media.
At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. And what’s at stake is not just the technological dominance of the United States. At a moment of great anxiety about the state of modern liberal democracy, AI in China appears to be an incredibly powerful enabler of authoritarian rule. Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?
After the end of the Cold War, conventional wisdom in the West came to be guided by two articles of faith: that liberal democracy was destined to spread across the planet, and that digital technology would be the wind at its back. The censorship, media consolidation, and propaganda that had propped up Soviet-era autocracies would simply be inoperable in the age of the internet. The World Wide Web would give people free, unmediated access to the world’s information. It would enable citizens to organize, hold governments accountable, and evade the predations of the state.
No one had more confidence in the liberalizing effects of technology than the tech companies themselves: Twitter was, in one executive’s words, “the free speech wing of the free speech party”; Facebook wanted to make the world more open and connected; Google, cofounded by a refugee from the Soviet Union, wanted to organize the world’s information and make it accessible to all.
As the era of social media kicked in, the techno-optimists’ twin articles of faith looked unassailable. In 2009, during Iran’s Green Revolution, outsiders marveled at how protest organizers on Twitter circumvented the state’s media blackout. A year later, the Arab Spring toppled regimes in Tunisia and Egypt and sparked protests across the Middle East, spreading with all the virality of a social media phenomenon—because, in large part, that’s what it was. “If you want to liberate a society, all you need is the internet,” said Wael Ghonim, an Egyptian Google executive who set up the primary Facebook group that helped galvanize dissenters in Cairo.
It didn’t take long, however, for the Arab Spring to turn into winter—in ways that would become eerily familiar to Western countries in a few years. Within a few weeks of President Hosni Mubarak’s departure, Ghonim saw activists start to turn on each other. Social media was amplifying everyone’s worst instincts. “You could easily see the voices in the middle become more and more irrelevant, the voices on the extremes becoming more and more heard,” he recalls. The activists who were vulgar or attacked other groups or responded with rage got more likes and shares. That gave them more influence, and it gave otherwise moderate people a model to emulate. Why post something conciliatory if no one on Facebook will read it? Instead, post something full of vitriol that millions will see. Ghonim began to become dispirited. The tools that had brought the protesters together, he said, were now tearing them apart.
Ultimately, Egypt elected a government run by the Muslim Brotherhood, a traditionalist political machine that had played little part in the initial Tahrir Square groundswell. Then in 2013 the military staged a successful coup. Soon thereafter, Ghonim moved to California, where he tried to set up a social media platform that would favor reason over outrage. But it was too hard to peel users away from Twitter and Facebook, and the project didn’t last long. Egypt’s military government, meanwhile, recently passed a law that allows it to wipe its critics off social media.
Of course, it’s not just in Egypt and the Middle East that things have gone sour. In a remarkably short time, the exuberance surrounding the spread of liberalism and technology has turned into a crisis of faith in both. Overall, the number of liberal democracies in the world has been in steady decline for a decade. According to Freedom House, 71 countries last year saw declines in their political rights and freedoms; only 35 saw improvements.
While the crisis of democracy has many causes, social media platforms have come to seem like a prime culprit. The recent wave of antiestablishment politicians and nativist political movements—Donald Trump in the United States; Brexit in the UK; the resurgent right wing in Germany, Italy, and across Eastern Europe—has revealed not only a deep disenchantment with the global rules and institutions of Western democracy, but also an automated media landscape that rewards demagoguery with clicks. Political opinions have become more polarized, populations have become more tribal, and civic nationalism is disintegrating.
Which leaves us where we are now: Rather than cheering for the way social platforms spread democracy, we are busy assessing the extent to which they corrode it.
In China, government officials watched the Arab Spring with attentiveness and unease. Beijing already had the world’s most sophisticated internet control system, dynamically blocking a huge swath of foreign web domains, including Google. Now it garlanded its Great Firewall with even more barbed wire. China developed new ways to surgically turn off internet access in zones within cities, including a major block of downtown Beijing where it feared demonstrations. It also digitally walled off the entire province of Xinjiang after violent protests there that spread via the internet. Beijing may even have dabbled with creating a nationwide internet “kill switch.”
This bowdlerized version of the internet doesn’t sound at all like the original dream of the World Wide Web, but it has thrived nonetheless. By now, there are roughly 800 million people who surf the internet, exchange chat messages, and shop online behind the Great Firewall—nearly as many people as live in the United States and Europe combined. And for many Chinese, rising middle-class prosperity has made online censorship considerably easier to bear. Give me liberty, the line might go, or give me wealth.
China’s authoritarianism, which has doubled down under Xi’s leadership, certainly hasn’t hindered the Chinese tech industry. Over the past decade, China’s leading tech companies have come to dominate their home markets and compete globally. They’ve expanded through acquisitions in Southeast Asia. Baidu and Tencent have set up research centers in the US, and Huawei sells advanced networking equipment in Europe. The old silk road is being strung with Chinese fiber-optic cables and network equipment.
More than any other country, China has shown that, with a few adjustments, autocracy is quite compatible with the internet age. But those adjustments have caused the internet itself to start to break apart, like two continents cracking along a shelf. There’s the freewheeling, lightly regulated internet dominated by the geeks of Silicon Valley. And then there’s China’s authoritarian alternative, powered by massive, home-grown tech giants as innovative as their Western counterparts.
Today, China doesn’t just play defense against viral dissent by redacting troublesome parts of the internet; the government actively wields technology as a tool of control. In cities across China, including in Xinjiang, authorities are trying out facial-recognition software and other AI-powered technologies for security. In May, facial-recognition cameras at Jiaxing Sports Center Stadium in Zhejiang led to the arrest of a fugitive who was attending a concert. He had been wanted since 2015 for allegedly stealing more than $17,000 worth of potatoes. China’s Police Cloud System is built to monitor seven categories of people, including those who “undermine stability.” The country also aspires to build a system that will give every citizen and every company a social credit score: Imagine your FICO score adjusted to reflect your shopping habits, your driving record, and the appropriateness of your politics.
The fundamental force driving this change—this pivot from defense to offense—is a shift in how power flows from technology. In the beginning, the communications revolution made computers affordable to the masses. It wired devices together in a giant global network and shrank them down to the size of your hand. It was a revolution that empowered the individual—the lone programmer with the power to create in her pocket, the academic with infinite research at his fingertips, the dissident with a new and powerful way of organizing resistance.
Today’s stage of the digital revolution is different. That supercomputer in your pocket is also a homing device. It’s tracking your every “like,” keeping a record of everyone you talk to, everything you buy, everything you read, and everywhere you go. Your fridge, your thermostat, your smartwatch, and your car are increasingly sending your information back to headquarters too. In the future, security cameras will track the ways our eyes dilate, and sensors on the wall will track our body temperature.
In today’s digital world, in China and the West alike, power comes from controlling data, making sense of it, and using it to influence how people behave. That power will only grow as the next generation of mobile networks goes live. Remember how it felt like magic to be able to browse real web pages on the second-generation iPhone? That was 3G, the mobile standard that became widespread in the mid-2000s. Modern 4G networks are several times faster. 5G will be vastly faster still. And when we can do things faster, we do them more, which means data piles up.
It’s already hard for most people to comprehend, much less control, all the information collected about them. And the leverage that accrues to data aggregators will just increase as we move into the era of AI.
Vladimir Putin is a technological pioneer when it comes to cyberwarfare and disinformation. And he has an opinion about what happens next with AI: “The one who becomes the leader in this sphere will be the ruler of the world.”
In a way, Putin’s line is a bit overwrought. AI is not a hill that one nation can conquer or a hydrogen bomb that one country will develop first. Increasingly, AI is simply how computers work; it’s a broad term describing systems that learn from examples—or follow rules—to make independent decisions. Still, it’s easily the most important advance in computer science in a generation. Sundar Pichai, the CEO of Google, has compared it to the discovery of electricity or fire.
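That definition—systems that learn from examples rather than follow hand-written rules—can be made concrete with a toy sketch. The example below is purely illustrative (it is not from the article, and the data and labels are invented): a minimal nearest-centroid classifier whose behavior comes entirely from the labeled examples it is trained on, with no decision rules coded by hand.

```python
# Toy illustration of "learning from examples": a nearest-centroid
# classifier. No rules are hand-coded; behavior comes from the data.

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest to the input."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Invented labeled examples: (hours online per day, mobile payments per week)
data = [
    ((1.0, 0.0), "light user"),
    ((2.0, 1.0), "light user"),
    ((6.0, 20.0), "heavy user"),
    ((8.0, 30.0), "heavy user"),
]
model = train(data)
print(predict(model, (7.0, 25.0)))  # -> heavy user
```

Feed the same code different examples and it makes different decisions—which is also why, as the authors note below, the side with more data tends to build the better system.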
A country that strategically and smartly implements AI technologies throughout its workforce will likely grow faster, even as it deals with the disruptions that AI is likely to cause. Its cities will run more efficiently, as driverless cars and smart infrastructure cut congestion. Its largest businesses will have the best maps of consumer behavior. Its people will live longer, as AI revolutionizes the diagnosis and treatment of disease. And its military will project more power, as autonomous weapons replace soldiers on the battlefield and pilots in the skies, and as cybertroops wage digital warfare. “I can’t really think of any mission that doesn’t have the potential to be done better or faster if properly integrated with AI,” says Will Roper, an assistant secretary of the US Air Force.
And these benefits may compound with interest. So far, at least, AI appears to be a centralizing force, among companies and among nations. The more data you gather, the better the systems you can build; and better systems allow you to collect more data. “AI will become concentrated, because of the inputs required to pull it off. You need a lot of data and you need a lot of computing power,” says Tim Hwang, who leads the Harvard-MIT Ethics and Governance of AI Initiative.
China has two fundamental advantages over the US in building a robust AI infrastructure, and they’re both, generally, advantages that authoritarian states have over democratic ones. The first is the sheer scope of the data generated by Chinese tech giants. Think of how much data Facebook collects from its users and how that data powers the company’s algorithms; now consider that Tencent’s popular WeChat app is basically like Facebook, Twitter, and your online bank account all rolled into one. China has roughly three times as many mobile phone users as the US, and those phone users spend nearly 50 times as much via mobile payments. China is, as The Economist first put it, the Saudi Arabia of data. Data privacy protections are on the rise in China, but they are still weaker than those in the US and much weaker than those in Europe, allowing data aggregators a freer hand in what they can do with what they collect. And the government can access personal data for reasons of public or national security without the same legal constraints a democracy would face.
Of course, data isn’t everything: Any technological system depends on a whole stack of tools, from its software to its processors to the humans who curate noisy inputs and analyze results. And there are promising subfields of AI, such as reinforcement learning, that generate their own data from scratch, using lots of computing power. Still, China has a second big advantage as we move into the era of AI, and that’s the relationship between its largest companies and the state. In China, the private-sector companies at the cutting edge of AI innovation feel obliged to keep Xi’s priorities in mind. Under Xi, Communist Party committees within companies have expanded. Last November, China tapped Baidu, Alibaba, Tencent, and iFlytek, a Chinese voice-recognition software company, as the inaugural members of its “AI National Team.” The message was clear: Go forth, invest, and the government will ensure that your breakthroughs have a market not just in China, but beyond.
During the original Cold War, the US relied on companies like Lockheed, Northrop, and Raytheon to develop cutting-edge strategic technology. Technically, these companies were privately owned. In practice, their vital defense mission made them quasipublic entities. (Indeed, long before the phrase “too big to fail” was ever used to describe a bank, it was applied to Lockheed.)
Fast forward to today, and the companies at the forefront of AI—Google, Facebook, Amazon, Apple, and Microsoft—don’t exactly wear flag pins on their lapels. This past spring, employees at Google demanded that the company pull out of a Pentagon collaboration called Project Maven. The idea was to use AI for image recognition in Defense Department missions. Ultimately, Google’s management caved. Defense Department officials were bitterly disappointed, especially given that Google has a number of partnerships with Chinese technology companies. “It is ironic to be working with Chinese companies as though that is not a direct channel to the Chinese military,” says former secretary of defense Ashton Carter, “and not to be willing to operate with the US military, which is far more transparent and which reflects the values of our society. We’re imperfect for sure, but we’re not a dictatorship.”
The Cold War wasn’t inevitable in 1945. The United States and Soviet Union had been allies during World War II, but then a series of choices and circumstances over a five-year period set the conflict on its self-perpetuating track. Similarly, as we can now see in the cold glare of hindsight, it was never inevitable that the digital revolution would inherently favor democracy. Nor is it inevitable today that AI will favor global authoritarianism to the permanent disadvantage of liberalism. If that scenario comes to pass, it will be because a series of choices and circumstances precipitated it.
In the original Cold War, two ideological foes created rival geopolitical blocs that were effectively non-interoperable. The US was boxed out of the Soviet bloc, and vice versa. The same could easily happen again, to disastrous effect. A new cold war that gradually isolates the Chinese and American tech sectors from each other would starve the US of much of the fuel it now relies on for innovation: American companies depend heavily on the Chinese market for their profits and for engineering and software talent. At the same time, it could actually create the kinds of dangers that hawks warn about now: It would increase the risk that one side could surprise the other with a decisive strategic breakthrough in AI or quantum computing.
Right now, maintaining a degree of openness with China is the best defense against the growth of a techno-authoritarian bloc. That’s not the way American leaders are headed, though.
A little over six months after Donald Trump’s inauguration—and his invocation of “American carnage”—the administration launched a sweeping investigation into China’s trade practices and alleged theft of US technology via cyberspace. That investigation has mushroomed into a steadily escalating trade war, with the US launching tariffs on billions of dollars of Chinese goods and new investment and export restrictions on technologies that China considers key to AI and to its advanced manufacturing ambitions.
The confrontation is about much more than trade. The Trump administration has made it official US policy to protect the “national security innovation base”—White House shorthand for America’s leading technology and talent—from China and other foreign economic predators. In January, Axios published a leaked White House presentation that recommended the US work with its allies to build a 5G network that excludes China, to prevent Beijing from grabbing “the commanding heights of the information domain.” The presentation likened the 21st-century struggle for data dominance to the WWII-era race to construct an atom bomb. Then in April, the US Commerce Department hit ZTE, a leading Chinese telecommunications equipment firm that was gearing up to work on China’s 5G network, with a seven-year ban on doing business with US suppliers; the department said ZTE had violated the terms of a sanctions settlement. (The US later lifted the ban.)
For US security hawks, the prospect that China might dominate both 5G and AI is a nightmare scenario. At the same time, Washington’s escalating pushback against China’s tech ambitions has made Xi even more determined to wean his country off Western technology.
This is a very different philosophy from the one that has guided the technology sector for 30 years, which has favored deeply enmeshed hardware and software supply chains. Shortly before Trump’s inauguration, Jack Ma, the chair of Alibaba, pledged to create a million jobs in the United States. By September 2018, he was forced to admit that the offer was off the table, another casualty in the growing list of companies and projects that are now unthinkable.
Global work in AI has long taken place in three spheres: research departments, corporations, and the military. The first sphere has always been marked by openness and cooperation; to a lesser extent, so has the second. Academics freely share their work. Microsoft has trained many of China’s best AI researchers and helped nurture many promising AI startups, and Alibaba, Baidu, and Tencent employ US engineers at their research hubs in Silicon Valley and Seattle. An AI-driven breakthrough in Shanghai—say, in diagnosing disease through more accurate scans of medical images—can save lives in Shawnee. But national security concerns have a way of overriding commercial considerations. For now, the political momentum appears to be driving the two countries’ tech sectors apart to such a degree that even collaboration between researchers and corporations could be stifled. The schism could well define how the struggle between democracy and authoritarianism plays out.
Imagine it’s 2022: America’s confrontational economic policies have continued, and China has refused to yield. Huawei and ZTE have been banned from the networks of the US and key Western allies. Through investment and theft, Beijing has reduced its reliance on US semiconductors. Rival tech superpowers have failed to develop common standards. US and Chinese academics increasingly deposit their cutting-edge AI research in government safes instead of sharing it at international conferences. Other countries—like France and Russia—have tried to build homegrown technology industries centered on AI, but they lag far behind.
The world’s nations can commit to American technology: buying Apple phones, using Google search, driving Teslas, and managing a fleet of personal robots made by a startup in Seattle. Or they can commit to China: using the equivalents built by Alibaba and Tencent, connecting through the 5G network constructed by Huawei and ZTE, and driving autonomous cars built by Baidu. The choice is a fraught one. If you are a poor country that lacks the capacity to build your own data network, you’re going to feel loyalty to whoever helps lay the pipes at low cost. It will all seem uncomfortably close to the arms and security pacts that defined the Cold War.
And we may be seeing the first evidence of this. In May 2018, about six months after Zimbabwe finally got rid of the despot Robert Mugabe, the new government announced that it was partnering with a Chinese company called CloudWalk to build an AI and facial-recognition system. Zimbabwe gets to expand its surveillance state. China gets money, influence, and data. In July, nearly 700 dignitaries from China and Pakistan gathered in Islamabad to celebrate the completion of the Pak-China Optical Fibre Cable, a 500-mile-long data line connecting the two countries through the Karakoram Mountains, built by Huawei and financed with a loan from China’s Export-Import Bank. Documents obtained by Pakistan’s Dawn newspaper revealed a future plan for high-speed fiber to help wire up cities across Pakistan with surveillance cameras and vehicle-monitoring systems, part of a “Safe Cities” initiative launched in 2016 with help from Huawei and other Chinese firms. China has effectively constructed its own Marshall Plan, one that may, in some cases, build surveillance states instead of democracies.
It’s not hard to see the appeal for much of the world of hitching their future to China. Today, as the West grapples with stagnant wage growth and declining trust in core institutions, more Chinese people live in cities, work in middle-class jobs, drive cars, and take vacations than ever before. China’s plans for a tech-driven, privacy-invading social credit system may sound dystopian to Western ears, but it hasn’t raised much protest there. In a recent survey by the public relations consultancy Edelman, 84 percent of Chinese respondents said they had trust in their government. In the US, only a third of people felt that way.
No one can be certain what happens next. In the US, in the wake of controversies surrounding the 2016 election and user privacy, a growing number of Republicans and Democrats want to regulate America’s tech giants and rein them in. At the same time, China has stiffened its resolve to become an AI superpower and export its techno-authoritarian revolution—which means the US has a vital national interest in ensuring that its tech firms remain world leaders. For now, there is nothing close to a serious debate about how to address this dilemma.
As for China, it remains unclear how much digital intrusion people there will tolerate in the name of efficiency and social cohesion—to say nothing of people in other countries that are tempted by Beijing’s model. Regimes that ask people to trade freedom for stability tend to invite dissent. And Chinese growth is slowing. For the past century, democracies have proven more resilient and successful than dictatorships, even if democracies, particularly in an age of algorithms, have made some stupid decisions along the way.
It is at least conceivable that Trump’s aggressive policies could, counterintuitively, lead to a rapprochement with Beijing. If Trump threatens to take something off the table that China truly cannot afford to lose, that could pressure Beijing to dial back its global tech ambitions and open its domestic market to US firms. But there is another way to influence China, one more likely to succeed: The US could try to wrap Beijing in a technology embrace. Work with China to develop rules and norms for the development of AI. Establish international standards to ensure that the algorithms governing people’s lives and livelihoods are transparent and accountable. Both countries could, as Tim Hwang suggests, commit to developing more shared, open databases for researchers.
But for now, at least, conflicting goals, mutual suspicion, and a growing conviction that AI and other advanced technologies are a winner-take-all game are pushing the two countries’ tech sectors further apart. A permanent cleavage will come at a steep cost and will only give techno-authoritarianism more room to grow.
Nicholas Thompson (@nxthompson) is editor in chief of WIRED. Ian Bremmer (@ianbremmer) is a political scientist and president of the Eurasia Group.








Sunday, October 28, 2018

The adaptable mind



A look at the skills that will be needed to survive in today's world. Yes, it is another list, and only five items long, but the topic is a very important one for schools today.


Thursday, October 18, 2018

A book you may like to read


Tim Elmore is someone I read regularly, and I have several of his books. I haven't read this one yet, but his books are generally very sensible and practical.


12 Huge Mistakes Parents Can Avoid: Leading Your Kids to Succeed in Life


You’re deeply committed to helping your kids succeed. But you’re concerned–why are so many graduates unprepared to enter the workforce and face life on their own? You’re doing your best to raise healthy children, but sometimes you wonder, am I really preparing them?
To help adults answer this question, Dr. Tim Elmore's latest book for parents equips parents and leaders to:
  • Lead their kids to succeed before and after graduation
  • Build resilience, resolve, purpose, and satisfaction in kids
  • Discover a relevant, practical framework for recognizing students' needs
  • Prepare students who can care for themselves
  • Develop emotionally-healthy kids who become thriving adults
  • Clearly see the potential of who and what your kids can be




Tim Elmore shows you how to avoid twelve critical mistakes parents unintentionally make. He outlines practical and effective parenting skills so you won't fall into common traps, such as:
  • making happiness a goal instead of a by-product
  • praising their beauty and intelligence
  • not letting them fail or suffer consequences
  • lying about kids' potential and not exploring their true potential
  • giving them what they should earn
This book is also an ideal discussion resource for faculty, PTAs, small groups, and reading plans.

Find out why thousands of organizations have sought out Tim Elmore to help them develop young leaders–and how you can improve your leadership and parenting skills to help your kids soar.

Sunday, October 14, 2018

These Are The Skills That Your Kids Will Need For The Future (Hint: It’s Not Coding)

The jobs of the future will involve humans collaborating with other humans to design work for machines, and value will shift from cognitive to social skills.
An education is supposed to prepare you for the future. Traditionally, that meant learning certain facts and skills, like when Columbus discovered America or how to do multiplication and long division. Today, curriculums have shifted to focus on a more global and digital world, like cultural history, basic computer skills and writing code.
Yet the challenges that our kids will face will be much different from the ones we faced growing up, and many of the things a typical student learns in school today will no longer be relevant by the time he or she graduates college. In fact, a study at the University of Oxford found that 47% of today's jobs are at risk of being eliminated over the next 20 years.
In 10 or 20 years, much of what we "know" about the world will no longer be true. The computers of the future will not be digital. Software code itself is disappearing, or at least becoming far less relevant. Many of what are considered good jobs today will be either completely automated or greatly devalued. We need to rethink how we prepare our kids for the world to come.
Understanding Systems
Applying Empathy and Design Skills
The Ability to Communicate Complex Ideas
Collaborating and Working in Teams

Understanding Systems

The subjects we learned in school were mostly static. 2+2 always equaled 4 and Columbus always discovered America in 1492. Interpretations may have differed from place to place and evolved over time, but we were taught that the world was based on certain facts and we were evaluated on the basis of knowing them.
Yet as the complexity theorist Sam Arbesman has pointed out, facts have a half-life and, as the accumulation of knowledge accelerates, those half-lives are shrinking. For example, when we learned computer programming in school, it was usually in BASIC, a now mostly defunct language. Today, Python is the most popular language, but it likely will not be a decade from now.
Computers themselves will be very different as well, based less on the digital code of ones and zeros and more on quantum laws and the human brain. We will likely store less information on silicon and more in DNA. There's no way to teach kids how these things will work because nobody, not even experts, is quite sure yet.
So kids today need to learn less about how things are today and more about the systems future technologies will be based on, such as quantum dynamics, genetics and the logic of code. One thing economists have consistently found is that it is routine jobs that are most likely to be automated. The best way to prepare for the future is to develop the ability to learn and adapt.

Applying Empathy and Design Skills

While machines are taking over many high level tasks, such as medical analysis and legal research, there are some things they will never do. For example, a computer will never strike out in a Little League game, have its heart broken or see its child born. So it is terribly unlikely, if not impossible, that a machine will be able to relate to a human like other humans can.
That absence of empathy makes it hard for machines to design products and processes that will maximize enjoyment and utility for humans. So design skills are likely to be in high demand for decades to come as basic production and analytical processes are increasingly automated.
We've already seen this process take place with regard to the Internet. In the early days, it was a very technical field. You had to be a highly skilled engineer to make a website work. Today, however, building a website is something any fairly intelligent high schooler can do and much of the value has shifted to front-end tasks, like designing the user experience.
With the rise of artificial intelligence and virtual reality our experiences with technology will become far more immersive and that will increase the need for good design. For example, conversational analysts (yes, that's a real job) are working with designers to create conversational intelligence for voice interfaces and, clearly, virtual reality will be much more design intensive than video ever was.

The Ability to Communicate Complex Ideas

Much of the recent emphasis in education has been around STEM subjects (science, technology, engineering and math) and proficiency in those areas is certainly important for today's students to understand the world around them. However, many STEM graduates are finding it difficult to find good jobs.
On the other hand, the ability to communicate ideas effectively is becoming a highly prized skill. Consider Amazon, one of the most innovative and technically proficient organizations on the planet. A key factor in its success is its writing culture. The company is so fanatical about the ability to communicate that developing good writing skills is a key factor in building a successful career there.
Think about Amazon's business and it becomes clear why. Sure, it employs highly adept engineers, but to create a truly superior product, those people need to collaborate closely with designers, marketers, business development executives and so on. To coordinate all that activity and keep everybody focused on delivering a specific experience to the customer, communication needs to be clear and coherent.
So while learning technical subjects like math and science is always a good idea, studying things like literature, history and philosophy is just as important.

Collaborating and Working in Teams

Traditionally, school work has been based on individual accomplishment. You were supposed to study at home, come in prepared and take your test without help. If you looked at your friend's paper, it was called cheating and you got in a lot of trouble for it. We were taught to be accountable for achievements on our own merits.
Yet consider how the nature of work has changed, even in highly technical fields. In 1920, most scientific papers were written by sole authors, but by 1950 that had changed and co-authorship became the norm. Today, the average paper has four times as many authors as it did then and the work being done is far more interdisciplinary and done at greater distances than in the past.
Make no mistake. The high value work today is being done in teams and that will only increase as more jobs become automated. The jobs of the future will not depend as much on knowing facts or crunching numbers, but will involve humans collaborating with other humans to design work for machines. Collaboration will increasingly be a competitive advantage.
That's why we need to pay attention not just to how our kids work and achieve academically, but how they play, resolve conflicts and make others feel supported and empowered. The truth is that value has shifted from cognitive skills to social skills. As kids become increasingly able to learn complex subjects through technology, the most important class may well be recess.
Perhaps most of all, we need to be honest with ourselves and make peace with the fact that our kids' educational experience will not--and should not--mirror our own. The world which they will need to face will be far more complex and more difficult to navigate than anything we could imagine back in the days when Fast Times at Ridgemont High was still popular.


Sunday, October 7, 2018

What Is Success? Redefining School


This is a post that I was sent recently from a blog, Learning and Leading in a New World; it completely supports the way we think at ISHCMC.



"Due to a number of different things in my life, I only completed my degree a couple of years ago at the ripe old age of 49. I've also recently got into postgraduate study, with the eventual aim of a master's qualification. While I love reading, thinking and dialogue about all things education, am I doing it because I sincerely think it's going to make me a better educator or leader? No, I'm not. I'm doing it because it may open doors for me that would be closed without that qualification on paper. Despite the fact that I've been an educator for 20 years and in senior leadership and principalship for 22 years in some widely varied and interesting places that have all helped me become the educator and leader I am, that one piece of paper would mean more than all that successful experience to some people. Why is that?







I have a brother who barely finished high school. He spent considerable time in his last year at secondary school playing spacies, which was actually helping him regain coordination after a sizeable brain tumour was diagnosed and removed when he was 16. He is now a very successful businessman, having run many small and medium businesses in his life; he has a comprehensive housing portfolio and is positioned to have a significant and influential role in one of our country's political parties. Successful? Many would say highly so.

I have a sister who spent most of her high school years in a lot of trouble; the clichéd sex, drugs and rock and roll comes easily to mind. I'm not sure what school qualifications she emerged from the system with, but I can't imagine they were startling. She is, today, an accomplished self-employed business owner, supplementing her hairdressing salon with the most incredible pieces of art, which she sells as a sideline. She loves people, connects effortlessly with a wide range of others and has the most varied kinds of people forge connections with her. Successful? Absolutely.

I have a daughter who excelled in visual art and drama at secondary school. Right through to the end of secondary school, how many times was I told she needed to do "real" subjects if she was going to be successful in life? When she left school, didn't know what she wanted to do and worked in hospitality for a few years, how many times was I told to make her go and study or she'd never do it? As if that was going to be the measure of her success or otherwise in life. In actuality she did go and study at tertiary level, but years later, when she'd actually figured out what she was truly passionate about. And this young person who hadn't really got or loved the "real" subjects at school studied science and law and loved it, because it had a context that was meaningful to her. And now she is onto her third career, all careers in which she has had to care deeply and compassionately about living creatures, animals and other human beings. How better to measure the success of the child you raised than in how they care for other living things in their careers, and in what a loving partner she is to her husband? I could not be prouder of the successful person she is.

I have a friend who has left jobs without a new one lined up on more than one occasion because the institution he was working for did not mesh with his own value system. I personally think he is both highly successful and a passionate advocate for what he does today because he proved to himself, and others, that he is highly principled and prepared to stand behind those beliefs and values in a way that others might decry as being a quitter or showing a lack of perseverance.





I often write messages- blogs, twitter etc, that are aimed at other educators and what needs to change in the institutionalism of schooling, but this post is aimed at my wider community- the community of family and friends I have who are not educators.

The above stories and hundreds of similar ones illustrate why it is imperative that all of society, not just educators, re-evaluate what success in the school system actually is, and indeed what the purpose of the school system is. If success in the school system is not necessarily reflective of success in life, what do we need to change?

Educators have been talking about this stuff, or some of us have, for years. We can’t do it alone. Because to adapt or change the school system means we have to adapt and reimagine the role of schooling in society and the role of society in schooling. 

It’s not simple like extend the school day or school year and make kids do more of the same. That’s just making them get better at the wrong things. It’s highly complex and means we have to deeply and carefully examine all our carefully construed biases. Our bias about what success in life constitutes. Our biases about the role school does or does not play in that success. Our biases about whether School is a place to gain a qualification or a place to hone what being a member of society means to us. A place to be docilely compliant to the adults in power and control or a place to work out what our values are and how to be successful in applying them to whatever we turn our hand in life to. Our biases about the role that qualifications do or do not play in the success of our lives.

We need your support when we try to do things differently; we do not need calls for "back to basics" or statements like "it was ok for me, it didn't do me any harm."

It was ok for you because the society you went into was vastly different.

I lived for the first 30 years of my life without the internet. My daughter has never known a world without it. The internet alone has changed our lives incredibly. It's not ok for school to be the same as it was for you. It may not have harmed you, but it is also unlikely to have prepared you for society as it is today.

The new basics are very different from the old basics. Your life is irrevocably different due to developments in the internet and technology, in transportation and communication. Why do some still support school being the same as it was? The basics needed to survive and thrive in life today are different from what they were in the past.





We do not need to hear "bring back the cane." We don't want to raise young people who think it is ok for someone in power to humiliate them and hurt them in order to coerce them into doing what that person thinks is right.

We want to develop young people who care deeply and problem solve and fix the problems we’ve caused in the world today.





We want to raise young people who don't believe everything everyone tells them. We want to grow people who are discriminating about what they believe. We want them to have principles and to be prepared to live by those principles.

Next time you think young people have no staying power and should stick with things they don't agree with because it's good for them, think about what criteria you are using for "good."

So next time you hear about schools trying different things, question why.

When schools are trying to move away from subjects to a problem based approach that integrates subject knowledge and skills into solving big problems or delving into deep issues they are trying to prepare young people for approaching big world problems rather than memorising chunks of content in discrete and disconnected ways.

When schools are moving from tight to broad age groups they are trying to be more like society is. Where else, ever, do we segregate ourselves based on such tight age groupings? As adults do we only play with or work with or learn with other people within 12 months of our birthdate? Why do we continue to think children learn best in this segregation?

When schools are trying to develop self-managing learners who will be able to direct themselves in society and work, why do some call for them to just do what they are told? If we don't develop those self-determination skills at a young age, we will have groups of adults waiting to be told what to do, like factory workers of the past, not the active problem solvers we need to preserve society and our environment moving forward.

When schools are trying to be collaborative, they are trying to help our young people learn that we will be able to progress much further and more effectively if we work together as a team instead of as row after row of single units. Together we will make a better world for all of us than any of us can make alone.

When schools are trying to use space flexibly and you get confused because that doesn't look like school as you remember it, think about what else still looks the same as it did when you were at school.

And while we are at it, as well as re-evaluating what makes a young person successful and what role schooling plays in that, let's also re-evaluate our definition of what makes a successful school. Next time you read a media beat-up or a magazine's list of school rankings naming the most successful schools, take a bit more time to interrogate the criteria of success being applied, and even more time to deeply consider whether those success criteria are going to mean anything in the lives of those young people in 5 or 10 years' time.

Please take some time to consider what success in life means to you. Then consider how your current understanding of success in school matches this, and whether you need to spend some time redefining it in today's context in your own mind, so that you can join us in understanding, and helping others to understand, why schooling as we knew it has to change. And change fast. And change significantly.

Many of us inside the system are trying to change it. Vastly change it, not just tweak it a little. We need your help, and even more importantly your understanding. We need your support in our activism and we need you to talk about this with everyone. These changes aren’t just about and for the school system. They are about and for society and we need to spread this message widely."

Friday, October 5, 2018

It's Not Cyberbullying, But ...


It's Not Cyberbullying, But ...
A student sees a group of girls coming toward her in the hallway. One has been her best friend since second grade, but she doesn't know the others very well. She says hi to them as they pass. They all ignore her or roll their eyes, including her friend. A few lockers down, they whisper to each other while they stare at her and laugh behind their hands.
While we can all agree the girls in this situation are being mean, can we call this bullying?
These "IRL" (in real life) scenarios happen all the time, and they often carry over into the online world. And though insults, exclusion, and even all-out aggression don't always meet the technical definition of cyberbullying -- ongoing, targeted harassment via digital communication tools over a period of time -- they still hurt.
The best remedy for all these issues is prevention and education: Teaching kids what it means to be kind and respectful and a responsible digital citizen can nip lots of trouble in the bud. But when and if problems start, it's good for parents to understand what's happening -- and how to help.
So, other than straight-up cyberbullying, what are some other reasons our kids might be bummed by others' online behavior?
Ghosting. When friends cut off online contact and stop responding, they're ghosting. Refusing to answer someone's texts or Snaps is actually a way of communicating during a shift or upheaval among a group of friends. Often, instead of ever addressing the issue head-on, kids will just ignore the targeted person.
  • How to handle it. Being ignored is tough. Instead of relying on the old parent standby, "If they're ignoring you, they're obviously not your real friends," try to empathize and validate your kid's feelings. If they're willing, encourage them to try a face-to-face conversation with the ghosters. If that feels too hard, suggest your kid stop trying to get replies; the ghosters may come around, but if not, your kid is free to move on.
Subtweeting. When you tweet or post something about a specific person but don't mention them by name or tag them, you're subtweeting. Usually, subtweets are critical or downright mean. Since the target isn't tagged or even named in most cases, they might not know it's happening until someone clues them in.
  • How to handle it. If your kid finds out someone is subtweeting them, they have a few options depending on the perpetrator. If it's a friend who's suddenly turned on them, it's a good time to address it face-to-face. If it's someone they don't know well or have a conflict with, it's best to ignore it. Engaging in a Twitter war (or conflict on any other platform) usually escalates the problem.
Fake accounts. Sometimes kids will create fake accounts in someone else's name and use that account to stir up trouble or hurt that person. In most cases, there's no way to trace who created the account, and even if it's shut down, the person can just create another one.
  • How to handle it. Dealing with fake accounts can feel like a game of whack-a-mole. But a kid who's targeted should actively defend themselves by blocking and reporting it. Kids should also let friends know what's happening to set the record straight -- and take some of the fun out of it for the person creating the accounts.
Sharing embarrassing posts and pics. Taking selfies and group pics is a normal part of tween and teen life. But sometimes kids take pictures of each other that, while fun in the moment, are potentially embarrassing if widely shared or cruelly captioned. Often this is done by someone who thinks they're being funny or assumes everyone will get the joke. But pictures or compromising posts can make the rounds in a hot minute, so no matter the intentions, the shame can stick.
  • How to handle it. It's best if kids get in the habit of asking each other for permission to share photos. But that won't always happen. Remind kids to think about the impact the photo will have on others before they post it. Kids can also ask their friends to take down embarrassing pictures as soon as they know they're public. If the image has already made the rounds, they may not be able to chase down every copy. But you can reassure kids that everyone will likely move on to the next piece of news and forget about it soon.
Rumors. Social media is a perfect venue for the rumor mill, so lies can go far and wide before the target even knows what's happening. And once the fake news is out there, it's pretty impossible to reel it back in.
  • How to handle it. Your kid's response depends on the type of rumor. If it's something that involves other people -- like a rumor that your kid stole someone's significant other and that has led to threats -- you may need to get the school involved. If the rumor is embarrassing or hurtful but isn't likely to cause a fight, it's fine for your kid to post a response. Coach them to respond just once and ignore the comments. Otherwise, they can refute the rumor in person when it comes up and wait for everyone to move on.
Exclusion. A kid may be scrolling through their feed and stop cold at a picture of all their friends together -- without them. Usually, these kinds of photos aren't intentional slights. But sometimes they are. And if the person who posted the picture knows your kid follows them, there's -- at the very least -- a lapse in judgment.
  • How to handle it. Responding online probably won't get the best results. Encourage your kid to approach the original poster face-to-face and explain that the photos hurt their feelings. It's best if your kid can use "I" statements, like "I felt really hurt when I saw that picture … " (not "I think you're a jerk"). If your kid can express their emotions honestly, they'll probably discover it was just a careless oversight. If it was a deliberate jab, then your kid should probably unfriend the OP (original poster).
Griefing. Remember those kids on the playground who always whipped the ball at other kids and called them names? Those kids play multiplayer video games, too. But instead of whipping a ball, they kill your character on purpose, steal your game loot, and harass you in chat. Online, that behavior is called "griefing." If your kid plays multiplayer games with chat, they're bound to run into it at some point.
  • How to handle it. Before your kid starts playing a game with anonymous strangers, make sure they know how to report and block players who are being cruel on purpose. Tell your kid not to get into an argument over chat, since it probably won't resolve anything and could escalate the aggression. Certain games tend to have more toxic behavior than others, so encourage your kid to try a different game where the community is known to be respectful and the moderators don't tolerate trash-talking.
Hate speech. Teens encounter hate speech even more than cyberbullying. This kind of language is similar to cyberbullying, but it's targeted to hurt someone based on personal traits such as race, ethnicity, nationality, religion, disability, sexual orientation, gender identity, or belief system. And unlike the persistent cruelty of cyberbullying, it can be a one-time thing. Even if your kid isn't the object of the posts or comments, they may feel the impact if they're a part of the targeted group.
  • How to handle it. If your kid encounters hate speech online, it's OK for them to post a matter-of-fact, one-time response refuting it. But they shouldn't get involved in a flame war. Check in with your kid about the kinds of attitudes they see expressed online. If they're seeing a lot of hurtful language, encourage them to seek out alternative feeds -- especially ones from supportive online communities. And if it's something really painful or that makes your kid feel humiliated, offer strong counter-messages. If your kid knows the person who posted hate speech -- such as another student at school -- you can gauge whether to get others (administrators and other parents) involved. Hate speech can have very real consequences in the real world, depending on the context and whether threats are involved.

Friday, September 14, 2018

The Robots are coming and they want your jobs


Experts believe that almost a third of the global workforce will be automated by 2030. But are universities preparing students for the rise of the office machines?

Had you popped into the equity trading floor at Goldman Sachs' New York headquarters in 2000, you would have walked into a bloodbath of the senses: 500 men and women projectile swearing, phones blaring, the dizzying aroma of adrenaline oozing from every human orifice. These days, you might just make out the lifeless whir of 200 high-speed servers over the ticking clock. Because those 500 people have been whittled down to three. The other 497 have been usurped by complex algorithms. 
These were not working stiffs: cleaners, receptionists or other service-industry hirelings already humbled by computers. They were university graduates with hard-fought degrees in subjects like business, finance or economics. Trouble was, for all their brainpower, passion and pedigree, algorithms just did the job better. They aren't the only victims. The computers, now, have caught the scent of blood.
"A lot of people assume automation is only going to affect blue-collar people, and that so long as you go to university you will be immune to that," says Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future. "But that's not true, there will be a much broader impact."
This raises the question: as we move toward the brave new automated world, is a university degree in, say, economics, philosophy, English or anything else that isn't to do with fixing cobots (collaborative robots) or writing algorithms worth the PDF file it was exported on? Or is it, practically speaking, useless? And if so, what are universities doing about it?
"Most universities are simply not doing enough to prepare students for the automated workforce," says Nancy W Gleason, PhD, director of the Centre for Teaching and Learning at Singapore's Yale-NUS College, and the author of Higher Education: Preparation for the Fourth Industrial Revolution. "We need to teach students to be cognitively flexible, to have the skills and confidence to try different jobs throughout their lives. In the gig economy, you're not going to have seven employers, you're going to have seven careers. People might say, 'Oh my degree in history didn't do me any good.' Well, guess what, neither will a degree in radiology, dentistry or law."
This is not a joke. Last year, a report by McKinsey Global Institute suggested that up to 800 million careers (or 30 percent of the global job force) – from doctors to accountants, lawyers to journalists – will be lost to computers by 2030, while every single worker on earth will need to adapt "as their occupations evolve alongside increasingly capable machines". Others suggest this number may be as high as 50 percent. "Machines are taking on cognitive capability, beginning to compete with our ability to reason, to make decisions and, most importantly, to learn," adds Ford. "At least over the next couple of decades, AI and robotics are going to eliminate huge amounts of jobs. Beyond that, it gets more unpredictable; we really don't know what's going to happen."
*
To find out more, I contacted 25 of the world's leading universities to ask what, if anything, they are doing to prepare students for the choppy waters of fluid work. Of America's eight Ivy League schools, only Dartmouth College had something to say; the rest either did not reply, were too busy or couldn't find the proper person for me to speak to. And of the eight UK universities I approached, the London School of Economics and University of Sheffield did not reply, while Leeds and Birmingham both couldn't find anyone suitable to comment. A press officer for the University of Cambridge said she wasn't "aware of anything Cambridge-specific".
Oxford, Bristol, Manchester and City, University of London, however, all got back to me. "Next year, we'll be introducing an interdisciplinary course unit that all of our undergraduates can take, and which looks at exactly this issue," said Caroline Jay, PhD, a senior lecturer in computer science at the University of Manchester.
According to its overview, the course, called AI: Robot Overlord, Replacement or Colleague?, aims to "equip Manchester graduates from all disciplines with an understanding of the impact this technology currently has, the way this is likely to change in the future and, crucially, the ability to grasp the opportunities it brings, whatever your chosen career."

"The whole point of universities is to equip people with the skills to learn," adds Jay. "Students are not just here to learn a set of facts, but to learn how things change, evolve and how they can fit into that future."
The University of Bristol takes a broader view. "If the economy is becoming more of a gig economy, preparing students to become entrepreneurial is something we take very seriously," says Dave Jarman of the university's Centre for Innovation and Entrepreneurship.
So the university has built Bristol Futures, a new initiative that offers a range of open online courses designed to provide "the opportunity for the development of core academic skills and key personal attributes to help students become adaptable, successful graduates". The courses currently offered – Innovation and Enterprise, Global Citizenship and Sustainable Futures – are not degrees per se, but run alongside a student's chosen subject.
"This is our long game," says Jarman. "We're looking at how we smuggle those ideas into anything from classics to chemistry. Of course, sometimes changing practice in a university is like turning round an oil tanker in a phone box, but we're in that process."
Dirk Erfurth, the careers service director at the University of Munich (LMU), in Germany, agrees. "You cannot expect every professor in every faculty to take these issues as their most serious concerns. That is not their task. It is our task in the careers service, as the bridge between the labour market and the academic world."
He says LMU offers funded overseas internships, mentoring programmes and holiday-season mini-courses (€95 (£85) for 40 hours of class time) in subjects like presentation and rhetoric, leadership, time management and communication, and conflict management, as well as a "professional education unit" for former students looking for a skills bump. Erfurth says LMU takes students' future employability very seriously, as long as the students are prepared to play the game.

"This is not about grades or certificates," he adds. "We want to show students that, if you invest a little bit of time and money in your skills, wonderful things can happen to you. You have to leave your comfort zone and go out into the world, to distinguish yourself from others, take internships, develop your open-mindedness, creative thinking, curiosity, networking and entrepreneurial spirit. Those are the skills that will make you employable in the future." This is what the University of Copenhagen calls an "interdisciplinary skills profile".
"We aim to improve students' opportunities to exploit the potential of digitalisation and big data both across the university and with our collaboration partners," says the university's vice-provost, Anni Søborg, echoing much of what I've already heard. "And we make explicit how programmes can be applied in the job market, including a focus on initiatives that ensure students have the requisite skills for innovation and entrepreneurship."
And so, over to America, which Dr Gleason says is "doing very little in higher education relative to other countries". "The truth is, we don't actually know all the jobs we are preparing students for," says Dartmouth's associate dean for the sciences, Dan Rockmore. "Dartmouth is the premier liberal arts university in the world. The liberal arts ethos is that a well-rounded and broad education, an exposure to the multidimensional nature of the great challenges of our day, are what prepares a mind for the unpredictable challenges of the world post-graduation. We aim to teach critical thinking, habits of mind that can be brought to bear in many different contexts."
He then pointed to the Dartmouth Entrepreneurial Network, which gives students "the opportunity to try out ideas for and in the 'new economy'", along with its "flexible quarter" system that gives students the "opportunity to experience the workplaces of the new economy" all year round. "In short, a Dartmouth education will prepare students to take advantage of those [technological] transformations."
The key point here is that all these courses are optional. No students are forced to take them, and they offer no future-proofing guarantees. But then, is it really a university's responsibility to hold students' hands throughout their lives? Or is it, really, up to students?
"I would say this is like a gym membership, not a butler," says Jarman. "You don't pay your money and the goods turn up. You pay for an opportunity, but you've got to go in and lift the weights and run the distance. If you do those things, universities have got amazing facilities and people that can help you accelerate that process. But it doesn’t land on a plate."
University students – as Jonathan Black, the director of university career services at Oxford University, is keen to point out – are adults after all. "One of the things Oxford, and other universities, endeavour to do is to persuade people who are perfectly bright enough to benefit from a university education to consider our many extracurricular services, such as the careers department, student societies, volunteering or work experience in the summer. That's where they're going to get that experience, but they’ve got to realise they're getting it."
He went on: "But we're not going to tell students what to do. I think we'd be doing students a disservice if we hold their hand all the way until the end and then say, 'Here's your job.' We're here to lay the table, show students what's available, but it's up to them to decide if they want to eat."
The truth is, what keeps most university presidents up at night is not the robocalypse, but shorter-term threats to their survival, like competing for endowments and enrolment. But there is one university president whose dreams are overrun by robots. That, Joseph E Aoun says, is his advantage: robots cannot dream. The president of Northeastern University (NU) in Boston has developed a strategy to fight back. He calls it "humanics".
"If robots are going to replace human beings in the workplace, then we need to become robot-proof," he says. "The rise of extraordinary artificial intelligence requires us to cultivate extraordinary human intelligence. Even today's most brilliant machines still have limitations. Machines do not yet have a capacity for creativity, innovation or inspiration."
His idea, essentially, is to give students the ability to solve the world's most urgent issues in a way that robots cannot – with empathy. Or, as he puts it: "I've not yet seen a computer cry."

Laid out in his book, Robot-Proof: Higher Education in the Age of Artificial Intelligence, humanics has become a staple of Northeastern's curriculum, requiring computer science majors to, say, take side classes in theatre or improvisation. "Why? Because it allows them to start interacting with others, which is a simplistic but vital example of getting people to go beyond what they're studying," he says. "Human interaction is going to be a vital skill in the future."
Aoun argues that the only way to create a curriculum for a "robot-proof" education is by fostering "purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship".
But, he says, experiential learning is also essential, and so NU has developed an acclaimed co-operative education and career-development programme called Co-op. "We have a network of 3,000 employers in 136 countries on all continents, including Antarctica, where the students apply for paid jobs for six months," he says. "There, they get the unique opportunity to learn how people interact in the workplace, what opportunities look like, what it's like to work in a different cultural setting; they start understanding themselves better. That is powerful and transformational."
The numbers speak for themselves: most students do two or three co-ops throughout their college years, and 92 percent of them find full-time work within nine months of graduating.
*
The flood of automation is coming. But Aoun and Gleason say simply teaching students to swim – as the handful of universities I spoke to are beginning to do – will not save them from drowning eventually. Instead, they agree, we need to build an ark. "We must move away from the idea of a university degree being front-loaded in the first 18 to 24 years of your life," says Gleason. "Instead of a three- to four-year model, students should be admitted for 20 years with the ability to come back and take classes for free whenever they want."
That is exactly what both NU and NUS, where Gleason works, are doing. NUS, for example, has launched two government-supported "lifelong learning institutes", where graduates can return at any stage of life to "upskill" in hundreds of courses – long and short – from psychology to Arabic, "business agility" to "cyber security for the internet of things". "We are looking at stacking courses together to re-skill adults," Gleason says. "It's a long road ahead, but the real low-hanging fruit is more experiential learning, and fewer lectures."
As for NU, Aoun has overseen the establishment of a lifetime-learning network of campuses in Charlotte, North Carolina, Seattle, Silicon Valley, Toronto and San Francisco, where members can return to learn new skills. "Seventy-four percent of the population are what we call 'non-professional learners'," he says. "Ignore them and universities will become irrelevant. If we don't step in and integrate lifelong learning as part of our core mission, we become like the railway industry that saw the onset of the airline revolution and said, 'This is nothing to do with us.' They didn't see themselves in the transportation business, and their business suffered as a result."
None of this, of course, comes cheap. NUS and NU are both well-funded institutions. Gleason suggests a tax on robots would cover it. If not, industry needs to step up and cough up. "I don't see why industry shouldn't," she adds. "It's not like they won't be profiting from some of the jobs that go away."
So what, in the meantime, can students who don't go to NUS or NU – or one of the world's few other universities with similar ideas – do to future-proof their careers? The answer, really, is to become as human as humanly possible. We need to fight back with feelings. "The future labour market needs not content experts or information processors," says Gleason, "but creators, analysers, problem solvers, collaborators and lifelong learners who are able to acquire new skills as old ones quickly become obsolete. The best place to learn those skills is in the liberal arts."

Maybe, as a start then, that degree in philosophy or English isn't such a bad idea after all.