The Future of Work and the ‘Hyperbole Curve’
The perils and promise we imagine the future to hold are like a mirage on the horizon, reflecting a time that never really arrives. It is the perfect canvas for us to project our hopes and fears onto, always ahead, ominous or inviting.
The result is that we fail to attend to the present and our recent past, and the clues they might offer to validate or diminish our fears and hopes.
Our organizations and institutions are no different, made up as they are of imperfect and irrational individuals. Even in the face of clear examples to the contrary we often persist in assuming organizations to be rational actors, taking action based on the best information, dispassionately assessing the costs and benefits of various options before making a decision.
The truth is that organizations get swept up in fears and dreams just like individuals do. They are not immune to hysteria, the bandwagon effect, or the whims of a leader who read the latest HBR article as a prescription for whatever ails the company (or seems to, just at the moment).
Y2K (which cost businesses $300 billion to prepare for, quite probably unnecessarily) and the much-feared Baby Boomer retirement exodus are two recent examples of widespread fears that never really came to be.
Which of our current fears is similarly misplaced? A few ideas:
According to many, AI is the means by which robots will steal our jobs, leaving us to trade bird-bone necklaces for cricket soup in a Mad Max-like hellscape. Or maybe not…
The theory is that the ongoing advances in artificial intelligence will render many of the tasks and jobs we do now obsolete, and that we should fear the rise of an intelligence capable of doing even the most creative and complex jobs.
If dreams of Cyberdyne Systems keep you up at night, then this article from Kevin Kelly, The AI Cargo Cult: The Myth of a Superhuman AI, might make you feel a little better. In it, Kelly argues that the entire notion that advances in AI will beget a superhuman intelligence is based on flawed beliefs about the nature of intelligence and its value in solving humanity’s problems.
Inherent in fears about a superhuman intelligence “smarter than humans” is the assumption that intelligence is a one-dimensional metric, which Kelly asserts is false:
“We will soon arrive at the obvious realization that “smartness” is not a single dimension, and that what we really care about are the many other ways in which intelligence operates — all the other nodes of cognition we have not yet discovered.”
Likewise, the belief that an advanced AI would be a “generalist intelligence” rests on the assumption that we humans have a generalist intelligence ourselves. Kelly contrasts this with Google search, which embodies a specialized kind of artificial intelligence in a single skill: scanning data on millions of web pages to return relevant results to a user. Again, Kelly asserts that this assumption has yet to be proven:
“Because we believe our human minds are general purpose, we tend to believe that cognition does not follow the engineer’s tradeoff, that it will be possible to build an intelligence that maximizes all modes of thinking. But I see no evidence of that.”
Kelly asserts that AI is already here, filling in gaps and specializing in forms of intelligence that are beyond us as humans, but no more likely to take over the universe or solve our most challenging societal problems than Google search is.
“Yet non-superhuman artificial intelligence is already here, for real. We keep redefining it, increasing its difficulty, which imprisons it in the future, but in the wider sense of alien intelligences — of a continuous spectrum of various smartness, intelligences, cognition, reasonings, learning, and consciousness — AI is already pervasive on this planet and will continue to spread, deepen, diversify, and amplify.”
Fears that humans will be made obsolete by AI are perhaps overblown, which is not to suggest AI won’t bring significant changes to our personal and professional lives. It’s likely time for us to take a deep breath and try to look past the hysteria on this. If you’re not yet convinced, check out this Gartner blog post analysis of the AI ‘hyperbole curve’.
Everyone is going to lose their jobs and we’ll all be free agents juggling whatever tasks the techno-aristocracy throws our way. As I’ve written about before, most countries’ laws lag far behind the realities of modern work. Gig or platform workers and virtual workplaces aren’t contemplated in most of our employment laws, exposing workers to exploitation and workplaces to liability. But don’t worry, the laws will catch up. When they do, they’ll be unwieldy, confusing, and stifling for everyone involved, and they’ll miss the mark… but then, so does most employment law.
Millennial Job Hoppers Will Destroy Workforce and Succession Planning
“Millennials are opportunistic job-hopp–” Nope. “But they think they’ll get a promotion in 6 mont–” Nope. The data (here and here) doesn’t support this specific accusation, nor does it support the utility of generational stereotypes generally. We need to let this one go. I have a theory that Millennials have been a convenient scapegoat as organizations and HR have begun to notice that linear processes like traditional succession planning break down in complex environments, where the need for skills and knowledge changes in sometimes unpredictable ways and can sometimes be better addressed by solutions other than full-time employees.
Predicting the unpredictable and influencing individual behaviour over an extended period of time was never possible. That’s why many organizations’ succession planning is an expensive joke. The question of organizational capability and adaptability remains salient, and I don’t suggest that we abandon the investment of time, money, and thought in developing individuals and assessing their interest and readiness. But the idea that succession planning worked great until those job-hopping Millennials came along and ruined everything is inaccurate and ridiculous.
I don’t think we can help but fear the future, but we ignore the history of hyperbole and organizational hysteria at our peril. Not only does surfing the latest wave of fear erode our credibility and cost our organizations time, energy, and money, but we may miss a real and unexpected danger while our gaze is set on the mirage at the horizon.
Read This Week:
The Intimacies of Remote Working – Andrew Speer: I love this post from my colleague Andrew about working on our completely virtual/remote team. It’s a great peek into what it’s like to work at Actionable and how we’re trying to build a cohesive, world-class virtual workplace.
“Countless articles and think pieces make the claim that “remote working is the future.” But it wasn’t until I experienced remote working done well that I believed the hype. An effective virtual environment is not an easy thing to achieve, but when worked on day in, day out, it can be extraordinary.”
Women in Tech Speak Frankly On Culture of Harassment – New York Times, Katie Benner: Heart-breaking and infuriating. And important to read to further our understanding of the underlying culture that gave rise to organizations like Uber. This is a systemic problem.
“Many of the women also said they believed they had limited ability to push back against inappropriate behavior, often because they needed funding, a job or other help.”
Are you still worried about AI? What other organizational fears did I neglect to mention? How many of you had to Google Y2K?
Image Credit: Scott Webb via Unsplash.com