Do you suffer from FUD - fear, uncertainty and doubt?
I often find myself in conversations about the evolution of artificial intelligence (AI) as well as the perceived threats and benefits of an AI-first world. Almost everyone who engages in such conversations is partially motivated by FUD relating to the impact of AI on individuals, work, organisations, society and even the future of the human species.
New technologies have always been accompanied by apprehension and alarmism before their benefits become apparent and they are accepted at speeds that were never expected. FUD is inevitable, as is its annoying ability to delay the adoption of new technologies, and progress. Just as inevitable is that the FUD surrounding a new technology is eventually overcome.
The FUD cycle
I have learned in life and business that there is no point worrying about an inevitable future once the factors that will bring it about have crystallised. Energy is best spent on embracing that future and developing strategies to extract the most value from it, or minimising any negative impact.
In this article I explain why AI-related FUD should be minimised and how the acceleration of AI adoption will overcome AI FUD more quickly than in any previous technology adoption cycle.
The Cycle of Scepticism and Acceptance
Throughout history, breakthrough innovations including cars, electricity and cloud computing faced initial scepticism and uncertainty before becoming integral to modern life.
When electricity first flickered into being in the 19th century, its early systems frequently caused fires and electrocutions. The war of the currents saw Edison, who had invested heavily in DC power, throw FUD at the suppliers of more scalable and useful AC power. Leading experts deemed electricity unsafe for widespread residential use and AC power was used to develop the electric chair and to electrocute elephants in public demonstrations of its danger to life. Edison eventually adopted AC and within 50 years, access to metered and affordable electricity generated at scale and distribution networks had changed the world forever.
Pioneers in the computing age like IBM’s Thomas Watson questioned the technology’s viability. In 1943 Watson speculated that only five computers would find a global market. Today over 1.5 billion personal computers connect workforces and families worldwide.
Legacy computing infrastructure suppliers said that cloud computing, a market I participated in pioneering in the UK, was neither secure nor flexible. Today most computing workloads are hosted in the so-called public cloud by Amazon Web Services, Microsoft Azure or Google Cloud. It rarely makes sense to build your own infrastructure, especially for the growing number of AI-related workloads.
In hindsight, early doubters and those propagating FUD always appear short-sighted, but they can also help to ensure that new technologies are safe, useful and properly regulated before mass adoption. Healthy scepticism can act as a balance to the unchecked exuberance and profiteering of those bringing a new technology to market.
AI Anxiety and the Next FUD Cycle
Modern AI evokes even more concern regarding its potential impact on employment, privacy and even humanity’s future. Numerous science fiction films, in which humans and AI compete for power, resources or even love with uneasy or dire results, have done for AI what the film Jaws did for sharks. Although alignment between AI and humans is important given the severity of improbable worst-case scenarios, AI, and even a super-intelligence, may not be malevolent at all: it may lack the basic instincts that have driven humans to do terrible things.
Unease Over the Future of Work
Work will change dramatically, and studies warning of significant workforce disruption fuel anxiety. A 2013 Oxford study forecast that automation could threaten over 47% of US jobs within two decades. While such predictions vary in conclusions and timescales, their sheer volume amplifies uncertainty. In the case of AI, however, the likelihood of significant disruption has led large enterprises and governments to take action. The usual slow adoption and long period of FUD will, in my view, be shorter because the material economic upsides are already obvious. As I have said before, the adoption of AI and the eventual development of artificial general intelligence (AGI) may be the last invention made by humans without AI assistance. In the case of work, we are already experiencing change, and even if AI development ceased today, a wide range of sectors and roles would have been permanently changed.
History suggests societies adapt to technological change reshaping work. In 1841, over 20% of UK employment was in agriculture – today it has declined to less than 1%. In the US, 41% were employed in agriculture in 1900; today it is less than 2%. Yet mass unemployment was avoided. The industrial revolution saw textile artisans displaced by mechanised looms. Rather than systemic joblessness, people migrated from cottages to towns and cities to work in mills and factories.
Today, while certain roles will undoubtedly become unnecessary and others will be substituted, AI will initially automate specific tasks rather than entire jobs. The cost of AI ‘agents’ will begin to replace the cost of human resources. Initially, automation of the mundane will allow workers to focus on responsibilities requiring human strengths like creativity. We will all have to learn to add more value or become unemployable.
Educational programmes, provided by governments or employers, can reskill displaced workers or upskill them to add more value. New social safety nets will be necessary to support those left behind and universal basic income (UBI) is already on the UK and US political agendas. With proper leadership and acceptance of change, employers and economies can unlock AI’s benefits while ensuring shared prosperity.
Concerns Over Data Privacy
As AI systems grow more capable, their reliance on data increases. Feeding them people’s personal information raises concerns about how it will be used. OpenAI’s ChatGPT was trained on most of the internet, and tech companies are working hard to lock down their content using technology or lawyers.
China’s Social Credit System exemplifies these fears. By collecting extensive data on citizens’ lives, enabling omnipresent monitoring, it hints at how unrestrained AI could accelerate the rise of digital authoritarianism.
Yet the same technologies used for state surveillance also enable breakthroughs in sectors like healthcare, where large and powerful data sets exist. Google DeepMind’s AI techniques using NHS patient records advanced the detection of eye disease and kidney injury. The issue is not the technology itself, but how it is applied.
International debates are coalescing around AI governance to balance innovation and ethical use. Organisations like the EU’s High-Level Expert Group have outlined frameworks surrounding transparency, accountability and consent for data usage that can guide responsible development. But vigilance around civil liberties will remain vital as capabilities advance. In any event, AI does not exist in a bubble outside existing laws and regulations, such as GDPR.
Apprehension About Existential Risks
Lurking beneath surface-level concerns lies deeper unease about AI potentially threatening humanity’s future. Dystopian science fiction depictions of humanoid robots trampling over the skulls of easily defeated human soldiers reinforce fears of sentient robots turning against their creators. Current AI lacks fundamental attributes that define human consciousness, and leading experts believe that AGI, which could rival human cognition, remains decades away, if it is feasible at all. Nor does AGI mean sentience, which may never occur.
Rather than framing AI as an independent entity, the technology is better understood as a tool, like your toaster, just far more powerful. Used judiciously, its capabilities can empower society rather than endanger it. But reckless implementation without proper regulation could lead to unintended consequences warranting caution, especially in the hands of bad actors.
In particular, care must be taken to avoid embedding biases that marginalise groups based on race, gender and socioeconomic status. Microsoft’s Tay chatbot exemplified this in 2016 by rapidly adopting offensive behaviours from toxic inputs (i.e., ‘the internet’). Keeping AI aligned with ethics will only grow more vital as it becomes deeply integrated into social systems.
Beyond Binary Narratives – Recognising Complexity
AI debates often simplify progress as binary - utopia or dystopia. But innovation rarely follows such simplistic narratives. AI enables transformative applications alongside potential for misuse. Whether maximising benefits while minimising harms, or speeding progress while respecting people’s apprehensions, nuance and balance are key.
Realising AI’s upside requires acknowledging that it won’t be a panacea. All tools bring trade-offs, but with wise governance, AI can positively impact core issues like climate change, healthcare, education and beyond. Google DeepMind’s AlphaFold, for example, predicted the structure of nearly every protein so far catalogued by science, solving one of the biggest challenges in biology in only 18 months.
Navigating the tides of change also demands empathy. Progress has always disrupted - but when managed with care and foresight, societies can fluidly adapt. Leaders must listen to those uneasy about AI’s implications and work constructively to address concerns.
Beyond The FUD - The Path Forward
As with past innovations, AI brings immense opportunities alongside complex challenges. By recognising FUD as a natural human response to change, we can thoughtfully guide AI’s development and deployment for the common good.
There will always be those who resist change for many reasons, although most relate to the protection of investment in legacy systems or the desire to avoid the uncertainty that change brings.
The future belongs to the bold, and those that embrace AI and get out in front first are going to stay there, no matter how much FUD is generated.
Thanks for reading.