
The AI Con

  • Writer: Jim Parker
  • Jul 17
  • 5 min read

Anyone doubting that much of what is driving the Artificial Intelligence (AI) boom is self-interested hype need only look at the share prices of listed AI firms. Nvidia alone has a market cap of more than $US4 trillion - equal to all the funds in Australia's retirement savings pool, the fifth biggest in the world.


AI valuations are so outsized partly due to the ever-mounting, and often conflicting, claims being made for it. On the one hand, we are told that this is a technology that will totally transform and enhance our material world and humanity itself - and on the other that it will bring about doomsday and destroy human life on this planet as we lose control of its implications.


Both claims - the boom school and the doom school - are part of the same cycle of hype being pushed by the promoters of AI, according to the authors of this timely new book, The AI Con - How to Fight Big Tech's Hype and Create the Future We Want. The authors are experts on the subject - Dr Emily Bender is a professor of linguistics at the University of Washington, while her co-author, Dr Alex Hanna, is a former research scientist on Google's Ethical AI team. (I saw Bender present recently at the University of Technology Sydney.)



For all the media noise, AI is essentially a marketing term, Bender and Hanna argue. It is deployed when those who build or sell AI programs stand to profit from persuading us that their products can do things that in fact require human judgement and creativity. Of course, they are perfectly within their rights to make such claims. But the rest of us are equally within our rights to insist on the same degree of scrutiny, disclosure and accountability our laws demand when contemplating the deployment of any new, potentially society-upending technology.


Yes, AI is profoundly exciting for many investors, because it offers the hope of large-scale automation of processes now done by humans in decision-making, classification, recommendations, translation, text and image generation, and countless other activities.


And it's also true that many people who use something like ChatGPT for the first time can be left overawed. You issue a prompt - for example, 'write me 800 words on the political tensions and policy issues involved in the energy transition' - and, within seconds, out comes a perfectly structured, grammatically correct and (apparently) soundly reasoned 'analysis'. It strikes novices as almost supernatural. Journalists, like me, wonder whether we will ever work as writers again.
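For readers curious what sits behind that interaction, the same request can be made programmatically. The sketch below is purely illustrative: it assumes the OpenAI Python SDK, an API key in the environment and a model name chosen only as an example - none of which is discussed in the book - and any chat-style model would behave much the same.

```python
# A minimal sketch of the interaction described above, assuming the OpenAI
# Python SDK and an API key in the OPENAI_API_KEY environment variable.
# The model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("Write me 800 words on the political tensions and "
                    "policy issues involved in the energy transition."),
    }],
)

# Seconds later, back comes a fluent, well-structured 'analysis'.
print(response.choices[0].message.content)
```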


Gripped by FOMO ('fear of missing out'), CEOs everywhere are telling their executive teams to find out how such technology can be deployed in their own businesses, either as a defensive or a growth strategy. In this rush, normally rigorous processes can easily be cast aside, while internal naysayers and cautioners are dismissed as Luddites or illiterates.

"As investor interest pushes AI hype to new heights, tech boosters have been promoting AI 'solutions' in nearly every domain of human activity," the authors write. From policing to social services, healthcare, education, law, finance, human resources, transportation, energy, politics, journalism, art and entertainment - machine learning is being promoted as the answer to every known human problem. "For AI boosters, the fully automated AI future is always just about to arrive."

But Bender and Hanna are sceptical, and they have spent the past several years examining why we should exercise extreme caution before embracing this technology wholesale. For one, they write, AI is not a 'thinking' machine at all. It is not sentient. It has no judgement. It has no ethical dimension. It will not create art or solve intractable problems. It has no imagination. It only works with already known, pre-existing and often stale information, the sources of which are never specified or tested.

In essence, these are large language models: high-powered replication machines and souped-up autocomplete programs. They extract text from existing sources on the internet and reproduce it (essentially stealing someone else’s work) in a way that looks and sounds intelligent but that lacks any judgement or human dimension. Ultimately, AI is not about turning machines into humans, but about recasting humans as machines. It is as if the entire world is turning autistic - Elon's World.
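To make the 'souped-up autocomplete' point concrete, the toy sketch below asks a small, openly available language model to score the most likely next words of a sentence. It assumes the Hugging Face transformers and torch packages and the small GPT-2 model - none of which the authors use - and illustrates the underlying mechanism rather than any particular product.

```python
# Toy illustration of 'souped-up autocomplete': a causal language model just
# assigns probabilities to possible next tokens given the text so far.
# Assumes the transformers and torch packages and the small GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The biggest policy issue in the energy transition is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # scores for the next token only

# Print the five most probable continuations: pattern-matching, not judgement.
probs = next_token_logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={float(p):.3f}")
```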

"AI hype reduces the human condition to one of computability, quantificaton and rationality," Bender and Hanna write. "If we accept that, consciousness can be judged by how it manifests in phenomena that are external to the mind."

Worst of all, AI is about serving the powerful. Not only does the AI hype machine feed the profits of companies in the sector and their investors, but it helps others get rich by giving them cover to steal and launder massive amounts of personal data. It also dangles the prospect of enormous profits for those seeking to replace stable, better-paying jobs with ones that are both more precarious and less fulfilling. ("AI is not going to replace your job. But it will make your job a lot shittier.") And, of course, aside from the insatiable monetary demands of Mammon, the AI hype serves a political purpose, allowing ideologue Ayn Rand-loving libertarians to devalue the social contract by selling the fiction that real social services can be replaced by cheap automated systems.


Watching the world's opinion leaders and many decision-makers fall over themselves for the AI hype is indeed depressing, but the authors end on a hopeful note, pointing out that we do have agency and we can push back. That includes increasing information literacy and asking tough questions of the AI promoters ('What is being automated? What goes in, and what comes out? How is the system evaluated? Who benefits? What are the sources of the information? Who checks it?'). It also means using existing regulation to crack down on illegal claims by companies about what the technology can do and enforcing laws protecting workers' rights.


For all the big numbers being casually tossed around by the boosters, the greedy and the stupid, AI cannot be allowed to blind democratic societies to the governance requirements they insist on in other areas of the economy - in terms of accountability, transparency, disclosure and privacy rights, and in terms of the ethical dimension of our decisions. Even in finance, an area AI is often touted as likely to upend, automation can never replace real-time price discovery or overcome the perennial challenge of bad data in/bad data out.


Finally, the authors conclude, we should never underestimate the power of just saying 'no'. As with all technologies, particularly ones claiming to be able to completely transform our established ways of doing things, we must not surrender healthy scepticism and human judgment. Ultimately, any technology must serve humanity, not the other way around. AI can make a useful office assistant (I used it to make the image above), but a very bad boss.


At a time when it seems half the world is impossibly infatuated with the grandiose claims being made for AI, this book offers a badly needed, feet-on-the-ground perspective from two informed experts who know how the automation sausages are made.


Highly recommended.
