THE RISE OF AI POLITICS

Jordan Leichnitz

The rollout of ChatGPT’s publicly accessible artificial intelligence (AI) model earlier this year, hailed as ground-breaking and paradigm-shattering in business, media and academia, was met with little fanfare on Parliament Hill. 

With the exception of Michelle Rempel Garner’s curious thought experiment, in which she asked the chatbot to make a case for whether or not AI should be regulated (with her prompting, it showed distinct libertarian tendencies), the new technology seems to have passed through Canadian politics with barely a blip. 

I suspect that clever staffers have already taken advantage of the technology’s powerful writing abilities to make the dull task of drafting endless briefing notes, speeches and Question Period material a bit easier. ChatGPT can do some fun party tricks, to be sure, but it would be a mistake to think AI’s impact on our broader political and democratic ecosystem will start and end with copywriting. 

At its core, generative artificial intelligence is a type of algorithm. ChatGPT is trained on incomprehensibly vast amounts of human writing, and it essentially makes predictions about what might come next – learning these patterns allows it to mimic the way a person would speak or write. AI is also able to manipulate huge data sets, making it a powerful tool for identifying patterns and completing rote tasks. It can create a lot of content – everything from articles and social media posts to images and video – instantly and very cheaply.
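For readers who want a concrete feel for what “predicting what might come next” means, here is a deliberately simplified sketch in Python. It is a toy word-counting model (nothing like the neural networks behind ChatGPT, and the example text is invented), but it captures the basic intuition of learning which continuations are most likely:

```python
# Toy illustration of next-word prediction: count which words tend to
# follow which, then guess the most common continuation.
from collections import Counter, defaultdict

# A tiny made-up "corpus"; real models train on vastly more text.
corpus = "the minister rose in the house and the minister answered the question"

# For each word, count how often every other word follows it (a bigram model).
follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often observed after `word`, if any."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))       # "minister" is the most common continuation here
print(predict_next("minister"))  # one of the observed continuations ("rose" or "answered")
```

Scaled up from counting word pairs to modelling billions of patterns across the internet, the same predictive idea produces text that reads as though a person wrote it.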

With 100 million users just two months after its launch, it’s clear that ChatGPT and accessible AI are here to stay. What could it mean for our democracy? 

Most obviously, it’s going to be harder and harder to spot disinformation. AI is giving a leg up to state-sponsored purveyors of disinformation, who will use it to produce more fake news, disseminate it faster and seed it deeper into social media networks. Increasingly, AI-generated text, images and videos will look and feel more like the real thing and appear to come from legitimate news outlets. 

AI can power fake social media accounts that are far more persuasive than today’s crude bots – ones that seem like real people, participate actively in normal online conversations, talk about regular things other than politics, and join closed communities to slip in misinformation at critical moments – increasing the risk of electoral interference and pushing conspiracy theories even further into the mainstream.

When it comes to the content ChatGPT produces, a commonly cited concern is ingrained bias. Because AI is trained on human writing – which contains racism, sexism, homophobia and all sorts of other problematic social patterns – it risks magnifying our worst societal tendencies, reflecting these inequalities back in its responses and recommendations. Applied to political work, this means that AI-generated strategies and analyses may miss critical connections, ignore certain public policy implications and neglect marginalized groups – deepening and entrenching existing inequalities. 

Turning to tactical considerations, an era of easily accessible AI will change how campaigns look and communicate. Targeted messaging will take on new levels of speed and accuracy as AI blends data from social media, voters’ lists and internal party databases to instantly produce new insights about patterns in voter behaviour.

Expect messages to be quickly optimized to match shifting public attitudes – a cycle that has historically relied on overnight polling, feedback from the doorstep and the phone bank, and the gut instinct of campaign directors. This amped-up campaigning power won’t be limited to political parties, either: fringe groups and the far right are already thinking about how to use AI to advance their agendas, and Elon Musk is reportedly looking to build a competitor to ChatGPT with no ethical guardrails at all. 

Between elections, AI has the potential to supercharge political lobbying. Sophisticated, unique AI-generated comments could easily be orchestrated to overwhelm public comment processes – in one recent experiment, researchers submitted 1,000 unique fake comments to a Medicaid reform public comment process in Idaho, making up over half of the total received, and officials accepted them as legitimate comments from citizens until the experiment ended and they were withdrawn. It doesn’t take much imagination to see what this type of astroturfing could do to the already-battered reputation of Canada’s public consultations on, say, major energy projects. 

The ability to instantly analyze vast amounts of data will also make AI a double-edged sword when it comes to legislation – it’ll help legislators quickly understand the implications of complex bills, but other actors will be equally fast to identify and exploit loopholes and errors. It could put Canada’s multi-billion dollar corporate tax avoidance problem on steroids. 

These are all important – and risky – uses of this technology, and they merit serious discussion about how it should be regulated to protect our democratic process. 

But the biggest impact of the broad-based adoption of AI in Canada may be societal. For a long time, many assumed that the jobs most likely to be affected by AI were blue-collar, working-class ones. Truck drivers, fast food workers, factory workers and others doing physical work were seen as the most at risk of replacement by robots. 

As it turns out, AI can analyze and do creative work far better than it can build things, move things or perform other kinds of unpredictable physical labour. Its fast-moving, ever-improving algorithms could quickly transform the media, upend the legal profession and revolutionize the financial sector. Analysts, designers, writers and other creatives are all watching the developing technology warily as companies consider ways to cut staff and improve the bottom line. 

In short, AI is coming for white collar workers – and for the parties that rely on their votes. 

University-educated voters have held political sway in the West since the 1980s. They’re credited with driving economic growth, shaping cultural norms, and as a group have the unenviable reputation of being “convinced of [their] own unassailable position as comprising the most advanced people the earth has ever seen”. The professional class works for income like blue-collar workers do, but it falls on the latté-sipping liberal side of the cultural divide and has generally managed to avoid the worst impacts of the recessions that have eroded living standards for working-class people since the 1990s.

Generative AI poses a serious threat to the hegemony of the professional class, and it may reshape parts of the comfortable, upper-class, small-l liberal voter coalitions that have traditionally kept the Liberal and New Democratic parties afloat. 

Parties that can offer a compelling political – not just policy – response for those whose jobs are likely to be affected by this transformation will have an advantage. On that front, the Liberal government’s recent just transition debacle is a glaring example of what not to do when talking to people whose jobs are under threat.

One thing is clear: the risks and opportunities posed by AI should be an important public policy debate, one in which Canadian politicians and political professionals have a vital stake. There’s zero chance our democracy will be untouched by developments in AI, and the sooner we recognize the wide-ranging impacts it will have, the sooner practical guardrails can be put in place to protect the integrity of our elections, legislatures and public policy. 

As for Michelle Rempel Garner, after her look into the AI abyss she uncharacteristically landed on heavy government regulation of AI as the path forward. 

As Canadians grapple with the wide-ranging impacts of AI on every part of our lives, don’t be surprised if the political implications are equally astonishing.

. . .

ABOUT THE AUTHOR:

Jordan Leichnitz is an Ottawa-based consultant with two decades of experience in progressive political strategy and campaigns at the federal, provincial and municipal levels. She spent ten years on Parliament Hill working in senior strategy positions for four Leaders of the New Democratic Party of Canada, including serving as Deputy Chief of Staff, overseeing policy development and handling issues management for the parliamentary caucus. Since 2020, Jordan has served as the Canada Program Manager for the Friedrich Ebert Stiftung, a German political foundation. Jordan holds a Master's degree in Political Science from the University of Ottawa, and lives in Ottawa with her partner and two young children.

The views and opinions expressed are those of the author and do not necessarily reflect the position of Air Quotes Media. Read more opinion contributions via QUOTES from Air Quotes Media.
