
Regulating AI to protect our democracy

Lecturer and former senior policy advisor Daniel Tsai on the urgent need to develop effective policy frameworks
By: Tania Ulrich
July 17, 2023
Lines of multi-coloured computer code

The rapid advancement of AI technologies is outpacing the public policy measures that could guard against their misuse and protect Canadians. Lecturer and legal expert Daniel Tsai explains how Canada can establish the necessary guardrails without stifling innovation. Photo credit: iStock

The possibilities of AI are both exciting and unsettling. Harnessing the positive potential of new AI technologies will first require robust policy frameworks, says Daniel Tsai, an expert in artificial intelligence and technology law.

In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022, anticipating the significant impact that artificial intelligence (AI) systems may have on Canadians and the country’s economy. The act takes a three-pronged approach: building on existing Canadian consumer protection and human rights law; ensuring that policy and enforcement stay relevant as the technology evolves; and prohibiting reckless and malicious uses of AI.

But the regulatory framework and proposed legislation are still too narrow in mandate and will arrive too little, too late, warns Tsai, who is a lecturer at Ted Rogers School of Management with expertise in artificial intelligence and technology law. AIDA is not expected to come into effect until at least 2025.

Tsai says AIDA, as it stands now, is limited to privacy protections and the use of private data rather than artificial intelligence itself, and argues it should address the need for transparency, accountability and the protection of fundamental human rights.

“Government of Canada officials haven’t called on any of the leaders of big tech, like OpenAI CEO Sam Altman, or the leaders of Meta, Google and Microsoft – companies all deeply immersed in AI – to talk about how they’re going to protect consumers and citizens, jobs and livelihoods, as well as democratic institutions, from the implications of AI,” says Tsai.

Tsai is a former senior Canadian government policy advisor with experience drafting new Canadian laws and advising senior government officials on policy changes. Given the speed and sophistication with which AI is evolving and the exponential rate at which new uses of the technology are being applied, he says it is critical that laws and policies keep pace.

“Developing AI with a theory of mind raises the possibility of AI developing general intelligence that can surpass human capabilities.”

Graphic computer illustration of a human brain

This image represents how machine learning is inspired by neuroscience and the human brain. It was created by Novoto Studio as part of the Visualising AI project launched by Google DeepMind. Photo by Google DeepMind on Unsplash

Increasingly, AI is being developed with a theory of mind, which means it can recognize human emotions and thought processes, says Tsai. This advancement in AI capabilities uses neural networks modeled on the human brain and allows the technology to make inferences, predictions and even reason. However, this also increases the possibility of persuasion and manipulation.

“People are susceptible to emotional manipulation, such as online love scams and phishing attempts, which exploit people's emotions and trust,” says Tsai. “As AI systems become more sophisticated, they may be able to fake empathy and manipulate emotions to a dangerous extent, potentially influencing individuals to commit actions that are unethical, harmful and illegal.”

“AIDA was contemplated and designed in an era before ChatGPT. This indicates that this bill is already out of date.”

The destabilizing threat of AI

The proposed AIDA policy framework is inadequate because it doesn’t address the destabilizing risks AI poses to our democracy, says Tsai. Even with conventional social media, the harmful effects of the proliferation of disinformation – due largely to AI algorithms programmed to attract the most views and interactions – can be stark, he says. A powerful example is the U.S. Capitol attack on January 6, 2021. 

“Simple AI found that content that was false but inflammatory had high engagement value – content claiming that Donald Trump won the election, that the election was stolen and that there was a mass conspiracy preventing his re-election,” says Tsai. “This is called heating: when you have algorithms that take content that’s controversial but untrue and amplify it by boosting it to others.”
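To make the mechanism Tsai describes concrete, here is a minimal, hypothetical sketch of engagement-based ranking. The post names, signals and weights below are illustrative assumptions, not drawn from the article or from any real platform; the point is simply that when a feed is ordered by predicted engagement alone, nothing in the ranking penalizes content for being false.

```python
# Minimal illustrative sketch (hypothetical, not any real platform's code):
# an engagement-based ranker boosts whatever content is predicted to draw
# the most interactions, regardless of whether it is true.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # hypothetical engagement signals
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    """Score a post purely on expected interactions (illustrative weights)."""
    return (post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by engagement alone - truthfulness never enters the ranking."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, accurate election recap", 10, 2, 3),
    Post("Inflammatory false claim about a stolen election", 50, 40, 30),
])
print([p.text for p in feed])  # the false but inflammatory post rises to the top
```

In a production system the engagement signals would come from trained models and the weights would be tuned, but the structural issue Tsai points to is the same: accuracy is not part of the objective being optimized.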

Person wearing a Donald Trump mask in Times Square.

Proliferation of misinformation and disinformation by online platform algorithms can have destabilizing consequences, as was seen with the January 6, 2021 attack on the U.S. Capitol in Washington. Photo by Joshua Santos via Pexels

Tsai believes generative AI is going to have further-reaching impacts as it becomes more sophisticated, such as ‘deepfake’ computer-generated video and audio content that appears authentic.

He was a lead contributor on AI policy recommendations in the recently released McGill University Law School and Desautels Faculty of Management report (PDF). The impact paper argues that social media platforms serve as digital public spheres where political opinions are shaped, making them crucial for social, economic and political stability. As a result, the report proposes four recommendations around accountability, transparency, responsibility-by-design (the concept of minimizing or preventing potential negative societal and ethical impacts when developing new technologies) and enforcement.

The report also recommends regulating the use of AI by large online platforms in curating, amplifying and moderating divisive content, to encourage corporate social responsibility in the development of algorithms and AI systems. Finally, it recommends a government regulatory authority to oversee the use of AI systems on large online platforms, with the power to conduct technical audits that assess reliability and bias and to release publicly accessible evaluation reports.

The Black Box effect

The development of generative AI that can autonomously code and debug itself raises concerns about unexpected decisions and actions made by AI systems, says Tsai.  “This phenomenon, known as the Black Box effect, occurs when AI operates in ways that programmers cannot fully understand or control.”

And as AI gains the capacity to independently code, these impacts could snowball. For example, the use of AI and autonomous weapons by the military raises concerns about transparency and the potential for misuse. Instances of successful cyberattacks on North American electrical grids, the compromising of personal information and the hacking into banks to steal credit card details highlight the growing risks associated with the misuse of these technologies. 

“With the rapid use of AI for consumer purposes, but also potentially military purposes, and by malicious actors that want to do us harm, we have gone from the world of science fiction into the world of science fact,” says Tsai.

“The ability to distinguish between truth and reality becomes totally blurred, resulting in chaos and further damage to our democratic institutions.”

Understanding AI’s transformative impacts

For democracy to function, citizens need access to trustworthy and unbiased news to make informed decisions. AI algorithms’ tendency to favour controversial content over accurate content poses a grave threat to this, warns Tsai. “The horses are already out of the barn. Now you have to catch them.”
