TMU professors explore AI’s evolving role in education
You don’t have to look far to hear concerns around how artificial intelligence (AI), including generative AI like ChatGPT, stands to disrupt academia. But some professors at Toronto Metropolitan University (TMU) are encouraging their students to engage critically with the tools to better prepare them for an AI-augmented future.
And some classes are not just exploring AI’s strengths; they’re also discovering its limitations.
Julián Zapata, a languages, literatures and cultures professor, designed an assignment that let students explore AI’s role in translation, especially in under-resourced languages and dialects. His students discovered that AI struggled to accurately translate an uncommon Cantonese dialect during an emergency room simulation. The hands-on experience showed students, including those especially worried about AI’s potential, that the technology is not going to replace human intelligence in many fields.
“The outcome was even impressive for myself,” Zapata says. “We may take for granted that every translation that starts from English is good, but when you’re translating from, say, Russian into a minority language spoken in the Philippines, that’s when things can go wrong.”
In another course, Zapata had students act as a mock translation agency. This exercise helped them understand the nuances of the industry and the importance of human oversight of AI. “We explored the risks and pitfalls of relying solely on AI in translation, and assessed the accuracy, fluency and nuance of translations done by various AI-based tools using established machine translation evaluation methods,” he says.
AI for group work and discussion facilitation
In the Department of Geography and Environmental Studies, professor Michal Bardecki is using AI to tackle the logistical complexities of large group projects.
A class assignment for first-year students this year included assessing the health of urban street trees. “I gave them the background on how to measure street tree health and the various indices they can use, to give them some idea of how they might get started,” he says.
And he’s noticed, especially post-pandemic, that his first-year students often face overwhelming initial hurdles when it comes to collaboration. “They have a hard time just getting going, even deciding on how to fairly break up the workload,” Bardecki says. He encouraged students to use ChatGPT to get started on the work, and found that those who did came to understand the project’s scope and execution more quickly.
“I’m really demanding this year that students start with the premise of, ‘My professor gave us this assignment for a group of five; what do we do?’ and then explore the questions that they have. AI can make students more efficient and enhance their imagination,” he says.
Bardecki has since flipped his classroom, shifting from traditional lectures to engagement-focused class time where students spend one or two hours of the three-hour block talking to one another and problem-solving.
“I’m also able to use ChatGPT myself to do things in the classroom that I previously didn’t attempt, like designing big role-playing exercises,” he says. Approaching his classroom this way not only accelerates preparatory work but also enriches his students’ learning experience, freeing up more time for ethical discussions and complex problem-solving.
Critical approaches to AI
When the Lincoln Alexander School of Law launched in 2020, one of the areas of focus for the school was the use of technology in law. And law professor Jake Effoduh delves into the complex interactions of AI and legal frameworks in his course, Critical Approaches to Data, Algorithms, and Science in the Law.
“I’m not one of those professors who say, ‘Don’t use AI.’ I say, ‘Use AI, because to critically engage with the technology, we need to know how it functions, but we’re going to use it ethically,’” Effoduh says. His students explore various generative AI tools for legal research, analysis and case preparation, scrutinizing their ethical implications and potential impact on privacy, labour and environmental resources.
In his Tech Law and Society course in the winter term, he has students propose AI interventions to improve access to justice in Canada, tackling issues like Toronto’s housing crisis with AI-driven solutions. Initiatives his students have suggested include simplifying legal information for the public and creating engaging platforms for young people to learn about justice issues.
Effoduh himself uses AI to transcribe and summarize his three-hour lectures, making them more accessible, particularly for students who struggle with lengthy recordings. The tool he uses, Firefly, also provides insights into speaking dynamics, from talking time to student engagement, enriching the teaching experience for both Effoduh and his students.
In both classes, Effoduh encourages students to use AI tools as a “thinking partner,” as he puts it, for ideation and brainstorming, while emphasizing that “generative AI outputs are not representative of high-quality legal reasoning or exemplary legal writing.” He argues that students who will become leaders in law need to build knowledge of the ethical use of AI; from there, they can decide how to use the technology more effectively and responsibly.
“There’s a lot of hype around what AI can and can’t do,” Effoduh says. “I believe the best way for students to understand that AI tools won’t replace them is to explore the true scope of its capabilities.”
Instructors who’d like to learn more about AI in higher education are encouraged to review the Centre for Excellence in Learning & Teaching’s new Teaching Resources page, join our community of practice and explore the FAQ page on the Academic Integrity Office website.