Thinking about using ChatGPT as a student? Read these tips
Updated Sept. 12, 2024
As the new school year kicks off, there’s one thing that’s top of mind for a lot of students and academics: generative AI, like ChatGPT.
AI is everywhere, and you’re probably already engaging with it in ways you don’t even realize. Gmail finishing your sentences for you uses AI. Curated playlists on your streaming services use AI. AI is even part of your Google search results.
But generative AI, like ChatGPT, is raising new questions in academia.
As careers and industries increasingly employ AI-based technology, it’ll be more and more important for students to understand the opportunities, limitations and ethical considerations of these tools. By providing students with AI literacy and responsible usage skills, TMU can equip them to navigate the AI landscape that will shape their lives.
To set students off on the right foot this fall semester, we spoke with Allyson Miller, director of the Academic Integrity Office and AI specialist, on how to use generative AI responsibly throughout the school year.
“Using generative AI responsibly ensures that students avoid falling into the trap of over-reliance,” she says. “ChatGPT can be used in many creative ways, but it shouldn't be a substitute for critical thinking. By responsibly leveraging these tools, however, students can develop their own analytical skills.”
See below for Miller’s top tips, and make sure you’re unleashing AI's potential in an ethical way.
Review TMU's Policy 60 - Academic Integrity
Since the release of big generative AI platforms like ChatGPT, the university has undertaken a review of Policy 60 - Academic Integrity, and made some updates to reflect the availability of generative AI.
The policy now includes changes to the category of "misrepresentation of personal performance." Students should be aware that the policy lists "submitting work created in whole or in part by artificial intelligence tools unless expressly permitted by the Faculty/Contract Lecturer" and "submitting work that does not reasonably demonstrate your own knowledge, understanding and performance" as examples of misrepresentation of personal performance, which could result in a finding of academic misconduct.
Ask EVERY instructor what the rules are for each class
This doesn’t just apply to ChatGPT, says Miller. Individual professors will have their own thoughts on whether Grammarly or translators are okay to use, and the limits to what they’ll accept with ChatGPT. Some won’t mind if you use it to create an outline or come up with ideas.
“The answer you get will probably be determined by the learning outcomes of the course,” says Miller. “For example, if the professor is trying to assess your ability to read complex articles and explain them in your own words, then using AI to paraphrase information from the article would undermine the professor’s ability to assess your skills.”
Fact-check all information you get from AI – be sure to use reputable sources.
AI hallucinates: if it doesn’t know the answer, it’s been known to make one up! And even when it doesn’t hallucinate, it may still perpetuate biases inherent in the dataset, the algorithms and/or the “behavioural” rules established by the administrators of the AI. Ethical use of AI means verifying not only the correctness of its output but also its completeness.
Going through this verification process ensures you’re becoming a critical consumer of technology. It's important to analyze and understand the biases present in AI systems, so question everything!
When you use it, cite it.
Best practice is always transparency. Professors will need to know where AI’s contribution ends and yours begins, and you can show that by following proper citation practices. Also remember that you’re responsible for anything you submit, so if you cite something incorrectly or misrepresent what’s in a source, that might be considered academic misconduct by your professors.
Just remember: Submitting AI-generated content without proper attribution violates academic integrity. Properly citing your usage ensures that you’re using AI tools ethically and in conjunction with your own expertise, rather than as a shortcut for original thinking.
Make it a study buddy, but not your only study buddy.
AI can be a powerful study buddy. You can put course notes into AI and ask it to generate short answer questions based on those notes, and then ask it to evaluate your answer for correctness and completeness.
You can also ask it to help you develop a study schedule at exam time, or to suggest ways to organize all your information. “But don’t isolate yourself by only studying with AI,” says Miller. “The richest learning comes from conversations with your professors and fellow students!”
The key is a balanced approach: seek study help and tools beyond the simple click of a button.
Use it to learn new skills!
Are you a humanities person who wants to build a web app? Or a tech nerd who wants to write a poem rich with imagery? Ask AI to teach you how.
Miller cautions that over-reliance on AI can decrease skill development if you ask AI to do the work for you. “But it will increase skill development if you ask it to teach you how to do it yourself.”
—
Coming up this fall, the Academic Integrity Office (AIO) will launch an AI literacy game and host workshops on the ethical use of AI for students. Through the game and workshops, students will learn to critically evaluate the output of AI and to make informed decisions about appropriate use. More information about these workshops will be available on the AIO website.