YSGPS Guidance on the Use of Generative Artificial Intelligence (GAI) in Graduate Studies
This guidance addresses, at a high level and within the context of relevant TMU policies, common questions and concerns about new generative artificial intelligence (GAI) technology in graduate student scholarship, research and creative activities. It is intended to support and inform graduate students at all levels, supervisors, members of supervisory committees, graduate program directors and faculty members involved in any aspect of graduate-level education.
New GAI technology includes any tools that apply predictive technology to, for example, generate new text, imagery, audio, or other types of synthetic data. These tools and technologies have recently become much simpler to access and use, and they can produce high-quality results in a very short time.
Much of the discussion on using new GAI technology in postsecondary education is understandably focused on teaching, learning and evaluation in the context of classroom instruction and course-based writing. At the undergraduate level, these discussions are deeply important, as are considerations about how new GAI technology will effect change in how we teach, learn and evaluate work in curricula across all disciplines. TMU has produced a Community Update and Guidance on the Use of Generative Artificial Intelligence in Learning and Teaching at TMU and has ChatGPT/Generative AI resources available. However, at the graduate level, using new GAI technology has unique additional implications in the important areas of graduate-level writing, scholarship, research and creative activities. A number of universities across Canada have policies, guidance and statements on the use of new GAI technology, some specific to graduate studies.
While the potential uses, opportunities and challenges of new GAI technology represent a rapidly evolving area, it is possible to provide consistent guidance and principled approaches to using new GAI technology in all aspects of graduate research, scholarship and creative activities. Its use in graduate student research, scholarship and creative activities raises both opportunities and concerns, which are discussed below.
Disciplinary norms for the use of GAI in research are expected to evolve. In the meantime, graduate programs and supervisors are expected to provide clear guidance to their students concerning the acceptable extent of engagement with GAI in the context of writing and scholarly, research and creative activities. This guidance is crucial in helping students navigate the ethical and academic considerations surrounding the use of AI in their academic work. The information in the sections below will guide programs on what should be considered when developing discipline-specific guidelines or policies for students and supervisors on using GAI technology in their research. In addition, program faculty may find the resources provided below and the statements from other Canadian universities helpful.
Students should be informed about which methods are acceptable and which are not within their respective programs. Additionally, it is advisable that supervisors, if unsure about the suitability of a specific GAI application in research, seek clarification from program directors or Chair/Directors to ensure compliance with program guidelines.
While some graduate programs and individual supervisors may encourage the integration of GAI technology into research methods, they might simultaneously impose limitations on its application in other areas, such as writing or editing papers and theses.
Graduate students considering GAI in any aspect of their graduate studies should understand the benefits and risks involved; some of the potential risks are highlighted below. They must actively seek and duly document approval from their academic supervisors and programs when considering incorporating GAI in their scholarly, research and creative (SRC) activities and writing processes. This not only ensures academic integrity but also promotes transparency in AI usage. Finally, full transparency on the use of GAI technology in the work they publish (including MRP reports, theses and dissertations) is required.
Students must also bear full responsibility for any AI-generated content that they choose to include in their thesis or major research paper (MRP). Additionally, it is important to note that the evaluation of their final oral examination is not based solely on the written document submitted; their performance during the oral examination is equally significant. During this examination, students must explain and defend their use of GAI and the content within their thesis or MRP.
Moreover, students must demonstrate mastery of all program-level learning outcomes, which typically involve effectively communicating ideas, issues, and conclusions. These learning outcomes are designed to equip graduates with the skills needed for successful employment and clear communication. Overreliance on GAI technology can hinder the mastery of some learning outcomes.
In all cases, the use of new GAI technology by graduate students must be done with full transparency while upholding the fundamental values of academic integrity: honesty, trust, fairness, respect, responsibility and courage (Policy 60: Academic Integrity). Full transparency in using new GAI technology in graduate student research involves shared responsibilities between graduate students and the supervisors, committee members and others who access or review graduate student research and scholarship. These responsibilities include open and explicit discussion; unambiguous approval and agreement, in advance, on the specific applications of new GAI technology; and the integration of clear, detailed descriptions of the use of new GAI technology in graduate student research, creative activities, theses and any other scholarly writing or activities. They also include appropriate acknowledgement and citation of the use of new GAI technology in research, creative processes or writing, aligned with disciplinary or program norms. Further, programs or disciplines may have specific local norms or guidelines for using new GAI technology in any aspect of scholarship, research or creative activities, and graduate students and supervisors should be aware of any relevant discipline- or program-specific additional guidance or requirements.
Using and describing the use of GAI technology in published work
If a graduate program permits new GAI technology in research, the program should provide guidance on how students describe and reference their use of it. Considerations include whether excerpts of prompts and responses should be provided, or whether the full text of the student's interactions with the GAI tool should be included in an appendix.
While many citation style guidelines recommend treating new GAI technology as a source in the same way as personal communication, some style guides are starting to include specific information on how to cite GAI technology. For example, see the American Psychological Association Style Blog and the MLA Guidance on Citing Generative AI.
Most major journals and scholarly publishers now have policies on the use of GAI in publication. These policies vary widely, and researchers must ensure they adhere to the specific policies of the preprint server, journal or publisher to which they are submitting. For example, some publishers allow GAI in the research process, with appropriate descriptions, references and supplementary material showing the interaction with the AI tool, but do not permit AI-generated text. Others allow the inclusion of AI-generated text but not images.
The emerging consensus is that GAI technology does not meet the criteria for authorship of scholarly works because these tools cannot take responsibility or be held accountable for submitted work. These issues are discussed in more detail in the statements on authorship from the Committee on Publication Ethics and the World Association of Medical Editors, for example.
Graduate programs, supervisors, and students must be familiar with and adhere to the requirements in their field regarding authorship and use of AI in works submitted to preprint servers or for publication.
Lastly, the principles governing the use of GAI technology for text production or editing also extend to creating or modifying figures, images, graphs, sound files, videos, or other audio-visual content. It is worth noting that specific publication policies, such as those found in Nature's editorial policy regarding AI-generated images, may impose stricter criteria or even prohibit the use of AI-generated content in certain contexts.
Current TMU policies and guidelines related to the use of GAI technology
Graduate students who include AI-generated content in their own academic writing risk incorporating plagiarized material or someone else's intellectual property. Since students are responsible for the content of their academic work and their scholarly, research and creative activities, including unapproved or unauthorized AI-generated content may violate TMU policies such as Senate Policy 60: Academic Integrity, Senate Policy 118: SRC Integrity, or other University policies. More information on unauthorized or unapproved use of new GAI technology and the potential links to academic misconduct can be found on the Academic Integrity Office Artificial Intelligence FAQs website.
Use of GAI detection tools
Many tools are emerging to detect the use of ChatGPT and similar systems. However, controlling the use of AI writing through surveillance or detection technology is not recommended: AI models will continue to improve and, if asked, can help users evade the very features detectors rely on to identify AI-generated text.
It is worth noting that the text-matching software used at the University, Turnitin, has a new AI detection tool, but this is currently not available at TMU. Its usefulness for enforcing our academic integrity policies remains to be determined.
While there are opportunities for authorized uses of new GAI technology in graduate studies, a critical awareness of its limitations and potential biases is essential for ensuring the integrity of academic research. Graduate students and faculty members should be aware of potential privacy issues, inaccuracies and biases, and concerns about the novelty of work that uses this technology.
Concerns about privacy and confidentiality
Users of new GAI technology should be aware of potential privacy issues, inaccuracies and biases. These tools may also not produce the original or novel content needed for research. Privacy concerns have been raised about the data processing undertaken to train new GAI tools and about the (mis)information that such tools provide about individuals or groups.
For graduate student researchers working with certain kinds of data, using third-party GAI tools to process the data may carry additional privacy and security risks. For example, students working with data from human research participants must not submit to third-party GAI tools any personal or identifying participant information, nor any information that could be used to re-identify an individual or group of participants, as these data may then become available to others, constituting a major breach of research participant privacy. Similarly, students working with other types of confidential information, such as information disclosed as part of an industry partnership, must not submit these data to third-party GAI tools, as this could breach non-disclosure terms in an agreement.
Students wishing to use new GAI technology for processing such data must have documented appropriate permissions, such as explicit approval from a Research Ethics Board or industry partner.
Concerns about bias, accuracy and novelty
New GAI technology may produce incorrect or biased content. GAI tools can reproduce existing biases in the content they are trained on, include outdated information, and present untrue statements as facts. Students remain responsible for the content of their SRC output, no matter what sources are used.
New GAI technologies are also predictive by design: they may not generate the kind of novel content expected of graduate students in certain programs, nor arrange existing knowledge in a way that reveals the gap addressed by the novel contribution underlying a graduate student thesis.
Cultivating disciplinary scholarly writing practices is a fundamental aspect of graduate education. The use of GAI technology, while potentially offering assistance in writing tasks, may hinder the development of essential writing skills that heavily rely on practice. Over-reliance on AI to alleviate writing burdens may, in the long run, undermine the development of invaluable writing skills, potentially impacting the academic growth of graduate students.
Potential copyright and intellectual property infringement
The law regarding intellectual property and copyright in the context of GAI is rapidly evolving, and the full implications are not yet clear. Researchers, including graduate students, must exercise caution in using GAI technology because some uses may infringe on copyright or other intellectual property protections. Similarly, providing data to an AI tool may complicate future attempts to enforce intellectual property protections. GAI may also produce content that plagiarizes others' work or fails to cite sources or provide appropriate attribution.
It is important to note that regulations and data laws differ in different jurisdictions and that liability and ownership of input or generated output data may not always be clear as the legal system works to respond to the changing landscape of generative AI use.
There are additional TMU resources that can be used to learn more about new GAI technology, including the Community Update and Guidance on the Use of Generative Artificial Intelligence in Learning and Teaching at TMU and the ChatGPT/Generative AI resources, both developed by the Centre for Excellence in Learning and Teaching (CELT) at TMU.
Beyond TMU resources, many external resources and guidance are available on using new GAI technology specific to postsecondary education and graduate studies in Canada. Other examples of helpful resources include the University of Massachusetts Resource Guide on Artificial Intelligence, the University of Washington guides to AI use in healthcare studies and across other disciplines, and the McGill University Library Guide to Artificial Intelligence, which includes guidance on using AI tools in research, on developing AI literacy and on citing AI.
Another important resource is the HESA AI Observatory, which tracks policies and guidelines for using GAI in Canadian and international universities.
This guidance was prepared using several resources, with a focus on the Guidance on the Appropriate Use of Generative Artificial Intelligence in Graduate Theses produced by the School of Graduate Studies at the University of Toronto, the Generative AI FAQ published by the University of Waterloo, and the University of Alberta Generative AI and Graduate Education, Supervision and Mentoring.
Revised 12.07.2023