Navigating the Ethical Horizon: Insights from our AI Ethics Model UN Debate on Automated Online Proctoring
In the ever-evolving landscape of artificial intelligence, where innovation meets ethical considerations, our research group recently hosted a thought-provoking event series—the AI Ethics Debates. This unique debate series brought together students from our research community to deliberate on the various ethical dimensions of artificial intelligence, mirroring the diplomatic spirit of a Model UN conference. Against the backdrop of emerging technologies and their impact on society, the debate provided a platform for these aspiring researchers to channel their insights, concerns, and recommendations. In this post we highlight the responses of participating teams.
Scenario: Universities should use online proctoring during exams.
Even before the pandemic, some universities used AI-based online proctoring technology during exams. The COVID-19 pandemic, however, accelerated adoption of the technology as education moved online. The market for such services has since expanded and is projected to keep growing: according to the “Online Proctoring Services for Higher Education Market” report, the global online proctoring market for higher education was valued at US$ 445.19 million in 2022 and is expected to grow at a compound annual rate of 20.55%, reaching US$ 1,366.09 million by 2028. Companies offering the technology include Proctorio, Verificient, Mission College, ProctorU, Mercer-Mettl, ExamSoft, Respondus, Examity, ProctorExam, ProctorFree, Kryterion, Pearson Vue, Honorlock, and PSI Services. The report identifies three service categories: live proctoring, automated proctoring, and recorded proctoring. However, scholars and students have raised ethical concerns about the use of these services, citing issues of academic integrity, fairness, harm, transparency, privacy, autonomy, liberty, and trust.
Group responses:
The Corporate Team
Pratheeksha Nair, Mykelle Pacquing, Maushumi Bhattacharjee
As a team, our position is that online proctoring can improve the overall quality of proctoring while retaining the autonomy of teachers and course instructors. From a market perspective, there is a need to responsibly develop and improve such online proctoring systems, given that online and remote education is increasingly popular. The technology also has the potential to reduce cheating in exams and to improve the efficiency of proctoring by reducing the load on professors, especially in large online classes. Additionally, the video and audio recordings captured during proctoring serve as supporting evidence for any subsequent review.
From a technical perspective, online proctoring suits students who prefer remote learning and teachers who aim to uphold academic integrity in online exams. AI proctoring offers a range of configurable features, allowing teachers to balance AI involvement against ethical concerns. It is not intended to replace in-person proctoring but rather to serve remote classes and students who value the accessibility of online exams. From a legal standpoint, we strive to ensure that such automated proctoring solutions are supported by a comprehensive legal contract that addresses and safeguards the interests of all involved parties, particularly user privacy. We advocate for GDPR compliance, emphasizing efficient methods for obtaining user consent and strictly limiting the use of collected data to purposes explicitly communicated to users beforehand. We also promote adherence to the guidelines set forth by the Office of the Privacy Commissioner of Canada for online and AI-powered proctoring services, to ensure alignment with the latest standards and regulations.
The Academic Team
Nicol Maxime, Jun Meng, Talha Suboor
Our team's stance in the debate, which focused on AI-based online proctoring in universities, was grounded in strong opposition to the hasty adoption of such systems without comprehensive consideration of their far-reaching impacts. We highlighted significant privacy concerns, underscoring the invasive nature of AI proctoring, which involves extensive data collection including video, audio, and analysis of personal behavior. This not only infringes on students' privacy rights but also risks capturing sensitive personal information, necessitating stringent data security and explicit consent protocols. We drew parallels to civil liberties, pointing out that while citizens are generally not required to identify themselves unless suspected of a crime, students subjected to AI proctoring are not only forced to identify themselves but must also endure continuous monitoring and analysis, an alarming precedent that could lead to broader societal surveillance.
Our arguments extended to the issues of accessibility, discrimination, fairness, and bias in AI systems. We criticized the disproportionate impact on socio-economically disadvantaged students and those with disabilities, and highlighted the high false positive rates and inherent biases against certain demographics, questioning the systems' fairness and accuracy. The psychological impact of such surveillance on students, potentially leading to stress and anxiety, was also a major concern. We emphasized that this approach undermines student autonomy and trust, transforming the educational environment into one of suspicion. The validity of AI proctoring in accurately identifying cheating was also contested, as these systems often lack the necessary contextual understanding, leading to potential misjudgments.
A significant part of our debate delved into the broader ethical and societal implications of such technologies. We questioned the potential for AI proctoring systems to set a dangerous precedent for wider societal surveillance, drawing a disturbing parallel to a hypothetical scenario where police departments might adopt similar systems for citizen monitoring. Such a possibility raises grave concerns about civil liberties and the misuse of surveillance technology, highlighting the need for cautious and ethical implementation of AI systems. We advocated for exploring alternative, less invasive assessment methods that prioritize student welfare and educational integrity, and urged for legal frameworks to ensure privacy and opt-out options. In summary, our team strongly advocated for a more ethical and considerate approach to academic integrity, one that weighs the far-reaching consequences of AI proctoring on individual rights, student well-being, and the foundational principles of a free society.