For many professors, the term “Academic Integrity” evokes notions of student academic offences like plagiarism or cheating. This is part of it, certainly, but doing all college work in an accountable and honest manner is a shared obligation. Faculty members have a responsibility to be honest about their own use of AI tools, to be ethical in how those tools are employed, and to be transparent in their expectations and guidance around student use.
Ethical use demands that faculty use their judgement and professional expertise to guide the initial and final stages of every academic use of AI tools. These human “first and final touches” ensure that AI is not in the driver’s seat, but rather serves as a GPS that provides efficient shortcuts to the destination.
Accordingly, faculty should uphold academic integrity in their use of AI so that it…
- Supplements and enhances creativity in teaching rather than becoming a substitute for it.
- Fosters class engagement rather than isolating students from peer and professor contact.
- Streamlines administrative tasks rather than eliminating human empathy, voice, and judgement.
- Supports assessment rather than undermining faculty expertise in assigning grades and feedback.
- Makes learning accessible to all learners rather than perpetuating biased or generic content.
- Ensures the privacy of student data rather than exposing it to software and websites that misuse it.
- Becomes part of students' practical skill set rather than replacing thinking and learning.
The following sections will detail how you can apply this in your teaching:
- AI and Your Courses: Make it clear if and how students should use these tools, including sample statements and icons.
- AI and Academic Offences: Process and policies for preventing, checking for, and dealing with problematic AI use.
Ethical Considerations
As faculty, students, and the college community continue to adopt AI tools as a normalized part of the academic environment, it is important to consider the broader ethical implications of these tools. Even with good intentions and when copyright and academic integrity are upheld in the production of useful output, GenAI technologies come with their own host of challenges that deserve thoughtful consideration.
For one, the data used to train AI models largely comes from aggregated sources whose “average” norms can embed racial, gender, cultural, and economic biases, norms that may conflict not only with broader global perspectives but also with the diverse population of Fanshawe’s community. Accepting the claims of GenAI at face value challenges our desire to value all humanity and to maintain a teaching and learning environment that is inclusive for all. Furthermore, these data sets contain vast amounts of copyrighted material, often incorporated without permission or attribution. Built on the backs of contributors whose work may never be acknowledged or compensated, these technologies rely on a problematic interpretation of intellectual property rights.
The ecological impact of GenAI also warrants our awareness. The race to develop more advanced AI models depends on faster, larger, and more extensive hardware, requiring the extraction of significant natural resources and driving up energy consumption. There is also a less visible human cost: the development of these tools often relies on the digital equivalent of sweatshop labour, where workers in less-developed regions are underpaid and endure poor working conditions to serve the technological desires of a more privileged class.
The promise of new technologies is that optimization and economies of scale will reduce these negative impacts, and that the technology itself may find new solutions to environmental, economic, and social problems that outweigh the harms it has caused. In the present, however, we have a responsibility as educators to weigh these factors carefully and strive to use GenAI in ways that align with our values around sustainability and humanity. By doing so, we can enjoy the benefits of this technology while remaining mindful of ways to mitigate its downsides.
For additional information on AI and Academic Integrity, please refer directly to Fanshawe College’s AI Framework and Appendices, as well as the Academic Integrity Policy (A136).
Quiz: Ethical Use of Artificial Intelligence in Education
Would you like to test your understanding of how AI and ethics relate to the faculty role at Fanshawe College? See if you can answer the following 16 questions correctly. The questions are based on this page and the others within this section. In what situations can you use AI? Find out here; we promise (ethically) that your score won’t be reported!