Open letter to the Queen’s community on generative AI and teaching
Message from the Vice-Provost (Teaching and Learning)
January 2026
Colleagues,
Over the past year, many instructors across the University have been grappling with how generative AI is reshaping teaching and assessment. I want to acknowledge those experiences directly. Questions around academic integrity, assessment design, and accessibility are increasingly present in our classrooms, alongside something more human: the uncertainty and added labour many instructors are experiencing as they try to respond thoughtfully in a fast-moving landscape.
Through ongoing work across the University, Queen’s has taken a deliberate stance in this space. Our guidance asks instructors to determine whether and how generative AI may be used in their courses, and to communicate those expectations clearly to students. This is a principled choice. It recognises disciplinary diversity and respects academic freedom.
Some colleagues have raised the possibility of a single, overarching policy. At this juncture, I do not see value in creating a stand-alone AI policy, and that is a deliberate choice. Generative AI raises questions that intersect with our existing academic policies and procedures rather than sitting neatly within one document, and this reflects my belief that there is no one-size-fits-all approach to the use or non-use of generative AI. Instead, we must meet this moment with discussion, reflection, and principled re-evaluation of how we support teaching in the pursuit of knowledge.
At the same time, this does not mean instructors are on their own. What I am hearing from colleagues is not a call for rigid rules but for greater clarity about expectations for AI use, how colleagues are redesigning assessments, and how academic integrity processes apply in practice. To begin supporting these conversations, the Centre for Teaching and Learning has convened a Generative AI Community of Practice, where instructors can share their approaches, challenges, and insights in a collegial, discipline-inclusive forum.
More broadly, I hear a desire for greater coherence and shared understanding about how generative AI should be addressed in assessments, what constitutes appropriate use, and how instructors are supported when concerns arise. The University’s guiding principles for responsible use of AI provide a foundation for these conversations and reflect shared values around integrity, transparency, and learning-centred practice.
Our collective task as a university community is to strengthen the scaffolding around professional judgement, not to replace it. That means creating more opportunities to share practice, supporting program-level conversations, and ensuring instructors feel backed when they make principled, pedagogically grounded decisions about shifting assessment formats, setting boundaries around AI use, and navigating academic integrity processes.
We do have Senate-approved academic integrity procedures that apply when AI is used inappropriately. These provide a consistent framework for fairness and due process. The procedure exists to back instructors up when issues arise, not to dictate how they teach.
In parallel, SCADP is undertaking consultation with the community on large language models to inform Senate discussion. The Special Advisor to the Provost on AI is coordinating work in the teaching and learning domain, and the AI Nexus is also helping surface emerging priorities from across the University. This work is intended to inform governance and support practice.
Ultimately, this moment reminds us that teaching is a professional practice. Generative AI does not change that. If anything, it makes our expertise more important.
Thank you to our teaching teams for their continued commitment to thoughtful teaching and learning at Queen’s.
Sincerely,
Gavan Watson, PhD
Vice-Provost, Teaching and Learning
Past open letters
Open letter to Queen’s instructors on Credit Standing (CR) as a student-centred response in disrupted courses (March 2025)