At Queen’s University, we promote the responsible and ethical use of artificial intelligence to support academic and research excellence. To ensure the safe and appropriate use of generative AI software, the university has conducted a series of security and privacy assessments through the Security Assessment Process (SAP). These evaluations help identify potential risks, protect user privacy and institutional data, and inform appropriate use guidelines.
The generative AI applications listed below have been assessed by the Information Security Office. This page provides an overview of AI tools that have been vetted for use, as well as those that are strongly discouraged due to security or ethical concerns. Before using any AI system or application, please ensure that an SAP has been completed. Explore the assessment summaries below to learn more about AI applications and their compliance with university policies and best practices.
Generative AI Approved for Use
The following applications have been reviewed, vetted, and approved for potential use at Queen’s University. Like all artificial intelligence tools, they must be used thoughtfully, in alignment with Queen’s Policies and our shared values, and in fulfillment of the duties and responsibilities of Queen’s personnel. The user of an AI tool is responsible for its outputs and their use. Please pay attention to the Acceptable Data Classification Levels, as not all tools can be used for all purposes.
Chatbots
- LibreChat (Recommended)
Purpose: LibreChat is our newly released generative AI interface, developed at Queen’s University and powered by Azure OpenAI models. Because the data remains within the Queen’s domain, it can be used by Queen’s staff and instructors to support and accelerate administrative tasks, teaching and learning, and research. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used. (A sketch of the kind of Azure-hosted model call behind this interface appears after this list.)
Security Awareness: Within your scope of practice and existing role, you can safely use Queen’s University data as defined below within this product, given the enterprise data protection standards in place.
Acceptable Data Classification Levels: General, Internal, Confidential
- Microsoft 365 Copilot (Recommended)
Purpose: An enterprise-level tool capable of conversational AI, dynamic instructor-led teaching, self-directed learning, and research, with AI-powered assistance within various Microsoft applications. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
Security Awareness: Within your scope of practice and existing role, you can safely use Queen’s University data as defined below within this product, given the enterprise data protection standards in place.
Acceptable Data Classification Levels: General, Internal, Confidential
- OpenAI ChatGPT
Purpose: Can be used for conversational AI, dynamic instructor-led teaching, self-directed learning, and research. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
Security Awareness: Use of confidential university data within this product is discouraged at this time.
Acceptable Data Classification Levels: General, Internal
- Google Gemini
Purpose: Multimodal LLM integrated with Google products and services. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
Security Awareness: Use of confidential university data within this product is discouraged at this time.
Acceptable Data Classification Levels: General, Internal
- Anthropic Claude
Purpose: Advanced AI language model focused on safe and reliable conversational AI. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
Security Awareness: Use of confidential university data within this product is discouraged at this time (a simple pre-submission screening sketch appears after this list).
Acceptable Data Classification Levels: General, Internal
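For readers curious about what "the data remains within Queen’s domain" means in practice for LibreChat, the sketch below shows the general shape of a chat completion call against an institution-hosted Azure OpenAI resource, which is the kind of backend the interface is described as using. This is a minimal illustration under stated assumptions: the endpoint, key, and deployment names are hypothetical placeholders, not actual Queen’s or LibreChat configuration, and end users of LibreChat interact through the web interface rather than writing code like this.

```python
# Minimal sketch of a chat completion against an Azure-hosted OpenAI deployment.
# Assumptions: the "openai" Python package (>= 1.0) is installed; the endpoint,
# key, and deployment name below are hypothetical placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR-AZURE-OPENAI-KEY",  # credential for the institution's own Azure resource
    api_version="2024-02-01",
    azure_endpoint="https://example-tenant.openai.azure.com",  # hypothetical endpoint
)

# Requests go to the institution's own Azure resource rather than a public
# consumer service, which is what keeps prompts and outputs inside the tenant.
response = client.chat.completions.create(
    model="gpt-4o-deployment",  # hypothetical deployment name chosen by the institution
    messages=[
        {"role": "system", "content": "You are a drafting assistant for administrative staff."},
        {"role": "user", "content": "Draft a short agenda for a 30-minute team check-in."},
    ],
)

print(response.choices[0].message.content)
```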
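Several of the chatbots above discourage use of confidential university data. A lightweight habit that supports this guidance is screening text for obvious identifiers before pasting it into a public chatbot. The sketch below is a minimal illustration only: the regular expressions are invented examples (including a hypothetical eight-digit ID format) and do not implement Queen’s data classification rules; a clean result never replaces manual review.

```python
# Minimal sketch of a pre-submission screen for text destined for a public
# chatbot. The patterns below are illustrative assumptions only; they do not
# implement Queen's data classification rules and will not catch everything.
import re

FLAG_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "student/employee ID": re.compile(r"\b\d{8}\b"),  # hypothetical 8-digit ID format
}

def screen_prompt(text: str) -> list[str]:
    """Return a list of reasons the text may contain confidential data."""
    return [label for label, pattern in FLAG_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this note for jsmith@example.com, student ID 20241234."
issues = screen_prompt(prompt)
if issues:
    print("Review before sending; possible confidential data:", ", ".join(issues))
else:
    print("No obvious identifiers found (manual review still advised).")
```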
Common AI-Powered Apps
- Otter.ai
Purpose: AI-powered meeting transcription and note-taking. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
Security Awareness: Be mindful of recording consent laws.
Acceptable Data Classification Levels: General, Internal
- Genio (formerly Glean) for Education AI
Purpose: AI to transcribe audio recordings and generate outlines. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used (a simple accuracy check is sketched after this list).
Security Awareness: Be mindful of ethical AI usage.
Acceptable Data Classification Levels: General, Internal
- Auris AI
Purpose: AI-generated transcripts and subtitles. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
Security Awareness: Do not share personal information.
Acceptable Data Classification Levels: General
- Captions AI
Purpose: Generate and edit talking videos with AI. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
Security Awareness: Do not share personal information.
Acceptable Data Classification Levels: General
- VoiceGain AI
Purpose: AI for speech-to-text transcription and voice recognition. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
Security Awareness: Do not share personal information.
Acceptable Data Classification Levels: General
- Wordly AI
Purpose: Real-time translation and captioning services for meetings. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used. Translation errors can be insulting or offensive, so the user should review the generated content before sharing it.
Security Awareness: Be cautious with confidential conversations.
Acceptable Data Classification Levels: General
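Most of the transcription tools above carry the same caveat: the user is responsible for verifying transcript accuracy. One simple way to gauge accuracy is to hand-correct a short sample and compute the word error rate (WER) of the AI transcript against it. The sketch below assumes plain-text transcripts and uses a standard word-level edit distance; the sample strings are invented for illustration.

```python
# Minimal sketch: word error rate (WER) between an AI transcript and a
# hand-corrected reference, via word-level edit distance. Assumes plain-text
# transcripts; sample strings below are invented for illustration.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[-1][-1] / max(len(ref), 1)

reference = "the committee will meet on Thursday at two"
hypothesis = "the committee will meet on Tuesday at two"
print(f"WER: {wer(reference, hypothesis):.0%}")  # one substitution over eight words
```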
Generative AI Applications - Not Recommended
The use of the following applications is discouraged at this time as a precautionary measure to protect the Queen’s University community, data, and systems.
- DeepSeek (Not Recommended)
DeepSeek has raised significant privacy and security concerns, along with demonstrated issues related to content and accuracy bias. Additionally, several security vulnerabilities have been discovered that are serious enough to warrant avoiding its use within the Queen’s digital environment.
- xAI Grok (Not Recommended)
Avoid using xAI Grok for confidential university discussions, as user data may be used for training purposes. Additionally, Grok has shown vulnerabilities to data exfiltration attacks, where sensitive information can be inadvertently leaked through AI interactions. Given Grok’s “anti-woke” stance and the lack of sufficient guardrails, there is also a risk of it generating biased or incorrect information.
- 01.AI Yi (Not Recommended)
Robust data privacy and security measures would be required to safeguard user information and ensure compliance with relevant regulations when using 01.AI’s Yi. Additionally, compatibility issues with the current Yi 34B infrastructure have been identified, and specialized expertise would be needed to fine-tune the model effectively.