Security Assessment for AI Applications

At Queen’s University, we promote the responsible and ethical use of artificial intelligence to support academic and research excellence. To ensure the safe and appropriate use of generative AI software, the university has conducted a series of security and privacy assessments through the Security Assessment Process (SAP). These evaluations help identify potential risks, protect user privacy and institutional data, and inform appropriate use guidelines.

The generative AI applications listed here have been carefully assessed by the Information Security Office. This page provides an overview of AI tools that have been vetted for use, as well as those that are strongly discouraged due to security or ethical concerns. Before using any AI system or application, please ensure that an SAP has been completed. Explore the assessment summaries below to learn more about AI applications and their compliance with university policies and best practices.

Generative AI Approved for Use

The following applications have been reviewed, vetted, and approved for potential use at Queen’s University. Like all artificial intelligence tools, they are to be used thoughtfully, in alignment with Queen’s Policies and our shared values, and in fulfillment of the duties and responsibilities of Queen’s personnel. The user of an AI tool is responsible for its outputs and their use. Please pay attention to the Acceptable Data Classification Levels, as not all tools can be used for all purposes.

Chatbots

  1. LibreChat Recommended -
    Purpose: LibreChat is our newly released generative AI interface, developed at Queen’s University and powered by Azure OpenAI models. Because the data remains within Queen’s domain, it can be used by Queen’s staff and instructors to support and accelerate administrative tasks, teaching and learning, and research. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
    Security Awareness: Within your scope of practice and existing role, you can safely use Queen’s University data as defined below within this product, given the enterprise data protection standards in place.
    Acceptable Data Classification Levels: General, Internal, Confidential
  2. Microsoft 365 Copilot Recommended -
    Purpose: An enterprise-level tool that provides conversational AI and AI-powered assistance within various Microsoft applications, supporting dynamic instructor-led teaching, self-directed learning, and research. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
    Security Awareness: Within your scope of practice and existing role, you can safely use Queen’s University data as defined below within this product, given the enterprise data protection standards in place.
    Acceptable Data Classification Levels: General, Internal, Confidential

  3. OpenAI ChatGPT -
    Purpose: Can be used for conversational AI, dynamic instructor-led teaching, self-directed learning, and research. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
    Security Awareness: Use of confidential university data within this product is discouraged at this time.
    Acceptable Data Classification Levels: General, Internal
  4. Google Gemini -
    Purpose: Multimodal LLM integrated with Google products and services. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
    Security Awareness: Use of confidential university data within this product is discouraged at this time.
    Acceptable Data Classification Levels: General, Internal

  5. Anthropic Claude -
    Purpose: Advanced AI language model focused on safe and reliable conversational AI. It can be useful for ideation, brainstorming, and first drafts as appropriate. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
    Security Awareness: Use of confidential university data within this product is discouraged at this time.
    Acceptable Data Classification Levels: General, Internal

Common AI-Powered Apps

  1. Otter.ai -
    Purpose: AI-powered meeting transcription and note-taking. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
    Security Awareness: Be mindful of recording consent laws.
    Acceptable Data Classification Levels: General, Internal

  2. Genio (formerly Glean) for Education AI -
    Purpose: AI to transcribe audio recordings and generate outlines. It should be used only in alignment with fair use, privacy, confidentiality, intellectual property, and academic integrity considerations. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
    Security Awareness: Be mindful of ethical AI usage.
    Acceptable Data Classification Levels: General, Internal

  3. Auris AI -
    Purpose: AI-generated transcripts and subtitles. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
    Security Awareness: Do not share personal information.
    Acceptable Data Classification Levels: General

  4. Captions AI -
    Purpose: Generate and edit talking videos with AI. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used.
    Security Awareness: Do not share personal information.
    Acceptable Data Classification Levels: General

  5. VoiceGain AI -
    Purpose: AI for speech-to-text transcription and voice recognition. This tool can make errors, and the user is responsible for ensuring that the transcript is accurate if used.
    Security Awareness: Do not share personal information.
    Acceptable Data Classification Levels: General

  6. Wordly AI -
    Purpose: Real-time translation and captioning services for meetings. This tool can make errors, and the user is responsible for ensuring that the output is accurate if used. Errors can be insulting or offensive, so the user should review the generated output before sharing it.
    Security Awareness: Be cautious with confidential conversations.
    Acceptable Data Classification Levels: General