
Guiding Principles for the Responsible Use of Generative AI

Queen’s University is committed to empowering responsible use of Generative AI (GenAI) within our community. To guide students, staff, and faculty, we’ve established five key principles for informed GenAI adoption.

Each principle includes self-assessment questions to help users evaluate whether their GenAI use is Prohibited, Permitted, Encouraged, or Required. These questions help ensure that the benefits outweigh the risks and that use aligns with responsible practices. Queen’s core directive on AI use is that a human being remains in charge of what an AI does, and that AI augments human achievement: even when it takes over a repetitive or tedious task, it does so to free the overseeing human to focus their attention on other priority items.

In some cases, additional AI or Algorithmic Impact Assessments by experts may be necessary to ensure safe and effective GenAI use.

GenAI systems introduce opportunities for efficiency and may change how individuals within the University community work, but their use does not change the meaningful impact of work. By integrating these advanced technologies thoughtfully, we can enhance creativity, foster innovation, and streamline administrative tasks, allowing more time for critical thinking and personal interactions. Community members are responsible and accountable for using GenAI in ways that support human intelligence and capabilities, rather than replace them.

Does the use of AI for this purpose:

  1. Unlock increased productivity within the University community?
  2. Create content or opportunities that enhance the outcomes of human activities?
  3. Benevolently make or modify decisions that would otherwise have been made by a human?
  4. Efficiently handle repetitive or tedious tasks, thus freeing up the people who oversee them to tackle other priority items?

The use of GenAI technologies may increase data security and privacy risks, and we must collectively ensure that the benefits of AI are realized without compromising the integrity and confidentiality of our data. Community members are responsible and accountable for understanding how data is used, stored, retained, and disclosed by the technologies they use, and for ensuring that safeguards protect information and mitigate security and privacy risks throughout the lifetime of the system.

  1. Is information that is classified as “Confidential” or “Internal” being processed with the GenAI tool? (refer to Queen’s Data Classification Standard)
  2. Does the GenAI tool clearly define:
    1. The ownership rights of the data and information submitted?
    2. The ownership rights of the data and information created?
    3. How the data and information, and the systems used to process that data and information, will be protected?

The results and outputs produced by GenAI are prone to bias, error, and falsehood. It is essential to critically evaluate content created by GenAI and cross-check it against reliable sources. Community members are responsible and accountable for ensuring the quality, accuracy, and appropriateness of the outputs and decisions of GenAI systems, and for ensuring that they are free of gender, cultural, and other biases. The person using an AI tool is responsible for its outputs if they are used.

Will information created by the GenAI technology be reviewed:

  1. For quality and accuracy to ensure that no misleading or false information (hallucinations) is included?
  2. To ensure that it is relevant and appropriate for use?
  3. To ensure that gender, cultural, systemic, and other biases are identified and addressed?

Equally important is the quality of the input data used to generate AI content. Community members are responsible and accountable for ensuring the quality, accuracy, and appropriateness of the data being used to produce GenAI content.

Will information provided to the GenAI technology be reviewed to ensure:

  1. That no misleading or false information is included?
  2. That it is relevant and appropriate for use?
  3. That it is free of gender-based, cultural, and other forms of discrimination?

People will have different reactions to the use of and interaction with GenAI tools, and should be made aware when they are being used. By fostering transparency and inclusivity, we can ensure that all voices are heard and concerns are addressed. A collaborative approach helps build trust and promotes a shared understanding of the benefits and limitations of GenAI technologies. Community members are responsible for engaging in open conversations about how they use GenAI to create content or make decisions that may impact other members or groups within the community.

  1. In this context, is there a requirement to cite or disclose the use of GenAI for this purpose?
  2. Do the outcomes or decisions made by GenAI for this purpose have impacts (positive or negative) for community members?
  3. Are community members affected by outcomes or decisions made using GenAI for this purpose, and have they been informed of its use?
  4. Has a mechanism been established for community members to provide feedback on the outcomes and decisions made by GenAI?

As emphasized in the description of the previously listed guiding principles, accountability for the use of GenAI systems rests with community members, namely the AI users. By fostering a culture of responsible use, we can harness the potential of GenAI while upholding our commitment to research impact, academic integrity, and operational excellence. Continuous education and training on responsible practices are essential to empower community members to make informed decisions and contribute positively to our collective goals. Community members are responsible and accountable for ensuring that their use of GenAI is ethical, appropriate, necessary, and aligns with the University vision and values.

  1. Does the use of GenAI for this purpose violate University policy?
  2. Has the use of GenAI been authorized or approved for this purpose?
  3. Have guidelines and/or instructions been provided for the use of GenAI for this purpose?
  4. Is the use of GenAI for this purpose integrated into or in support of an existing University operational or research administrative process, or academic assignment?
  5. Is training required (for myself and others) prior to engaging with GenAI for this purpose?

Further guidance will need to be developed for our community members by providing a guiding answer in support of each self-assessment question. For example:

Guiding Principle: Safeguarding Data

SELF-ASSESSMENT QUESTION

  1. Is information that is classified as “Confidential” or “Internal” being processed with the GenAI tool?

GUIDING ANSWER

If the answer is “Yes, confidential and/or internal information is processed”, the GenAI use case is prohibited unless explicitly authorized by an appropriate university authority, based on the results of a thorough analysis conducted through an AI or Algorithmic Impact Assessment (AIA)*.

If the answer is “Unsure”, community members are required to review and ensure compliance with the relevant data classification, handling, and sharing guidelines, or to request support. For more information on how to classify this data, please review the Queen’s Data Classification Standard.

*Note that it may be permitted to use AI tools that process confidential information, provided the appropriate assessments and safeguards are observed.