What is the University of Glasgow's position on AI?
The University of Glasgow believes artificial intelligence (AI) tools are potentially transformative as well as disruptive. They increasingly feature in academic and professional workplaces.
You will graduate into an AI-augmented world. The university therefore has a responsibility to prepare you for this world, providing space to experiment with, and understand the potential of, AI in an ethical way.
Consequently, rather than seek to prohibit your use of these tools, we want to support you in learning how to use them effectively, ethically, critically, and transparently.
Quick Guidance for Students
The term 'AI tools' currently refers to a wide variety of tools and resources (not just ChatGPT), ranging from those unsuitable for any academic work to those that can be helpful when used appropriately and with academic integrity.
Using machine learning, AI tools can produce human-like text, images, and information, and can respond to specific queries. Tools that can replicate and create the complex responses and behaviours of humans are referred to as 'generative AI'. There are also many computational aids that are used for similar purposes.
It is important to note that using any form of AI or other computational aid in your university coursework, study, exams, or research without acknowledging that input constitutes academic misconduct.
You should not:
- assume that all AI tools are equally effective, equally responsible, equally resourceful, or equally capable of being used with academic integrity. AI tools reproduce biased results and do not provide contextualised, evaluated or critical assessments of information.
- pay for AI services or tools; your courses will never require you to pay for external software.
- use AI tools as a replacement for your own understanding, analysis, or summary of a topic.
- rely on AI to produce references, resources, materials or any other forms of content. AI is liable to produce 'hallucinations', making up false information and references.
- upload full copies of your work, essay questions, reports, results, and discussion into any AI tool. AI tools should not be used to conduct research or investigation into a topic. If your lecturers want you to use AI for specific activities, they will provide guidance on what is allowed or expected, and what is not.
You should:
- acknowledge the use of any form of AI in your coursework, for all submissions.
- question the validity and accuracy of any output, data, results, and information you receive from AI tools.
- ensure that all your submissions are the result of your own thought, workings, analysis, and critique.
- keep up to date with your course guidelines and information around academic integrity and AI. Pay particular attention to your marking criteria and intended learning outcomes (ILOs); it is your responsibility to demonstrate how you meet these.
- be aware of how research AI tools are advertised: they will often promise time-management and efficiency benefits, but using them in practice may break academic integrity rules.
Continue reading below for more detailed information on the University of Glasgow's policies about what is allowable, what is forbidden, and how we advise that you use AI in your studies.
How should I reference Artificial Intelligence?
The current consensus on how to reference any use of AI is to treat it as if it were private correspondence.
The reasons for this are:
- Like private correspondence, the prompts and responses you enter into and receive from AI are unique to you
- Like private correspondence, AI is a problematic source because its outputs cannot be easily replicated or verified
- Like private correspondence, each prompt and response session with AI is time-bound, specific and unique to that moment in time.
The specific rules for many referencing styles are still to be finalised, but the general rules are:
- Name the AI platform used (e.g., OpenAI ChatGPT or Google Bard)
- Include details on the date of use of AI
- Ideally, include details on the prompts input (and, if possible, the responses received)
- Include details of the person who input the prompts
- Keep records of the responses output by AI, even if you do not include these in the submission itself
- Be clear, open and transparent in your use of AI
- Do not present any of the responses from AI as your own writing, thought or work. This constitutes academic misconduct, which could lead to disciplinary measures being taken against you.
An example: citing AI in Harvard
The information required for Harvard is:
- Name of AI (e.g., OpenAI ChatGPT or Google Bard)
- Date (day, month and year of when you entered the prompt(s) and received the response(s))
- Receiver of communication (the person who entered and received the prompt(s) and response(s) - this would be your name if you used AI)
Your in-text citation would look like this:
'The use of AI in academic writing presents challenges for how to correctly and accurately cite (OpenAI ChatGPT, 2023)'
Your corresponding reference list would look like this:
OpenAI ChatGPT. 2023. ChatGPT Response to Andrew Struan, 14 September.
For styles other than Harvard referencing, look for 'personal correspondence' as a source type in the relevant guide from the UofG Library list of referencing styles. These guides may change as the academic consensus evolves around citing this new type of generative source, and as new AI technologies continue to emerge.
Introduction to Artificial Intelligence (AI)
Artificial Intelligence (AI) is the work of computers to mimic, replicate and create the complex responses and behaviours of humans. Using machine learning, AI can produce human-like text, images and information, and it can respond to specific queries.
The availability of a variety of AI platforms is rapidly changing. ChatGPT, Google Bard, CoPilot and Notion are some of the currently popular tools, but as the technology grows there will be increased competition amongst, and change within, sector-leading platforms. The platforms that produce text are more accurately known as Large Language Models (LLMs), but 'AI' is more commonly used to describe these tools.
For University-level study and research, AI and its many uses pose a challenge to how we prove our learning for assessment in a way that has integrity and merit.
The key element of AI is its ability to mimic human-like responses to queries, questions and interrogations. The responses are, however, only replications of human-like output. AI does not have the ability to critique, evaluate or prioritise information.
More significantly, AI does not have the ability to fact-check; as a result, AI can often produce results that are factually incorrect. You should never rely on AI to produce accurate, truthful, critical or reflective information, results or analysis.
This page is presented as a way of ensuring you engage with AI in an ethical, transparent manner. This guidance does not prohibit the use of AI in your research and writing process; instead, it outlines how the University of Glasgow believes you can make use of the strengths of AI while maintaining good academic practice, rigour and integrity in your submitted work.
The University of Glasgow’s position on the inappropriate use of AI in research and writing is clear. University policy is that all students are marked and assessed 'in recognition of a student's personal achievement. All work submitted by students for assessment is accepted on the understanding that it is the student's own effort'.
Work that is therefore not your own effort – in other words, work submitted that is the result of the work of AI – does not meet this crucial requirement for our assessments.
The University Regulations on plagiarism define it as:
'the submission or presentation of work, in any form, which is not one's own, without acknowledgement of the sources' (Plagiarism Statement - see section 32.2)
As with all pieces of assessment submitted, then, we expect that your work is a reflection of your own effort. We discuss below some of the ways in which AI can be of benefit to the research and writing process, but it is your responsibility to ensure that the work submitted is a true reflection of your effort.
That is to say, all your submitted work must be of your own creation, your own critical evaluation process, and your own experience. We expect your submitted work to clearly and transparently acknowledge any sources – including AI – that have helped you reach your conclusion or that have added to the work in any way.
The fundamental rule with AI and academic integrity is this:
If you make use of AI at any point in your research or writing process, no matter at what stage, you must acknowledge the use of that source/platform as you would any other piece of evidence/material in your submission.
We strongly recommend, however, that you treat AI with caution: AI tools do not know the meaning of what they produce, cannot be critical and evaluative, and are prone to biases, inaccuracies and mistakes.
Using AI for study, research and writing – without breaking our academic integrity rules
If we use any form of AI, it must be with the understanding that these tools do not know what things mean. This is an important first step in engaging with AI in a successful way.
The difference between human intelligence and AI is, put simply, that we understand meanings, contexts, what things say and do, and the connections between different pieces of information. AI cannot do this.
AI platforms can, however, help us with some of the steps of our work process. So, what can AI do?
- AI can provide a quick summary of information.
  - AI can provide details on large volumes of information very quickly, but there are dangers here around inaccuracies and made-up information.
  - If using AI in this way, think of it as the first step. The second step is to check for accuracy, critique the outputs, and ensure you never rely simply on the original outputs.
- AI can identify key points in texts.
  - AI can produce output of key points in a text by scanning the language therein. This can be a useful starting point for reading.
  - If using AI in this way, again think of it as a first step. AI does not understand what the text means, and it therefore cannot critique or evaluate any of its outputs. It uses language models to predict outputs based on the highest probability of the next word; when identifying key points in texts, AI uses this probability method to display key points based on the words used in the text.
- AI can help you refine your wording.
  - Think of AI here as a kind of conversation partner: you can ask it to rework some writing to enhance clarity, or to refine the output to make the text simpler, more complex, longer, shorter, and so on.
  - Importantly, however, you must remember that AI works by predicting the most likely next word. This is all it is doing: it does not have the capacity to understand why things might be right or wrong, stronger or weaker.
Separately from AI, there are plenty of other digital tools that can help with your academic work. These resources and tools do not mimic human language in the way AI does, but they are powerful and useful tools. EndNote, for example, helps you catalogue your references and resources, while software like ResearchRabbit, Google Scholar, etc., can help with searching for resources.
Use of these digital tools is a part of modern-day study. If you have questions about using digital tools for study or research, you can speak with SLD by making a one-to-one appointment.
AI: Important Limitations, Important Problems
Current AI tools are limited in their capacity to create meaning that is true, accurate, critical and responsive. You must judge any outputs from AI tools with scepticism, suspicion and a critical eye.
Key problems with AI include:
- AI gets things wrong.
  - AI will produce incorrect (and sometimes nonsensical) outputs.
  - AI does not know correct from incorrect, and will present all outputs as if they were equally valid and true.
- AI is biased.
  - AI tools reflect, prioritise and amplify biases and stereotypes.
  - AI is not rational and does not understand the complexities of the information available on the internet. It therefore cannot judge inaccurate or offensive statements.
- AI makes things up.
  - In what are called 'hallucinations', some AI tools will make up false references to texts that do not exist. While these references often look real, a quick search will reveal that the supposed reference or data does not exist.
  - This is because AI tools are simply predicting the most likely next word each time – think of it like asking your phone to write a sentence by clicking the next autocorrect suggestion each time.
- AI tools cannot access all the necessary data and information.
  - AI tools cannot access anything behind a paywall. For your academic work, the University Library subscribes to a massive collection of academic texts; AI cannot access these paid resources, and so it cannot reproduce any information from those texts.
  - Many AI tools also do not have access to the current state of the internet, and therefore will not be trained on the most recent information.