Generative AI Guidance for Researchers

For all of the guidance that follows, there is one overriding principle: any use of generative AI tools must be accompanied by critical analysis and oversight on the part of the user.

This page provides general guidance for researchers at the University of Glasgow on the use of Generative AI in their work and research. There are existing policies in place for undergraduate and postgraduate taught students, and this guidance draws on these. The principles around the use of Generative AI apply to all staff, students and researchers at the University.

The University’s position on the inappropriate use of AI in writing is clear: work that is not your own effort (in other words, submitted work that is the product of AI) does not meet the requirements of assessment or research integrity.

Please note that AI tools, used with careful consideration, have tremendous potential to enhance research and learning, and none of this guidance should be interpreted as wishing to impede this. Rather, this guidance seeks to support the appropriate and informed use of AI tools and to uphold academic and research integrity. Where AI tools form part of your research design or methods, part of the toolkit within your discipline, or are themselves a subject of your research, your use of them as a researcher should be covered by the relevant ethical approval and data protection processes, and should therefore meet the key principle above.

Definitions/Glossary of useful terms

AI: Artificial Intelligence 

Generative AI: a type of AI technology with the ability to generate new content. You, the user, enter a prompt (text, images, designs, music, etc.) and the technology returns a response. This is often shortened to GenAI.

NB: Generative AI cannot generate ‘novel’ content, but it can generate ‘new’ content. In practice, this means it can find new ways to combine existing content into something new, but it cannot have a truly novel idea of its own.

Some examples of Generative AI tools are: ChatGPT, Google Bard, DALL-E, Copilot.

GenAI tools work on the basis of probabilities and predictions, and do not actually understand you, the user.

LLM: Large Language Model. Most generative AI tools are built on LLMs: statistical models trained on very large collections of text. An LLM does not store documents like a database; instead, it encodes patterns of language as numerical parameters, which the GenAI tool uses to predict likely sequences of words.

Neural Network: a computational model inspired by the structure of the human brain and nervous system.

GPT: generative pre-trained transformer. This is a type of neural network architecture that is ‘pre-trained’ on very large amounts of text and is then able to ‘generate’ new content; GPT models underpin tools such as ChatGPT.

Algorithm: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
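
These definitions can be made concrete with a small illustration. The following Python sketch is purely illustrative: the words and probability values are invented for this example, and no real LLM works from a hand-written table, but the underlying principle of predicting a likely next word is the same.

    # Toy illustration of next-word prediction. The probabilities below are
    # invented for this example; a real LLM learns its probabilities from
    # enormous amounts of training text rather than a hand-written table.
    next_word_probs = {
        "mat": 0.55,
        "floor": 0.25,
        "sofa": 0.15,
        "moon": 0.05,
    }

    # The "model" does not know where cats sit; it simply reports which
    # continuation of "the cat sat on the" it judges most probable.
    most_likely = max(next_word_probs, key=next_word_probs.get)
    print("the cat sat on the", most_likely)  # prints: the cat sat on the mat

This is why such tools can sound fluent while having no understanding of what they produce: the fluency comes from the statistics of the training text, not from meaning.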

Research and Academic Integrity

We strongly recommend that you treat AI with caution: AI tools do not know the meaning of what they produce, cannot be critical and evaluative, and are prone to biases, inaccuracies and mistakes.

The University’s position on plagiarism in the University Regulations is: 'the submission or presentation of work, in any form, which is not one's own, without acknowledgement of the sources' (Plagiarism Statement - see section 32.2)

We discuss below some of the ways in which AI may be beneficial in the research and writing process, but it is your responsibility to ensure that all work submitted is a true reflection of your effort. That is to say, all your submitted work must be the product of your own creation, your own critical evaluation, and your own experience. We expect your work to clearly and transparently acknowledge any sources – including AI – that have contributed to the work in any way.

The fundamental rule with AI and academic/research integrity is this:

If you make use of AI at any point in your research or writing process, no matter at what stage, you must fully and transparently acknowledge the use of that source/platform as you would any other piece of evidence/material in your submission.

We will cover the citation and acknowledgement of AI in a separate section below.


Generative AI tools: Risks and Limitations

Generative AI can and frequently does get things wrong. If you choose to use these tools, the onus is on you, the user, to ensure that the content they produce is accurate and true, that you are using the tools ethically, and that the privacy and integrity of your work are protected.

Key problems include: 

  • AI gets things wrong.
    • AI will produce incorrect (and sometimes nonsensical) outputs.
    • AI does not know right/correct from wrong and will present all outputs as if they are equally valid and true.
    • The 'garbage in/garbage out' principle holds here, as the quality and clarity of an input into an AI tool (e.g. how you frame a question or prompt) is linked to the quality, clarity, and usefulness of the output.
  • AI is biased.
    • AI tools reflect and amplify biases and stereotypes. 
    • AI is not rational and does not understand the complexities of the information available on the internet. It therefore cannot recognise inaccurate or offensive statements, or assess validity or accuracy.
  • AI makes things up.
    • Some AI tools will make up false references to texts that do not exist; these fabrications are known as 'hallucinations'. While such references often look authentic, a quick search will reveal that the supposed reference or data does not exist.
    • AI tools are simply predicting the most likely next word each time – think of it like writing a sentence on your phone by repeatedly tapping the next predictive-text suggestion (a toy sketch of this appears after this list).
  • AI tools are unreliable.
    • AI tools cannot access all the necessary data and information and are, in most cases, not 'up to date' or 'trained' on the most recent information available. 
    • AI tools may provide different answers to the same or similar inputs.
    • AI tools cannot access anything behind a paywall. For your academic work, the University Library subscribes to a huge collection of academic texts. AI cannot access these paid resources, and so it does not have the ability to reproduce any information from those texts.
    • Even if an AI tool had a broad range of input data and were reasonably up to date, it would be impossible to fully assess the completeness of any output or to ensure that the most relevant or reliable sources were used.
  • Privacy concerns
    • Many tools incorporate user inputs into the further 'training' of their models, and researchers should therefore exercise great care before putting their data or work into these tools.
    • Exposing your data, ideas, or research – or those of others, without permission – to an AI tool may, in effect, put them into the public domain, compromise confidentiality, or allow the work to be used without attribution, accountability, context, or completeness. This is a risk in making any research publicly available, but the lack of attribution or association with the creator or owner of the work increases the risk of misuse or misunderstanding, and potentially complicates intellectual property ownership.
    • Whether using a free tool or under a subscription or licence, please review the terms, conditions, and privacy statements very carefully.  
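
As a small illustration of why tools 'make things up' and can give different answers to the same input, the sketch below (in the same invented toy setting as the example in the glossary section) shows the effect of sampling: real tools usually pick from the probabilities at random rather than always taking the single most likely word.

    # Toy sketch of sampled generation. All words and probabilities are
    # invented for illustration; real tools sample in a similar spirit, but
    # from probabilities learned over a vast vocabulary.
    import random

    next_word_probs = {"mat": 0.55, "floor": 0.25, "sofa": 0.15, "moon": 0.05}

    def sample_next_word(probs):
        """Pick the next word at random, weighted by its probability."""
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights, k=1)[0]

    # Three runs of the same "prompt" can give three different continuations,
    # and occasionally an implausible one ("the cat sat on the moon") - the
    # toy equivalent of a hallucination.
    for _ in range(3):
        print("the cat sat on the", sample_next_word(next_word_probs))

Sampling is what makes outputs varied and fluent, but it is also why a confident-sounding answer is not evidence of accuracy.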

Using AI tools to check your writing

We would strongly discourage the use of AI tools to check your writing. As seen above, generative AI tools can frequently get things wrong. Furthermore, the University strongly discourages the use of proofreaders (which includes essay writing companies) as it is difficult to discern where the boundary is between 'proofreading' and 'writing'. It is important to note that if you use, or have used, generative AI tools to check your writing, this can fall under the banner of academic misconduct and plagiarism.

The Avoiding Academic Misconduct policy makes it clear in the first point that you must not use AI tools to prepare your work:

‘Make sure all work you submit – essays, lab reports, presentations, exam answers, etc – is entirely your own work. You must not copy, translate, or lightly edit, someone else’s work, you must not have any other person, service or AI tool prepare your work, and you must not prepare your work with another person (except in specific assignments where it is clearly marked as a group effort).’

This is clarified by a point later in the policy, which states that ‘Getting someone else to do the work for you, whether this is a friend, family member or commercial service, including services offering "proof-reading for a fee"’ will be considered misconduct. An important distinction to be made here is that what is not permitted is someone – or something – else contributing substantively to what you present for assessment. Some additional context on this clarification for PGRs (given the scope and scale of the assessed work) is found in Section 10 of the PGR Code of Practice:

  • Proof-reading one’s own work is an important writing skill and students are therefore encouraged to do this. However, there may be times that students would consider engaging the services of a proofreader. While the use of a proofreader is broadly permitted, students and supervisors should be clear about what a proofreader can and cannot do.
  • Students have sole responsibility for the work they submit and therefore should review very carefully any changes suggested by a proofreader. 
  • Proofreaders may assist with the identification of typographical, spelling and punctuation errors; formatting and layout errors such as page numbering or line spacing; and/or grammatical and syntactical errors. Proofreaders may not add, edit, re-write, rearrange, or restructure content; alter the content or meaning of the work; undertake fact-checking or data checking or correction; undertake translation of any work into English; and/or edit content so as to comply with word limits.

We do not recommend using generative AI tools to proofread your work: as set out above, it is easy for such a tool to cross the line from identifying errors into rewriting content. You also need to be careful to protect your work and your ideas before entering anything into these tools.

If you need someone to proofread your work for written English (and not content), the University has a peer proofreading service available to all students. Please note that, because the service is open to all students, PGRs wishing to engage support for a full thesis need to allow significant time for this.

For further information about what’s available to you if you are an ESL student, get in touch with the English for Academic Study team. 

PGRs wishing to contract with an external proofreader should ensure that they are engaging an experienced professional who understands the boundaries of what is permitted. Some professional guidance and a contacts directory are available through the Chartered Institute of Editing and Proofreading.

How to cite/acknowledge usage of generative AI tools in your work

Citation

It is important to understand that AI tools cannot be the author of a work. The tool cannot produce original ideas or take any responsibility for the outputs.

However, the current consensus on how to reference any use of AI is to treat it as if it were private correspondence.

The reasons for this are:

  • Like private correspondence, the prompts you enter into an AI tool and the responses you receive are unique to you.
  • Like private correspondence, AI is a problematic source because it cannot be easily replicated and verified.
  • Like private correspondence, each prompt and response session with AI is time-bound, specific and unique to that moment in time.

The specific rules for many referencing styles are still to be finalised, but the general rules are:

  • Name the AI platform used (e.g., OpenAI ChatGPT or Google Bard) 
  • Include details on the date of use of AI
  • Ideally, include details on the prompts input (and, if possible, the responses received)
  • Include details of the person who input the prompts
  • Keep records of the responses output by AI, even if you do not include these in the submission itself
  • Be clear, open and transparent in your use of AI
  • Do not present any of the responses from AI as your own. This constitutes academic misconduct, which could lead to disciplinary measures being taken against you.

An example: citing AI in Harvard

The information required for Harvard is:

  • Name of AI tool (e.g., OpenAI ChatGPT or Google Bard)
  • Date (day, month and year when you entered the prompt(s) and received the response(s))
  • Receiver of communication (the person who entered and received the prompt(s) and response(s) - this would be your name if you used AI)

Your in-text citation would look like this:

'The use of AI in academic writing presents challenges for how to correctly and accurately cite (OpenAI ChatGPT, 2024)'

Your corresponding reference list would look like this:

OpenAI ChatGPT. 2024. ChatGPT Response to Caitlin Diver, 9 January 2024.

For styles other than Harvard referencing, look for 'personal correspondence' as a source type in the relevant guide from the UofG Library list of referencing styles. Please note that these guides may change as the academic consensus evolves around citing this new type of source, and as new AI technologies continue to emerge. It would be advisable to check for updates on a regular basis.

Acknowledgement, rather than citation

It may be more appropriate to acknowledge the use of AI tools rather than to cite them, e.g. depending on the guidance for submitting your assessment or the guidance provided by your publisher.

A basic acknowledgement should include:

  • Name and version of the generative AI system used, e.g. ChatGPT-3.5
  • Company that made the AI system, e.g. OpenAI
  • URL of the AI system
  • Brief description of how the tool was used
  • Date the content/output was generated

For example: 

I acknowledge the use of ChatGPT 3.5 (OpenAI, https://chat.openai.com) as a tool to proofread the final version of this work.

You may also wish, depending on the circumstances, to include the prompts that were used, copies of the outputs that were generated, or a note on how you used or edited the generated content.


Research specific tools

Some tools describe and market themselves specifically as ‘AI Research Assistants’; a non-exhaustive list includes Elicit, Scite and Scholarcy. These tools usually emphasise their time-saving benefits for researchers. If you choose to use them, you must understand their limitations and exercise caution. While these tools can carry out searches and superficially summarise their findings, they cannot evaluate studies for you: you must still read each study carefully and come to your own conclusions about its merits. You should also take care to verify search results, check which databases the tools draw on, and take the time to read each tool’s own guidelines on proper usage.

Quick guidance for supervisors

Do…

  • …ensure that PGRs know they can use supervisory meetings to ask questions about AI tools they encounter and their reliability (and suitability) for research
  • …emphasise that PGRs should question the validity and accuracy of any output, data, results, and information received from AI tools. Make clear that these tools cannot replace their own expertise and insight
  • …remind PGRs that all submitted drafts should be the result of their own thought processes, workings, analysis, and critique. Ensure that they understand what skills they are expected to demonstrate for assessment
  • …keep up to date with the institution’s guidelines and information around academic integrity and AI: this advice will be updated as appropriate
  • …be aware of how research AI tools are advertised: they will often promise time-management and efficiency benefits. Open discussion of expectations around time management and work rate should begin early in the supervisory relationship to avoid PGRs resorting to these tools

Don’t…

  • …immediately close down any discussion of AI. Use of AI might point to specific difficulties (analysing articles, structuring work, writing in a second language) which could then be appropriately discussed and addressed in supervisory meetings
  • …assume that PGRs fully understand the problems in the output, data, results and information received from these tools. A PGR who is having difficulty in a specific area may not clearly understand why the output from AI tools is poor or unreliable
  • …automatically assume that PGRs understand that the way in which they carry out their research is as valuable as the eventual output of the project
  • …forget to remind PGRs to keep up to date with both institutional guidelines and journal regulations, which might differ in subtle but important ways
  • …forget to remind PGRs that they should not upload any of their work – data, results, discussion, reports, etc. – into any AI tool. AI tools should not be used to conduct research or investigations into a topic

Updates to Guidance

15 April 2024

  • addition of 'resources' section with links
  • addition of this 'updates' section
  • updated 'intro' section to clarify that this guidance should not impede research which specifically encompasses AI as a subject, tool, or method
  • updated section on 'Using AI tools to check your writing' to clarify the use of proof-readers for PGRs
  • updated section on 'Generative AI tools and their limitations' to 'Generative AI tools: Risks and Limitations' and updated text to add clarity about possible risks 
  • updated section on citation to include reference to acknowledgement of use