What are the limitations of AI tools?

Current AI tools are limited in their ability to make meaning of the real world. By design, these systems do not understand the words they produce. While AI tools give students opportunities to build critical analysis and evaluation skills, they are not a substitute for refined writing, critical thinking, and evaluation skills. By studying and critiquing outputs from AI tools, students will gain better insight into their power and limitations, much as they must critically evaluate other media such as research papers and websites.

Some key limitations are as follows:

  • Whilst output can appear plausible and well written, AI tools frequently get things wrong and cannot be relied upon for factual accuracy. 
  • They perform better on topics that are widely written about, and less well in niche or specialist areas.
  • Unlike a normal internet search, some AI tools cannot access current websites and may therefore produce out-of-date responses.
  • Some AI tools do not consistently provide correct references and sometimes fabricate them (NB: some AI tools, e.g. Perplexity, can provide genuine references).
  • They can reflect and perpetuate stereotypes, biases, and Western perspectives, owing to the type and provenance of the data they have been trained on, compounding the problem noted above about reliance on mainstream perspectives.
  • Their use has a high energy consumption; for example, one ChatGPT query uses more energy than one Google search.
  • The process by which these tools are built can present ethical issues. For example, some developers have outsourced data labelling to low-wage workers operating in poor conditions.

The creators of ChatGPT have published guidance for educators and students that covers capabilities and limitations in more detail than is provided here.