Using Generative Artificial Intelligence: What is generative AI?

Generative AI creates new content—such as text, images, or music—by learning patterns from vast training datasets. Prime examples are Large Language Models (LLMs) such as ChatGPT, which generate human-like text based on input prompts. These AI chatbots can:

  1. Understand and follow a conversation's context
  2. Remember information from previous messages
  3. Improve their answers based on back-and-forth communication

This means you can have ongoing conversations with the AI, asking follow-up questions or requesting clarifications without having to start over each time.
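Behind the scenes, that "memory" usually works by re-sending the entire conversation history with every new message. The short Python sketch below illustrates this pattern using the official openai package; the model name, prompts and variable names are illustrative assumptions rather than details from this guide.

  # Minimal sketch: the chatbot "remembers" because the client re-sends the
  # whole conversation with every request (illustrative assumptions throughout).
  from openai import OpenAI

  client = OpenAI()  # reads the OPENAI_API_KEY environment variable

  history = [
      {"role": "system", "content": "You are a helpful study assistant."},
      {"role": "user", "content": "Explain photosynthesis in one sentence."},
  ]

  reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
  answer = reply.choices[0].message.content
  print(answer)

  # The follow-up question only makes sense because the earlier turns are
  # included in the new request.
  history.append({"role": "assistant", "content": answer})
  history.append({"role": "user", "content": "Now explain it to a ten-year-old."})

  reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
  print(reply.choices[0].message.content)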


There are many inherent limitations and risks posed by generative AI tools, especially by language models like ChatGPT. It is important to consider these risks and limitations before and whilst using them:

Limitations

Large Language Models (LLMs) have no concept of truth and can only give the answers that are most plausible given the data they are trained on. As such, they have inherent limitations:

  • Make stuff up (hallucinations)
  • Answers may not always be accurate
  • Biased output (reproducing the biases of their training data and of their developers)
  • Lack of currency (ChatGPT 3.5's training data stops at 2021)
  • Output can sound formulaic and generic
  • Only able to cope with a limited amount of text (see the sketch below)

Furthermore, there is a real danger that, fooled by the appearance of reliability, people will become over-reliant on these tools.
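To make the last limitation in the list above concrete: these models read text as "tokens", and each model can only attend to a fixed number of them (its context window). The short Python sketch below uses the tiktoken tokeniser to count tokens against a window size; the 4,096-token limit and the sample text are illustrative assumptions rather than figures from this guide.

  # Minimal sketch: count how many tokens a piece of text uses and check it
  # against an assumed context window (the 4,096-token figure is illustrative).
  import tiktoken

  encoding = tiktoken.get_encoding("cl100k_base")  # tokeniser used by ChatGPT-era models

  document = "Paste the text you want the chatbot to work with here. " * 200
  tokens = encoding.encode(document)

  CONTEXT_WINDOW = 4096  # assumed limit for illustration only
  print(f"{len(tokens)} tokens - fits in the window: {len(tokens) <= CONTEXT_WINDOW}")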

Ethical / Legal risks

These are some of the ethical and legal implications of generative AI tools:

  • Malicious uses (cheating, misinformation campaigns, scamming...)
  • Intellectual Property and copyright issues
  • Privacy - what happens to the data that AI tools collect about us?
  • Perpetuation of bias
  • Lack of transparency - what happens to the data we input into these tools, and what is included in their training data?
  • Lack of AI-specific legislation and regulatory standards
  • Environmental impact - the carbon emissions generated by training AI tools
  • Risks created by the business model - a few very powerful companies competing against each other may not be incentivised to ensure the safety of these tools.

Social / Cultural / Economic risks

Generative AI will have social, cultural and economic implications, including:

  • Future of work - job losses?
  • Impact on creative work - loss of authenticity and originality?
  • Widening of the digital divide? (if tools have to be paid for)

In the field of education, generative AI offers numerous potential benefits, provided it is used critically and responsibly. It's crucial to thoroughly verify all AI outputs rather than accepting them at face value. With this cautious approach, AI can help:

  • Suggest an outline for a presentation or assignment
  • Provide feedback on writing
  • Point out grammatical mistakes in a piece of text you have written
  • Help you understand a difficult concept or text by asking the AI questions about it or asking it to summarise it in a simple way
  • Translate materials into a different language
  • Draft emails or letters
  • Brainstorm ideas
  • Act as a practice partner when preparing for a seminar, asking you questions and letting you try out arguments