Does AI Act rise to Europe’s challenge in regulating GPT-4?
By Patrick Glauner  |  Apr 11, 2023
GPT-4 is promising, but further scrutiny is needed. AI Prof Patrick Glauner presents GPT-4's key features, puts them into context, and discusses the regulatory challenges Europe must tackle in dealing with generative AI models without choking innovation or ceding ground to the US and China.


Transformer models at a glance

Transformers are state-of-the-art natural language processing (NLP) neural network models. They apply the attention mechanism,1 which has proven particularly good at learning correlations between tokens (parts of words) in text. The attention mechanism was first published in 2017 and has since been further improved in various subsequent publications and applications. In mid-2020, OpenAI published the Generative Pre-trained Transformer 3 (GPT-3).
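To make the attention mechanism described above more concrete, here is a minimal sketch of scaled dot-product attention, the core operation behind transformers, using toy dimensions in plain NumPy (this is an illustrative simplification, not a production implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core of the transformer attention mechanism.

    Q, K, V: arrays of shape (seq_len, d_k). Each output row is a
    weighted mix of the rows of V, where the weights reflect how
    strongly each query token attends to each key token.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token similarities
    # Softmax over the key dimension turns scores into weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings,
# attending to themselves (self-attention)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one mixed vector per input token
```

Real transformers stack many such attention operations (with learned projections for Q, K, and V) and run them over billions of parameters, but the weighted-mixing idea is the same.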

The term 'generative pre-trained transformer' may be difficult to understand for those unfamiliar with it, so here is a brief explanation: A GPT is a transformer that generates content (mostly text) and that was pre-trained on a large amount of text data. GPT-3 is a powerful transformer-based NLP artificial intelligence (AI) that offers various functionalities, including text classification, question answering, and text generation. GPT-3 is available in different sizes, the largest of which has around 175 billion parameters.
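The text-generation functionality mentioned above works autoregressively: the model repeatedly predicts a probability distribution over the next token and samples from it. The sketch below shows that loop; note that `next_token_logits` is a hypothetical stand-in that returns random scores over a tiny vocabulary, whereas a real GPT would run its full transformer network at that step:

```python
import numpy as np

# Tiny illustrative vocabulary; a real model uses tens of thousands of tokens
VOCAB = ["the", "AI", "act", "regulates", "models", "."]
rng = np.random.default_rng(42)

def next_token_logits(prefix):
    # Hypothetical stand-in: a real GPT would run the transformer
    # over `prefix` here and return one score per vocabulary token.
    return rng.normal(size=len(VOCAB))

def generate(prompt, max_new_tokens=5):
    """Autoregressive generation: append one sampled token at a time."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax
        tokens.append(VOCAB[rng.choice(len(VOCAB), p=probs)])
    return tokens

result = generate(["the"])
print(result)  # prompt token followed by 5 sampled tokens
```

The loop structure is the same regardless of model size; what pre-training on large text corpora changes is how well the predicted distributions match real language.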

During the last two and a half years, several competitors have been released.2 One of these is Wu Dao 2.0, which has 1.75 trillion parameters and has shown extremely promising results.3 The attention mechanism has also been applied to other modalities, such as vision. Attention-based image-generation models, such as DALL·E 2 or Stable Diffusion, have likewise shown promise and have advanced the state of the art.


Ever since the release of GPT-3, rumors have been swirling about the release of its successor, GPT-4. In the meantime, ChatGPT was released in late 2022. It is a model based on the improved GPT-3.5 and has also res
