06 April 2023

ChatGPT is superintelligent, understands human language and ... fabricates facts.

ChatGPT lies about Junk DNA 25 March 2023
 

Recently I read a blog post: ChatGPT lies about Junk DNA. Well, some answers may be wrong, but it is really nonsense to say 'ChatGPT lies'! One needs at least a basic understanding of recent developments in the field of AI to form a well-founded opinion on the matter. I consider the errors undesirable side effects of software that is itself nearly indistinguishable from human intelligence.

Lying? The dictionary definition of lying is "to make a false statement with the intention to deceive". An AI computer program can make factual errors, but it cannot have an intention at all, let alone an intention to deceive. Nor can it have the intention to spread misinformation [7]. Only humans can lie. Computer software and dogs cannot lie; claiming otherwise is anthropomorphism. Of course, lying and spreading misinformation are bad things. But in a webinar about ChatGPT, a scientist involved in AI language research said that ChatGPT is not designed to produce truths. So, if it is not designed to deliver the truth [3], it simply doesn't make sense to expect it to produce true statements all the time. And it is absolutely wrong to conclude that ChatGPT is a worthless piece of software.

 

Picture generated by 'This Person Does Not Exist'.

The errors ChatGPT makes can be compared with the above image, generated by AI software. That software generates beautiful images 99% of the time; it is not worthless because of an occasional error. To err is human! Another website (https://loremfaces.com/) shows perfect human faces, also generated by AI, with even fewer errors.

ChatGPT is a milestone in the AI field of natural language processing and production. It can read and interpret human questions, and it can produce grammatically correct sentences that are real answers, not just arbitrary text with the words from the question included. And that's no small feat! Language, the ability to produce grammatically correct sentences, is considered a uniquely human capability. What ChatGPT does is a far greater accomplishment than anything search engines have been able to do so far.

One can even claim that ChatGPT passes the Turing test [1]. The Turing test is a test of a machine's ability to hold a natural-language conversation and exhibit intelligent behaviour indistinguishable from a human. That is precisely what ChatGPT is doing. Truth is not part of the definition of the Turing test, but intelligence is. Recently, Scientific American published an article: I Gave ChatGPT an IQ Test. Here’s What I Discovered. What did the author find? ChatGPT's Verbal IQ was 155, superior to that of 99.9 percent of the test takers! Please note this is the same test used for humans! Read the article for the details [2].

Another revolution in the AI field: Artificial intelligence powers protein-folding predictions (Nature). Deep-learning algorithms such as AlphaFold2 and RoseTTAFold can now predict a protein’s 3D shape from its linear sequence — a huge boon to structural biologists. "Zhang considers AlphaFold2 to be a striking demonstration of the power of deep learning."


So, dismissing ChatGPT because it makes errors and fabricates data is throwing the baby out with the bathwater; that is itself a human error. Just read the Wikipedia List of scientific misconduct incidents, where you can learn about 75 scientists (!) who have falsified or fabricated data, intentionally [4],[6]. Investigations suggest that, in some fields, at least one-quarter of clinical trials might be problematic or even entirely made up [5]. Do we reject science as a whole because some scientists fabricate data? Pay as much attention to scientific integrity as to ChatGPT. Fortunately, ChatGPT is not allowed to publish in scientific journals. If you don't like ChatGPT, don't use it.

 

Stephen Wolfram very positive about ChatGPT

9-10 April 2023

"I think, is that language is at a fundamental level somehow simpler than it seems. ...  The basic concept of ChatGPT is at some level rather simple. Start from a huge sample of human-created text from the web, books, etc. Then train a neural net to generate text that’s “like this”. And in particular, make it able to start from a “prompt” and then continue with text that’s “like what it’s been trained with”. ...

But the remarkable—and unexpected—thing is that this process can produce text that’s successfully “like” what’s out there on the web, in books, etc.  ...

What does it take to produce “meaningful human language”? In the past, we might have assumed it could be nothing short of a human brain. But now we know it can be done quite respectably by the neural net of ChatGPT. ...

I think we have to view this as a—potentially surprising—scientific discovery: that somehow in a neural net like ChatGPT’s it’s possible to capture the essence of what human brains manage to do in generating language.  ...

But it’s amazing how human-like the results are.... this suggests something that’s at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought. ChatGPT has implicitly discovered it. ...

What ChatGPT does in generating text is very impressive."

From: Stephen Wolfram (2023) What Is ChatGPT Doing … and Why Does It Work? February 14, 2023 (an in-depth analysis of ChatGPT).

Wolfram doesn't discuss the fact that ChatGPT also produces grammatically correct false statements ('hallucinations'), but I think this follows easily from his analysis: a model trained only to continue text plausibly has no mechanism for checking facts, so a fluent falsehood comes out of exactly the same machinery as a fluent truth. So ChatGPT has the same difficulty distinguishing true from false statements as humans do.
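To make Wolfram's point concrete, here is a minimal sketch of pure next-token generation, using simple word-pair counts instead of ChatGPT's enormous neural net. The tiny corpus and all names below are illustrative assumptions of mine, not anything taken from ChatGPT itself:

```python
import random

# Toy "language model": count which word follows which in a tiny corpus.
# ChatGPT estimates P(next token | preceding text) with a huge neural net,
# but the objective is the same kind of thing: continue the text plausibly.
corpus = "junk dna is mostly noncoding . junk dna is hotly debated .".split()

follow_counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts.setdefault(prev, {})
    follow_counts[prev][nxt] = follow_counts[prev].get(nxt, 0) + 1

def sample_next(word):
    """Draw the next word in proportion to how often it followed `word`."""
    options = follow_counts[word]
    r = random.uniform(0, sum(options.values()))
    for candidate, count in options.items():
        r -= count
        if r <= 0:
            break
    return candidate

# Generate a sentence from a prompt. Note what is absent: at no point does
# the program ask whether the emitted sentence is TRUE; it only asks how
# likely each continuation is, given the training text.
word = "junk"
sentence = [word]
while word != ".":
    word = sample_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Truth never enters the computation, which is why grammatically perfect fabrications fall out of the same process as grammatically perfect facts.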

(Thank you Bert Morrien for pointing out Wolfram's article.)

 

 

Further Reading

  • Hong Yang (2023) How I use ChatGPT responsibly in my teaching, Nature, 12 April 2023. Quote: "... because current technology makes it difficult to detect work written by the model" (the 'model' here is ChatGPT). That means it passes the Turing test.
  • Gary Marcus (2022) AI Platforms like ChatGPT Are Easy to Use but Also Potentially Dangerous, Scientific American, December 19, 2022
  • Large language model is a very useful Wikipedia article which contains a list of large language models (ChatGPT is one of them). Particularly interesting is the section Emergent abilities of LLMs. [added 20 April 2023]
  • How generative AI is building better antibodies, Nature, 4 May 2023. Language models similar to those behind ChatGPT have been used to improve antibody therapies against COVID-19, Ebola and other viruses. [added 11 May 2023]
  • Elsevier’s chatbot, called Scopus AI: users ask natural-language questions; in response, the bot uses a version of the LLM GPT-3.5 to return a fluent summary paragraph about a research topic, together with cited references. However, LLMs can make up non-existent references. Scopus AI is therefore constrained: it has been prompted to generate its answer only from five or ten retrieved research abstracts (Nature); a sketch of this grounding pattern follows below. [added 10 Aug 2023]
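Here is a minimal sketch of that grounding pattern, assuming a generic LLM API. The function names, the `ask_llm` placeholder and the prompt wording are my own illustrations, not Elsevier's actual implementation:

```python
# Sketch of retrieval-grounded prompting: the model may only summarise and
# cite the handful of abstracts it is handed, which leaves it little room
# to invent references. All names here are hypothetical.

def build_grounded_prompt(question: str, abstracts: list[str]) -> str:
    """Wrap 5-10 retrieved abstracts and the question in a prompt that
    restricts the model to those sources."""
    numbered = "\n".join(f"[{i + 1}] {a}" for i, a in enumerate(abstracts))
    return (
        "Answer the question using ONLY the numbered abstracts below. "
        "Cite them as [1], [2], ... and say so if they are insufficient.\n\n"
        f"Abstracts:\n{numbered}\n\nQuestion: {question}"
    )

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. to GPT-3.5)."""
    raise NotImplementedError

# Usage: retrieve abstracts for the topic first, then generate.
prompt = build_grounded_prompt(
    "Is most noncoding DNA functional?",
    ["Abstract of paper A ...", "Abstract of paper B ..."],
)
print(prompt)  # answer = ask_llm(prompt)
```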

 


Notes

  1. The test was designed by Alan Turing in 1950. Turing was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist.
  2. For background information, read the Wikipedia articles Superintelligence, ChatGPT (includes reception and limitations) and Turing test. Read also the blog post Introducing ChatGPT from the developers of ChatGPT, section Limitations: "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth". Precisely! If the author of the blog "ChatGPT lies..." could point out where 'the source of truth' can be found, the problem of false statements would be solved once and for all. [10 Apr 2023]
  3. Gary Marcus (2022) AI Platforms like ChatGPT Are Easy to Use but Also Potentially Dangerous, Scientific American, December 19, 2022: "In technical terms, they are models of sequences of words (that is, how people use language), not models of how the world works. They are often correct because language often mirrors the world, but at the same time these systems do not actually reason about the world and how it works."
  4. The latest fraud: University investigation found prominent spider biologist fabricated, falsified data, Science, 11 May 2023.
  5. Medicine is plagued by untrustworthy clinical trials. How many studies are faked or flawed? Nature, 18 Jul 2023.
  6. ‘A very disturbing picture’: another retraction imminent for controversial physicist, Nature, 25 Jul 2023. Ranga Dias will have a second paper revoked. A journal’s investigation found apparent data fabrication.
  7. Nature about ChatGPT: Role play with large language models, 8 Nov 2023. "it makes little sense to speak of an agent’s beliefs or intentions in a literal sense. So it cannot assert a falsehood in good faith, nor can it deliberately deceive the user. Neither of these concepts is directly applicable." [added 16 Nov 2023]
     

1 comment:

  1. Hi Bert, thanks for your reference to Stephen Wolfram's article. If anyone understands the subject and can explain ChatGPT well, it is him. Article: 'What Is ChatGPT Doing … and Why Does It Work?'. He is very impressed by ChatGPT's performance, given that at bottom it uses a very simple statistical trick to generate sentences. That at once explains why it can produce grammatically correct sentences AND grammatically correct fabrications.
    Read the conclusion, 'So … What Is ChatGPT Doing, and Why Does It Work?', as a summary of his verdict. Thanks again for the reference!

