GenAI: The Primary Purpose is to Make Stuff Up

GenAI is short for generative artificial intelligence. GenAI grabbed the public’s attention in the fall of 2022 with products such as ChatGPT, DALL-E 2, and Midjourney. An important concern about GenAI is that it hallucinates: it will generate clearly false answers to your prompts. Worse, a GenAI presents a hallucination the very same way that it presents any other answer. Can you tell the difference?

The primary goal of GenAI is to make stuff up. There, I’ve said it. You can see this with ChatGPT when you ask it for another answer to your prompt and it generates something different. Ask it again and it will generate yet another answer even though the prompt hasn’t changed, and again and again. ChatGPT is clearly making up, generating, a different answer each time. It’s even more obvious with DALL-E, which produces four different images for a given prompt. You can keep reprompting it with the same text and get different images each time. Once again, the GenAI is clearly making up the answers it’s giving you.
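That variability is no accident: generative models assemble an answer by repeatedly sampling the next piece of output (a token of text, or a patch of an image) from a probability distribution, rather than always taking the single most likely choice. The sketch below is only a toy illustration of that sampling step, with made-up token scores and a sample_next_token helper I invented for the example; it is not how ChatGPT or DALL-E is actually implemented.

```python
import math
import random

# Toy next-token scores, invented purely for illustration; a real model
# scores tens of thousands of tokens, but the sampling idea is the same.
scores = {"Toronto": 4.0, "Ottawa": 2.5, "Montreal": 2.0, "Calgary": 1.0}

def sample_next_token(scores, temperature=0.8):
    """Pick one token at random, weighted by a softmax of the scores."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# The scores never change, yet repeated calls can return different tokens,
# just as reprompting ChatGPT with the same text can yield different answers.
for _ in range(5):
    print(sample_next_token(scores))
```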

Some people will tell you that GenAI is “hallucinating” 100% of the time, and on the surface that’s correct. But it’s not a good way for people to understand what’s really going on. When I give ChatGPT a reasonable prompt, it will generate an answer to that prompt. I will read the answer and determine whether or not it is acceptable for my needs. In other words, the answer is either (sufficiently) right or it’s wrong. However, when I give ChatGPT a nonsensical prompt such as “Describe how the CN Tower was built in San Francisco in the 1880s,” it will generate a story along those lines. That story will be completely false, a hallucination, because that didn’t happen (the CN Tower was built in Toronto in the 1970s).

In short, when you give a GenAI a prompt, it will generate the best answer that it can based on that prompt. If that prompt is reasonable then it will likely generate a reasonable answer, but you will need to judge for yourself how usable that answer is for your purposes. Context counts. Rather than simply claiming that the AI is hallucinating, I prefer to think of the AI as generating answers of varying quality.

You may find some of my other blog postings about artificial intelligence to be of interest. Enjoy!

2 Comments

  • Curtis Hibbs
    Posted December 11, 2023 12:08 pm

    Great post Scott, I think many people do not realize this.

    Your readers might also be interested in this video by Andrej Karpathy (former head of Tesla full self-driving, currently working for OpenAI) where he goes into some of the technical details behind how generative AI works, and some of the capabilities that are currently being researched to support less hallucination and things like fact checking.

  • Curtis Hibbs
    Posted December 11, 2023 12:09 pm

    I forgot to post the video link: https://youtu.be/zjkBMFhNj_g?si=LEgoKMmAwE7NHXpc
