Not so Intelligent

You have probably heard about the growth of artificial intelligence (AI) technology. It seems like every day or so we hear about some new AI application in medicine, machine operations, piloting a vehicle, or some other area. Sometimes we even hear about the “rise of the machines” and the threats that machines made smart with AI might pose to humans.

Yesterday I heard from one of my scientist colleagues at LLNL. He was fooling around with ChatGPT and asked it a technical question to see how well it might respond. ChatGPT is a relatively new AI-based computer program designed to interact with humans in a conversation. They call it a chatbot because it is an automated system (i.e., a robot) that “talks” with people.

My friend asked ChatGPT if anyone has used a certain specialized scientific technique and, if so, to give references. It responded with a politely worded yes answer, and gave him three bibliographic references to peer-reviewed papers published in scientific journals. It all looked very good, except all three references were bogus! The journal names, page numbers, dates, and paper titles all looked plausible, but while the journals were real, the papers were fake. When he looked up the pages cited in the journals, the content on those pages had nothing to do with the topic of his question.
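
The check my friend did by hand can also be automated against a bibliographic database. As a rough illustration (not what he actually did), here is a short Python sketch that asks the public Crossref REST API whether a cited paper title really exists; the function name and the sample citation are made up for the example.

```python
# Rough sketch: check a citation's title against the Crossref database.
# The api.crossref.org endpoint is real; the sample citation below is invented.
import json
import urllib.parse
import urllib.request

def looks_real(title: str, max_results: int = 3) -> bool:
    """Return True if Crossref finds a paper whose title matches exactly."""
    query = urllib.parse.urlencode(
        {"query.bibliographic": title, "rows": max_results}
    )
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url) as resp:
        items = json.load(resp)["message"]["items"]
    # Compare the candidate titles Crossref returns against the cited title.
    cited = title.lower()
    return any(cited == t.lower() for item in items for t in item.get("title", []))

# A plausible-sounding but fabricated citation should come back False.
print(looks_real("Sub-picosecond laser ablation of tantalum hydride thin films"))
```

An exact-title match is deliberately strict; a real verification tool would fuzzy-match titles and cross-check authors, journal, and page numbers, which is exactly the metadata ChatGPT fabricated here.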

He tried again with a more specific question, this one involving research in which (as far as anyone knows) he was the first published author. This time ChatGPT came back with a more extensive answer along with a supporting reference, but both were bogus. They contained relevant keywords, yet a simple check showed them to be factually wrong.

Moral of this particular story: don’t count on ChatGPT for science. Broader lesson: maintain a healthy skepticism about AI applications. And a thought question: If ChatGPT gives plausible but bogus answers to a specific question, how far would you trust Siri or Alexa?

2 thoughts on “Not so Intelligent”

  1. Your friend’s experience brings to mind a question that’s new to me. How can an AI program be made to understand and value the concept of truth? When we are apparently unable to teach that concept to humans, how can we teach it to machines?
