Chatbots / AI / LLM

“Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.

“When a chatbot gets something wrong, it’s not because it made an error. It’s because on that roll of the dice, it happened to string together a group of words that, when read by a human, represents something false. But it was working entirely as designed. It was supposed to make a sentence & it did.

“I expect that consumer-facing AI programs will continue to improve and they may become much more useful tools for everyday life in the future.

“But I think it was a disastrous mistake that today’s models were taught to be convincing before they were taught to be right.

“It may be true (I don’t know enough neuroscience to say) that LLMs & human brains use similar techniques to make connections between concepts & learn. But most humans don’t speak confidently & coherently about something unless they actually know it. The ones who do… well, we have words for them.

“If a human told you things that were correct 80% of the time but claimed, flat out, with absolute confidence, that they were correct 100% of the time, you would dislike them & never trust a word they say. All I’m really suggesting is for people to treat chatbots with that same distrust & antagonism.

“Maybe the problem is that I just really really hate being lied to.”

— Dr. Katie Mack

[Photo: the author and the author's best friend]
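
For readers who want the mechanism in the quote made concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the prompt, the candidate words, the frequency counts, and the `sample_next_word` helper are assumptions, not any real model's code. Real LLMs use neural networks over subword tokens rather than a lookup table, but the final step (a weighted random draw over likely continuations, with no truth check anywhere) is the behavior being described.

```python
import random

# Toy "next-word model": hypothetical frequency counts for words that might
# follow the prompt "The capital of Australia is". These numbers are made up
# for illustration; they stand in for patterns absorbed from training text,
# where wrong answers also appear in print.
continuations = {
    "Canberra": 60,    # correct, and frequently written down
    "Sydney": 35,      # a common misconception, so also frequent in text
    "Melbourne": 5,    # rarer, but still present in the data
}

def sample_next_word(counts: dict) -> str:
    """Sample one continuation in proportion to its observed frequency.

    This is the entire 'decision': no fact lookup, no truth check,
    just a weighted roll of the dice over word patterns.
    """
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    # Under these invented counts, roughly 40% of rolls produce a
    # confident-sounding falsehood, yet every single roll is the
    # sampler working exactly as designed.
    for _ in range(5):
        print(prompt, sample_next_word(continuations))
```

Run it a few times and some outputs are flatly wrong, delivered in the same fluent form as the correct ones; nothing in the program distinguishes a true sentence from a false one, which is the 80%-right, 100%-confident pattern the quote warns about.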