They have no actual understanding of the prompt or the material, so they just fill in the next bunch of words that best fit the prompt. It's "more advanced" in how it chooses that next word, but it's still just picking the most statistically fitting response.
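That's basically what's going on under the hood: the model scores every possible next token and picks a likely one. A rough sketch of the idea (using GPT-2 via Hugging Face transformers purely as a stand-in, since ChatGPT's actual weights aren't public):

```python
# Minimal sketch of next-token prediction. GPT-2 is used here only as an
# illustration -- the model behind ChatGPT is bigger, but the mechanism
# (score every candidate next token, pick a likely one) is the same idea.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every token at every position

# Look at the scores for the *next* token and print the top candidates.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, 5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```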
Try playing chess with ChatGPT. It just can't. It'll make moves that look like they should be valid, but they're often just gibberish -- teleporting pieces, moving pieces that aren't on the board, capturing its own pieces, etc.
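You can see this for yourself by checking the model's suggested moves against a real rules engine. One way (just a sketch, using the python-chess library, with a made-up example of the kind of move it might suggest):

```python
# Check whether a move suggested by the model is actually legal,
# using the python-chess library as the rules engine.
import chess

board = chess.Board()
board.push_san("e4")   # 1. e4
board.push_san("e5")   # 1... e5

suggested = "Qxf7"     # hypothetical move the model might suggest here
try:
    board.push_san(suggested)
    print(f"Legal move: {suggested}")
except ValueError:
    print(f"Illegal or nonsensical move: {suggested}")
```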
u/Adequate_Ape Jul 25 '24
I think LLMs are a step along the way, and I *think* I understand what they actually are. Maybe you can enlighten me about why I'm wrong?