“I always struggle a bit with I'm asked about the ‘hallucination problem’ in LLMs. Because, in some sense, hallucination is all LLMs do. The prompts start the dream, and based on the…”

The Slovakia-born expert on deep neural nets and natural language processing hints that we, the users, are some sort of directors: we are using our prompts to initiate and steer this dreaming process to a hopefully useful result.

“It's only when the dreams go into deemed factually incorrect territory that we label it a ‘hallucination’,” continues Karpathy, adding that it only looks like a bug: “Hallucination is not a bug, it is LLM's greatest feature.”

At the same time, Karpathy is not hiding from the fact that AI chatbots indeed have issues: “I realize that what people ‘actually’ mean is they don't want an LLM Assistant (a product like ChatGPT etc.) to hallucinate.” What is important here, though, is correctly defining the problem.