• 0 Posts
  • 763 Comments
Joined 1 year ago
Cake day: July 16th, 2023

  • And AFD won’t “send migrants back” because that would remove their favorite boogeyman. Expect symbolic tinkering and not much more; otherwise they’d have to start all over again with another minority to blame. It won’t improve these people’s economic situation. EU exit and austerity are back on the menu with AFD.


  • It does remind me of that recent Joe Scott video about the split brain. One part of the brain would do something, and the other part, which didn’t get the information because of the split, would just make up a semi-plausible explanation. It’s as if at least one part of the brain works, partially, like an LLM.

    It’s more like our brain is a corporation, with a spokesperson, a president and vice president, and a number of departments that work semi-independently. Having an LLM is like having only the spokesperson, without the rest of the workforce in the building that would make up an AGI.


  • they have to provide an answer

    Indeed. That’s the G in ChatGPT: it stands for generative. The model looks at all the previous words and “predicts” the most likely next word (see the toy sketch at the end of this comment). You could see this very clearly with GPT-2. It just generated good-looking nonsense based on a few words.

    Then you have the P in ChatGPT: pre-trained. If it happens to have been trained on data about what you’re asking, that data shapes the answer. If it’s not trained on that data, it just uses whatever is most likely to appear next and generates something that looks good enough for the prompt. It appears to hallucinate, lie, make stuff up.

    It’s just how the thing works. There is serious research into fixing this, and a recent paper claimed to have a solution so the LLM knows when it doesn’t know.
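    Not from the comment itself, just a toy sketch of that “predict the most likely next word” loop. It only looks at the previous word and uses raw counts from a made-up corpus, whereas a real GPT uses a neural network over the whole context, but the generation loop is the same idea: it always produces something, even for a prompt it was never trained on.

    ```python
    from collections import Counter, defaultdict

    # Tiny "training corpus": the model only knows these words.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count which word follows which (a bigram model, purely illustrative).
    next_word_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_word_counts[current][nxt] += 1

    def generate(prompt_word, length=8):
        out = [prompt_word]
        for _ in range(length):
            counts = next_word_counts.get(out[-1])
            if not counts:
                # Never saw this word in training: it still has to answer,
                # so it falls back to overall word frequency -- this is the
                # "generates something that looks good enough" part.
                counts = Counter(corpus)
            out.append(counts.most_common(1)[0][0])
        return " ".join(out)

    print(generate("the"))   # fluent-looking text stitched from the training data
    print(generate("moon"))  # unknown word: the output still looks confident
    ```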