LLMs take this idea of Bullshit even further. The model has no concept of truth or facts; it can only pick the most likely word to follow the sequence it has.
A perfect illustration of this for me personally came early in the LLM hype cycle (2023, maybe?), when I was playing around with an autocomplete example. The model completed something like “Paris is the capital of France” with a high degree of confidence, which seems impressive until you mess with it: I changed the wording slightly to a different city, and it completed the sentence with the same high degree of confidence.
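You can reproduce the effect with any causal language model by inspecting its next-token probabilities directly. Here is a minimal sketch, assuming GPT-2 via Hugging Face transformers; the model name and prompts are illustrative, not necessarily the exact ones from that experiment:

```python
# Minimal sketch: compare next-token confidence for a true premise
# and a false one. Model ("gpt2") and prompts are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most likely next tokens and their probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability distribution over the token that would follow the prompt.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(idx), p.item())
            for idx, p in zip(top.indices, top.values)]

# A true premise and a false one get completed with similar confidence:
print(top_next_tokens("Paris is the capital of"))  # "France" near the top
print(top_next_tokens("Lyon is the capital of"))   # often still "France", just as confident
```

The distribution is driven by how often word sequences co-occur in the training data, not by whether the resulting sentence is true, so a confidently wrong completion looks exactly like a confidently right one.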