Bayesian thinking is a form of probabilistic reasoning that dates back to the 18th century. The idea behind Bayesian decision-making is to update your beliefs about the world as new evidence arrives.
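To make the belief-updating idea concrete, here is a minimal sketch of Bayes' rule for a single hypothesis. The function name and the diagnostic-test numbers are illustrative, not from the text.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / evidence

# Illustrative example: a test for a condition with 1% prevalence,
# 90% sensitivity, and a 5% false-positive rate.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.05)
print(posterior)  # the updated belief after one positive test
```

Even with a positive result, the posterior stays modest because the prior was low: the new evidence shifts the belief rather than replacing it.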
Bayesian reasoning, which relies on explicitly specified probabilistic models, can look outmoded in the era of LLMs: these models learn complex patterns and relationships from vast amounts of data without needing hand-crafted prior probabilities. LLMs are adaptable and can pick up nuance directly from raw data, which makes Bayesian methods less relevant for some predictive tasks. However, Bayesian reasoning still holds value where clear interpretability matters and where domain-specific knowledge is critical.
Typically, large language models need large datasets to be accurate. This is where Bayesian methods are better suited: small-data settings, where an explicit prior can compensate for limited observations. At least that's my understanding.
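As a small-data illustration, here is a sketch of a Beta-Binomial conjugate update, a standard Bayesian technique that gives a sensible estimate from just a handful of observations (the specific counts below are made up for the example):

```python
# Beta-Binomial conjugate update: the Beta(alpha, beta) prior over a
# coin's heads probability updates by simply adding observed counts.
def beta_update(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

# Start from a uniform prior Beta(1, 1); observe 7 heads and 3 tails.
a, b = beta_update(1, 1, successes=7, failures=3)
posterior_mean = a / (a + b)  # point estimate of the heads probability
print(a, b, posterior_mean)
```

With ten coin flips, an LLM has nothing to work with, but the conjugate update already yields a calibrated estimate, and the prior's influence shrinks as more data arrives.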