[Haskell-cafe] Formal Verification & Modern AI Safety

Ben Franksen ben.franksen at online.de
Thu Jan 9 19:14:08 UTC 2025


On 07.01.25 11:02, Mostafa Touny via Haskell-Cafe wrote:
> Hello Haskellers,
> I hope you are doing well.
> 
> Amazon has started to adopt automated theorem provers to mitigate hallucination in modern generative AI.
> 
> See "Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks": https://aws.amazon.com/blogs/aws/prevent-factual-errors-from-llm-hallucinations-with-mathematically-sound-automated-reasoning-checks-preview/
> 
> This community has ties to formal proofs, type-driven programming, and logic programming.
> 
> Do you see any special potential of Haskell for recent AI safety?

According to the (IMO very insightful) article "ChatGPT is bullshit"
(https://link.springer.com/article/10.1007/s10676-024-09775-5),
"hallucination" is a misleading term for the phenomenon. It is much
better understood as "bullshit" in the precise sense of uttering
statements with complete disregard for factual truth. This is a
fundamental design property of LLMs, and the authors convincingly argue
that attempts to mitigate the problem by supplying additional external
"truth oracles" are unlikely to work.

One of the most illustrative examples the authors cite is that of an LLM
"inventing" non-existent scientific papers to corroborate its claims.
This is not at all surprising, given how these models work.

Sorry, not what you were asking, but something to perhaps consider.

Cheers
Ben


