Discussion about this post

Samuel

I’ve been asking most of the LLM tools I use (Perplexity, foundation LLMs, etc.) to provide a confidence score out of 100 when responding to my questions, as a way to gauge the accuracy of their answers. Interestingly, I’ve never seen a score lower than 90/100… Either they’re calibrated to be overconfident, or my questions are just too mundane. :-)
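A minimal sketch of the kind of experiment Samuel describes, in Python. Everything here is illustrative: `ask_model` is a hypothetical stub standing in for whatever provider API you actually call, and the `Confidence: N/100` reply format is just one convention that makes the score easy to parse. The observation itself matches published findings that verbalized confidence from LLMs skews high.

```python
import re
import statistics

def ask_model(prompt: str) -> str:
    """Hypothetical stub; replace with a call to your LLM provider."""
    # Canned reply so the sketch runs end to end.
    return "Paris is the capital of France.\nConfidence: 95/100"

# Suffix appended to every question so the score is easy to parse.
CONFIDENCE_SUFFIX = (
    "\n\nAfter your answer, end with a line of the form "
    "'Confidence: N/100', where N is your confidence in the answer."
)

SCORE_RE = re.compile(r"Confidence:\s*(\d{1,3})\s*/\s*100")

def answer_with_confidence(question: str) -> tuple[str, int | None]:
    """Ask one question and parse the self-reported score, if present."""
    reply = ask_model(question + CONFIDENCE_SUFFIX)
    match = SCORE_RE.search(reply)
    return reply, int(match.group(1)) if match else None

def survey(questions: list[str]) -> None:
    """Run many questions and check whether anything lands below 90."""
    scores = [s for _, s in map(answer_with_confidence, questions)
              if s is not None]
    if scores:
        print(f"n={len(scores)}  min={min(scores)}  "
              f"mean={statistics.mean(scores):.1f}")

if __name__ == "__main__":
    survey(["What is the capital of France?"])
```

Note that a self-reported score only tells you about the model's rhetoric, not its accuracy: to measure calibration you would compare each score against graded correctness on questions with known answers, and check whether the 90/100 answers are actually right about 90% of the time.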
