Claude 3 is the Macintosh of AI
Imbuing generative AI tools with character may be important for user adoption, but how do we do it well?
I marginally prefer using Claude 3 to ChatGPT. I've found it has a more "approachable" interface. Think of it as Apple's Macintosh vs Microsoft's DOS. Both platforms could handle similar tasks. But everyday users preferred Apple's offering because of its graphical user interface (GUI), as computer columnist Bob Ryan wrote in the 1980s:
It is a machine which will appeal to the masses of people who have neither the time nor the inclination to embark upon the long learning process required to master the intricacies of the present generation of personal computers.
That approachable, user-centric attitude persists today within Apple, through its iPhone and iPad releases, which is why it's arguably still the arbiter of user-friendliness when it comes to tech. (Speaking of which, if you want to read my thoughts on Apple's AI play, read EV#478.)
From GUI to GenAI
Much as in the personal computing war of the 1980s, Claude and ChatGPT are virtually indistinguishable when it comes to performance: OpenAI's GPT-4o model has an Elo rating of 1287 on Hugging Face's Chatbot Arena leaderboard, while Anthropic's Claude 3 Opus scores 1249. In practice, this means GPT-4o would best Claude in a head-to-head matchup about 55% of the time, not a significant difference for ordinary users.
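For anyone curious where that 55% figure comes from, it falls out of the standard Elo expected-score formula. Here is a minimal sketch in Python; the function and variable names are mine, and the ratings are the leaderboard figures cited above.

```python
# Standard Elo expected-score formula: the probability that player A
# beats player B, given their two ratings.
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Chatbot Arena ratings cited above.
gpt_4o = 1287
claude_3_opus = 1249

print(f"{elo_win_probability(gpt_4o, claude_3_opus):.0%}")  # ~55%
```

A 38-point gap is genuinely small by this formula's standards: the expected win rate only reaches 75% at a gap of roughly 190 points.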
Just as Microsoft took a distinct approach to interfaces compared to Apple, the leading genAI firms have taken subtly different approaches to how their chatbots talk to us. Claude's greater approachability stems from the "character training" Anthropic built into its development: the model's responses are skewed towards traits such as curiosity, open-mindedness, and thoughtfulness.
Those subtle traits are why, for me at least, Claude is the preferred option.
Giving a chatbot a personality, even one as innocuous as Claude's, is a risky endeavour for many reasons, not least the danger of anthropomorphizing these tools and starting to ascribe moral patienthood to them.1 However, the thinking behind Anthropic's decision runs much deeper and merits more consideration than shallow criticism. I strongly recommend listening to this discussion with philosopher Amanda Askell, a researcher at Anthropic who worked on training Claude.
For the busy, here are a few takeaways from the discussion with Askell:
The character traits of an AI model have wide-ranging effects on how it acts and interacts with the world, which in turn shapes its responses.