🤔 Claude 3 is the Macintosh of AI
Imbuing generative AI tools with character may be important for user adoption, but how do we do it well?
I marginally prefer using Claude 3 to ChatGPT. I’ve found it has a more ‘approachable’ interface. Think of it as Apple’s Macintosh vs Microsoft’s DOS. Both platforms could handle similar tasks, but everyday users preferred Apple’s offering because of its graphical user interface (GUI), as computer columnist Bob Ryan wrote in the 1980s:
It is a machine which will appeal to the masses of people who have neither the time nor the inclination to embark upon the long learning process required to master the intricacies of the present generation of personal computers.
That approachable, user-centric attitude persists today within Apple, through its iPhone and iPad releases – which is why it’s arguably still the arbiter of user-friendliness when it comes to tech. (Speaking of which, if you want to read my thoughts on Apple’s AI play, read EV#478.)
From GUI to GenAI
Much as in the personal computing war of the 1980s, Claude and ChatGPT are virtually neck and neck on performance: OpenAI’s GPT-4o model holds an Elo rating of 1287 on the LMSYS Chatbot Arena leaderboard (hosted on Hugging Face), while Anthropic’s Claude 3 Opus sits at 1249. In practice, that 38-point gap means GPT-4o would best Claude in a head-to-head competition about 55% of the time – not a significant difference for ordinary users.
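That 55% figure falls straight out of the standard Elo expected-score formula. Here’s a minimal sanity check in Python, assuming Chatbot Arena’s ratings behave like classic Elo ratings:

```python
# Standard Elo expected-score formula: the probability that a player
# rated r_a beats a player rated r_b.
def elo_win_probability(r_a: float, r_b: float) -> float:
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# GPT-4o (1287) vs Claude 3 Opus (1249):
print(round(elo_win_probability(1287, 1249), 3))  # ~0.554, i.e. roughly 55%
```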
Just as Microsoft took a distinct approach to interfaces compared to Apple, the leading genAI firms have taken subtly different approaches to how their chatbots talk to us. Claude’s greater approachability stems from the “character training” Anthropic folded into the model’s fine-tuning, which skews its responses towards traits such as curiosity, open-mindedness, and thoughtfulness.
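Anthropic has described character training as a variant of its Constitutional AI approach: the model generates candidate responses to prompts, ranks its own outputs by how well they embody a target trait, and that preference data flows back into fine-tuning. Below is a rough sketch of that loop in Python – every function here is a hypothetical placeholder standing in for a model call, not Anthropic’s actual pipeline:

```python
import random

# Traits expressed as first-person statements, in the spirit of the
# examples Anthropic has published about Claude's character training.
TRAITS = [
    "I like to try to see things from many different perspectives.",
    "I approach questions with curiosity rather than false certainty.",
]

def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    """Placeholder: a real pipeline samples n responses from the model."""
    return [f"candidate {i} for {prompt!r}" for i in range(n)]

def rank_by_trait(candidates: list[str], trait: str) -> list[str]:
    """Placeholder: in practice the model ranks its own outputs by how
    well they embody the trait; here we just shuffle."""
    return random.sample(candidates, k=len(candidates))

def build_preference_pairs(prompts: list[str]) -> list[tuple[str, str, str]]:
    pairs = []
    for prompt in prompts:
        trait = random.choice(TRAITS)
        ranked = rank_by_trait(generate_candidates(prompt), trait)
        # Best vs worst candidate becomes a (prompt, chosen, rejected)
        # pair for preference training.
        pairs.append((prompt, ranked[0], ranked[-1]))
    return pairs

print(build_preference_pairs(["What do you make of astrology?"]))
```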
For me, at least, Claude wins out – and it comes down to those subtle traits.
Giving a chatbot a personality, even one as innocuous as Claude’s, is a risky endeavour for many reasons, not least the danger of anthropomorphizing these tools and starting to ascribe moral patienthood to them.1 However, the reasoning behind Anthropic’s decision runs much deeper and merits more than shallow criticism. I strongly recommend listening to this discussion with philosopher Amanda Askell, a researcher at Anthropic who worked on Claude’s character training.
For the busy, here are a few takeaways from the discussion with Askell:
The character traits of an AI model have wide-ranging effects on how it acts and interacts with the world, shaping its responses.