John Caswell writes about the role of conversation, saying "conversation is an art form we’re mostly pretty rubbish at". New tools that employ LLMs, such as GPT-3, can only be used well by those who learn to prompt them effectively. Essentially we’re learning to have a conversation with LLMs so that their outputs are usable for the prompter. (As I’m writing this, my feed reader updates to show a follow-up post about prompting by John.)
Last August I wrote about articles by Henrik Olaf Karlsson and Matt Webb that discuss prompting as a skill with newly increasing importance.
Prompting to get a certain type of output instrumentalises a conversation partner, which is fine when using LLMs, but not in conversations with people. In human conversation, prompting serves less to ensure output useful to the prompter and more to help the other express themselves as best they can (usefulness is then a guaranteed side effect, provided you are interested in your conversational counterpart). In human conversation the other is another conscious actor in the same social system (the conversation) as you are.
John takes the need for us to learn to better prompt LLMs and asks whether we’ll also learn how to better prompt conversations with other people. That would be great. In many conversations the listener listens less to the content of what others say and more for the right moment to jump in with what they themselves want to say. Broadcast-driven versus curiosity-driven. You and I, we all do this. Getting consciously better at avoiding that common pattern is a win for all.
In parallel, Donald Clark wrote that the race to innovate services on top of LLMs is on, spurred by OpenAI’s public release of ChatGPT in November. The race is indeed on, although I wonder whether those entering it all have an actual sense of what they’re racing and what they’re racing towards. The generic use of LLMs currently in the eye of public discussion seems to me less promising than gearing them towards specific contexts. Back in August I mentioned Elicit, for instance, which helps you kick off a literature search based on a research question. Other niche applications are sure to be interesting too.
The generic models are definitely capable of hallucinating in ways that reinforce our tendency towards anthropomorphism (which needs little reinforcement as it is). Very, very ELIZA. Even if on occasion it creeps you out, as when Bing’s implementation of GPT declares its love for you and starts suggesting you don’t really love your life partner.
I associated what Karlsson wrote with the way one can interact with one’s personal knowledge management system, much as Luhmann described his note cards as a communication partner. Luhmann talks about the value of being surprised by whatever person or system you’re communicating with. (The anthropomorphism kicks in if, on the basis of that surprise, we then ascribe intention to the system we’re communicating with.)
Being good at prompting is relevant in my work, where change in complex environments is often the focus. Getting better at prompting machines may lift all boats.
I wonder if, as part of the race that Donald Clark mentions, we will see LLMs applied as personal tools. Where I feed a more open LLM like BLOOM my blog archive and my notes, running it as a personal instance (for which the full BLOOM model is too big, I know), and then use it to have conversations with myself. Prompting that system to have exchanges about the things I previously wrote down, in my own words, with results phrased in my own idiom and style. Now that would be very interesting to experiment with. What valuable results and insight progression would it yield? Could I have a salon with myself and my system, and/or with perhaps a few others and their systems? What pathways into the uncanny valley would it open up? For instance, is there a way to radicalise yourself (as social media can) through the feedback loops of association between your various notes, notions and follow-up questions/prompts?
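A minimal sketch of how such a personal setup could start, assuming nothing beyond the Python standard library: select the notes most relevant to a question by simple word overlap, then assemble them into a prompt that you would hand to a locally hosted model. The note titles and contents here are made-up placeholders, and the actual model call is deliberately left out.

```python
import re


def tokenize(text):
    # Crude lowercase word tokens; dependency-free on purpose
    return re.findall(r"[a-z']+", text.lower())


def rank_notes(question, notes, top_n=2):
    """Score each note by word overlap with the question; return best matches."""
    q_words = set(tokenize(question))
    scored = sorted(
        (len(q_words & set(tokenize(body))), title)
        for title, body in notes.items()
    )
    scored.reverse()
    return [title for overlap, title in scored[:top_n] if overlap > 0]


def build_prompt(question, notes, top_n=2):
    """Assemble a prompt grounding the model in the author's own notes."""
    picked = rank_notes(question, notes, top_n)
    context = "\n\n".join(f"Note '{t}':\n{notes[t]}" for t in picked)
    return (
        "Answer in the style and idiom of the notes below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )


# Hypothetical notes standing in for a blog archive
notes = {
    "conversation": "Conversation is a skill; prompting people well helps them express themselves.",
    "gardening": "Tomatoes need staking and regular watering in summer.",
}
prompt = build_prompt("How does prompting help conversation?", notes)
```

The point of the sketch is the shape of the loop, not the retrieval method: real experiments would swap the word-overlap scoring for embeddings, but the pattern of "retrieve my own words, then prompt with them" stays the same.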
An image generated with Stable Diffusion with the prompt “A group of fashionable people having a conversation over coffee in a salon, in the style of an oil on canvas painting”, public domain
@ton This was an interesting read, which prompted me to think better about these weirdly fascinating new developments, thanks.
@moonmehta thank you Jatan!
You ask some excellent questions Ton.
A self-reflecting (but objective) conversational partner that’s primed to challenge but act as a principled* accelerant. As it learns it will feed back principle*-based prompts to us.
*Principles that define our working framework – our mentality.
One of the vital elements in a good/valuable conversation is impartiality/objectivity. That’s a challenge with humans; there’s always some kind of bias and judgement. With AI too, of course.
I think our instincts allow us to navigate it if we are conscious about the conversation and can use a framework of some sort. I’m finding the discovery process interesting, as so much is coming out. As of now I have formed no fixed workflow, and I suspect that, like you, I am experimenting widely.
I do think you are on to something.
Given such an enormous shake-up of literally everything – tools, techniques, open access to these resources, learning models, applications, new language and taxonomy, fresh utilities, and eye-watering industry transformation still to come – perhaps the one thing we will need is a personal life support system (a salon) as creative individuals, purposefully designed to ensure we aren’t subsumed by global-scale echo chambers as we have been in the recent past.
Good to be riffing on this topic.
Thank you for your thoughts, John! Good to see you ‘here’. Salons as support systems resonate. I tend to see (small) groups of people (self-selected by context and a shared need to address) as the relevant unit of agency.