Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in fascinating ways. Have it stage a discussion between two famous innovators and let them duke it out over a fundamental question, run your ideas by an impersonation of Steve Jobs, use it for a first exploration of a domain that is new to you (while being aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, to get the model to produce output closer to what you want. Prompt design is an important element in large scale listening too, to tap into a rich interpreted stream of narrated experiences. I do prompt design to get people to share their experiences, and it would be fascinating to try that approach out on GPT-3.
  • He mentions Matt Webb’s 2020 post about prompting, quoting “it’s down to the human user to interview GPT-3”. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotating it. Luhmann talks about the need for his notes collection to be thematically open-ended, and for the factual status of a piece of information to be a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3’s results can be totally bogus matters much less, as it’s the interaction that leads to new notions within yourself; you’re not using GPT-3’s output as fact or as a finished result.
  • Are all of us who are building notes collections, especially those mimicking Luhmann as if he were the originator of such note taking systems, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using it as a research assistant. In a much more specific way than he describes, this is what the tool Elicit, which I just mentioned here, does, also based on GPT-3. You give Elicit your research question as a prompt and it will come up with relevant papers that may help answer it.

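The prompt design mentioned in the first bullet can be sketched as a small template function that assembles the kind of staged-debate prompt described above. The exact wording, function name, and examples below are my own illustration, not anything from Karlsson's essay or a specific GPT-3 API:

```python
def debate_prompt(person_a: str, person_b: str, topic: str) -> str:
    """Build a prompt asking a language model to stage a fictitious debate.

    The phrasing is an assumption for illustration; in practice you
    iterate on it ('prompt design') until the completions improve.
    """
    return (
        f"Write a fictitious debate between {person_a} and {person_b} "
        f"about {topic}. Have each of them defend their position in turn, "
        f"in their own voice.\n\n{person_a}:"
    )

# The trailing "{person_a}:" nudges the model to start the first turn itself.
prompt = debate_prompt("Steve Jobs", "Alan Kay", "what a computer is for")
print(prompt)
```

The point is less the template itself than that it is a variable you tune: small wording changes (adding the speaker label at the end, asking for turns "in their own voice") can noticeably change what comes back.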
On first reading this is like opening a treasure trove, albeit a boobytrapped one. Need to go through this in much more detail and follow up on sources and associations.

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson

5 reactions on “Communicating with GPT-3”

  1. Ton Zijlstra commented on a great post by Henrik Karlsson about the large language model GPT-3, which caused me to finally try it out.

    My first impression is similar to theirs: “Just wow”, and it took me quite a while until I reached some limits (in particular when asking GPT to “Write a fictitious debate between xxx and yyy about zzz.”)

    One undeniable affordance of the machine’s responses, however, is that they provide inspiration and stimulation for consideration. This is also the big topic of the note-takers and Zettelkasten crowd, for example using the auto-linking of “unlinked references”. And I am noticing that it is probably a matter of taste and preference, or perhaps even of different working styles: if I am permanently working at my limits, there is no room left for organic associations, and then I might be more impressed by an abundance of ideas and artificial creativity?

    Perhaps I am too much of an ungrateful, grumpy killjoy, but the abundance of artificial serendipitous stimulations makes me think of how onerous it will be to sift through them all to find the ones most relevant to me.

    Let’s contrast this sort of inspiration with the sort that comes through blog reactions. Karlsson explicitly compares blog posts to search queries and to the new kind of ‘conversations’ that we can have with GPT-3, and I think it is indeed very apt to see the interaction with these tools as ‘communication’. Luhmann, too, used this metaphor for his Zettelkasten, as Ton points out, and when we use GPT, the back and forth of ‘prompts’ and ‘completions’ is a dialog as well. So there are many parallels to blog comments and trackbacks.
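The back and forth of ‘prompts’ and ‘completions’ described above can be sketched as carrying a running transcript forward on every turn, which is how an interview with a GPT-3-style completion model works. The function names and the toy stand-in model below are my own illustration, not a real API:

```python
def interview(model, questions):
    """Interview a text-completion model: each new question is appended
    to the full transcript so far, and the whole transcript is sent as
    the next prompt.

    `model` is any callable mapping a prompt string to a completion
    string (here a stand-in for a real API call).
    """
    transcript = ""
    for q in questions:
        transcript += f"Interviewer: {q}\nModel:"
        answer = model(transcript)
        transcript += f" {answer}\n"
    return transcript

# A toy stand-in model that just reports which turn it is answering.
def toy_model(prompt: str) -> str:
    return f"(completion #{prompt.count('Interviewer:')})"

log = interview(toy_model, ["What is a Zettelkasten?", "Who invented it?"])
print(log)
```

Because the transcript accumulates, later completions are conditioned on everything said before, which is what makes the exchange feel like a dialog rather than isolated queries.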


    However, blog respondents are not anonymous mass products. They have a background. They care about the topic I write about, and I care about theirs. I subscribe to people whose interests are not always the same as mine but often still close enough to be inspiring. And I trust that what they are writing is relevant. (Formerly, we talked about bloggers as ‘fuzzy categories‘ and about ‘online resonance‘ and about the skill of ‘picking‘ from the abundance.) The grounding in a shared context and a known background makes it easier for me to understand, and benefit from, their reactions, probably in a way similar to how neural ‘priming’ works.

    This is all missing when I process suggestions from a machine that does not know me and that I don’t know (I don’t even know what it actually knows and what it merely confabulates, or at what point its algorithm switches to the live web to look things up). It is impersonal, even if it may impersonate Plato in a debate.

