In the past weeks I’ve been playing at times in OpenAI’s GPT-3 playground. You give GPT-3 a prompt and it provides some generated text. You can use that output as a prompt again, or add something yourself and let GPT-3 continue.
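For the curious, the same prompt-and-continue loop can be done outside the playground through the API. Below is a minimal sketch, assuming the pre-1.0 openai Python package; the model name and parameters are illustrative assumptions, not necessarily what the playground uses.

```python
# Minimal sketch of the playground loop: prompt -> completion -> new prompt.
# Assumes the pre-1.0 openai Python package; model and parameters are
# illustrative choices.
import openai

openai.api_key = "..."  # your OpenAI API key

prompt = "10 Thoughts After Making 100 Annotations in Hypothes.is\n\n"
for _ in range(2):  # generate, then feed the output back in as the prompt
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
    )
    prompt += response.choices[0].text  # output becomes part of the next prompt

print(prompt)
```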

This morning I read Liam Porr’s 2020 posting about a GPT-3-generated blog he kept for a few weeks. (I think keeping it up longer and positioning it like a personal blog would mean certain detection as generated or inauthentic. Unlike when Porr did his experiment, GPT-3 is now available to many, and more people would be suspicious of texts as potentially generated to begin with.)

For fun I prompted the GPT-3 playground with the title and first few lines of my last posting on my recent use of the Hypothes.is social annotation tool.

My post remade by GPT-3

Below is the result GPT-3 gave me. After I pasted the block-quoted part from the original as the prompt, it generated the grey, code-formatted part.

10 Thoughts After Making 100 Annotations in Hypothes.is

I’ve now added over 100 annotations using Hypothes.is (h.), almost all within the last month. This includes a few non-public ones. Two weeks ago I wrote down some early impressions, to which I’m now adding some additional observations.

1) 100 annotations (in a month) don’t seem like a lot to me, if h. is a regular tool in one’s browsing habit.

2) The act of annotation feels good, even empowering. It’s a way of taking control of the web, or at least of one’s web experience.

3) Annotations can be very short, even just a single word. This is both good (e.g. for quickly annotating a lot of text) and bad (e.g. when trying to make a more nuanced point).

4) The annotation interface could be improved. In particular, it would be nice to have an easy way to see all annotations on a page, without having to open each one individually.

5) Annotations can be a great way to start a conversation. I’ve had a few annotations that have generated discussion, which is gratifying.

6) Annotations can also be a great way to keep track of thoughts and ideas. I’ve found myself using h. as a kind of “digital scratchpad”, jotting down things that I want to remember or follow up on later.

7) The fact that annotations are public by default is both good and bad. On the one hand, it encourages people to be thoughtful and considerate in their annotations. On the other hand, it can be intimidating, and I’ve found myself self-censoring on occasion.

8) The fact that annotations are tied to a specific URL can be both good and bad. On the one hand, it ensures that annotations are always relevant to the context in which they were made. On the other hand, it can be frustrating when the context changes (e.g. an article is updated) and the annotation is no longer relevant.

9) The fact that annotations are stored centrally (on h.'s servers) is both good and bad. On the one hand, it makes it easy to access and share annotations. On the other hand, it raises privacy and security concerns.

10) Overall, I’m very happy with h. and find it to be a valuable tool. I’m looking forward to using it more in the future.

Some first reactions to that GPT-3 text

  • The title (10 things about…), plus partly providing the first numbered element, created the expected 10-point list. It’s a common enough marketing/personal-branding type format for GPT-3 to pick up on. (Why did I use it yesterday? Because I was too lazy to categorise my observations, so I just listed and numbered them as I wrote them.)
  • Ignoring obvious things like the repetitiveness of the text, after reading you’re left feeling you’ve read a bunch of words, yet unsure whether you read anything of substance. The text suggests there’s substance though: the first-person perspective, the mention of specific aspects. Yet it doesn’t really follow up on any of it. It reads like marketing text, content-farm text. It mentions things but doesn’t answer questions, nor pose them. Not a surprise, but it does show how much empty stuff is already out there (on which GPT-3 was trained in the first place). I recognise it as low-info text, but not necessarily as generated text.
  • No links! No links, other than sporadic internal ones, is the default in the media, I know. Yet hyperlinks are the strands the Web is made of. They allow pointing to side paths of relevance, to the history and context of which the posting itself is a result, to the conversation it is intended to be part of and situated in. Their absence, the pretense that the artefact is a stand-alone and self-contained thing, is a tell. It’s also a weakness in other online texts, or any text, as books and journals can be filled with links in the shape of footnotes, references and mentions in the text itself.
  • The text conveys no proof of work (to borrow a term) other than that the words have been written. No world behind the text, of which the text is a resulting expression. No examples that suggest or prove the author tried things out or looked things up. Compare that to the actual posting, which in point 1 talks about social connections around the topic, links to other h. user profiles as data points for comparison, and elsewhere points to examples of behaviour, to lists of h. users found, created and shared, references other tools (Zotero, Obsidian) and a larger scope (PKM workflows) outside the topic at hand, and mentions experimental changes to the site it is published on. That all tells of some exploration, of which the posting is the annotation. This also goes back to my earlier remark about using a 10-point list out of laziness in the face of categorising things, as I’ve done in other posts (see what I did there? No links, cause lazy).

I think that’s the biggest thing I take from this: any text should at least hint at the rich tapestry of things it results from, if not directly discuss or link to them. A tapestry not just made from other texts, but from other actions taken (things created, data collected, tools made or adapted), and from people (whose thoughts you build on, whose behaviour you observe and adopt, who you interact with outside of the given text). Whether a text is GPT-3 generated or not, that holds.

Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in fascinating ways. Have it stage a discussion between two famous innovators and let them duke it out over a fundamental question, run your ideas by an impersonation of Steve Jobs, use it for a first exploration of a domain that’s new to you (while being aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, to make the model spit out what you want more closely. Prompt design is an important element in large-scale listening too, to tap into a rich interpreted stream of narrated experiences. I can do prompt design to get people to share their experiences, and it would be fascinating to try that approach out on GPT-3 (a rough sketch of what that might look like follows after this list).
  • He mentions Matt Webb’s 2020 post about prompting, quoting: “it’s down to the human user to interview GPT-3”. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotation. Luhmann talks about the need for his notes collection to be thematically open-ended, and about the factual status of information being a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3’s results can be totally bogus matters much less, as it’s the interaction that leads to new notions within yourself; you’re not after using GPT-3’s output as fact or as a finished result.
  • Are all of us building notes collections, especially those mimicking Luhmann as if he were the originator of such systems of note taking, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using it as a research assistant. In a much more specific way than he describes, this is what Elicit, the tool I just mentioned here, does, also based on GPT-3: you give Elicit your research question as a prompt and it comes up with relevant papers that may help answer it.
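To make that first point a bit more concrete, here is a hypothetical example of the kind of prompt design meant above: framing the prompt as an interview, so the model narrates an ‘experience’ the way a human respondent might. Again a sketch, assuming the pre-1.0 openai Python package; the persona, question and parameters are made-up illustrations.

```python
# Hypothetical prompt design: frame the exchange as an interview, so the
# model narrates 'experiences' the way a human respondent might.
# Assumes the pre-1.0 openai Python package; all names are illustrative.
import openai

openai.api_key = "..."

interview_prompt = (
    "The following is an interview with a long-time user of the Hypothes.is "
    "social annotation tool about how annotating changed their reading.\n\n"
    "Interviewer: You made over 100 annotations in a month. "
    "What changed in how you read the web?\n"
    "User:"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=interview_prompt,
    max_tokens=200,
    temperature=0.7,
    stop=["Interviewer:"],  # stop before the model writes the next question
)
print(response.choices[0].text.strip())
```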

On first reading this is like opening a treasure trove, albeit a booby-trapped one. Need to go through this in much more detail and follow up on sources and associations.

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson