This conversation with GPT-3, held on 11 August 2022, describes an alternate universe in which I was a prominent business blogger: an interpretation of my two decades of blogging as generated by GPT-3.
(Text in bold are my prompts to the GPT-3 algorithm. Code-formatted text is the AI's response. Italic text between parentheses is my commentary on the algorithm's output.)

When did Ton Zijlstra start blogging?
Ton Zijlstra started blogging in 2003.

(I started blogging in 2002 on Blogspot, but switched to my own domain in the spring of 2003, so this could be an understandable internet search outcome)

Why did Ton Zijlstra start blogging and what were the topics he blogged about?

Ton Zijlstra started blogging to share his knowledge and thoughts on various topics with the world. He has blogged about topics such as business, technology, and personal development.

(this is particularly bland meaningless text)

What is the most impactful posting he ever wrote on his blog?

The most impactful posting Ton Zijlstra ever wrote on his blog is "The Power of Connections".
In this posting, he talks about the importance of networking and building relationships with others. "The Power of Connections" was his most impactful blog because it helped him to connect with others and build relationships that have helped him both personally and professionally.

What are some of the awards Ton Zijlstra has won for his blog?

Ton Zijlstra has won the Best Business Blog Award in 2006 and 2007, and the Best European Business Blog Award in 2008.

(No such posting exists. I did cover the topic, though I would not have created such a sales-pitchy title.
GPT-3 comes up with a question of its own here! This blog(ger) has never won or competed for awards.)

Of course he didn’t blog just on his own, he was part of a network of bloggers and their blogs that together had meaningful conversations. A great example of such a cross-blog conversation that garnered much attention is….
...the one on Ton's weblog on 13 September 2005, where he asked What is a blog?. This was picked up by Doc Searls, Stowe Boyd, Dave Winer, Robert Scoble, and many others, and is still being referenced today.
(This is intriguing, mentioning a specific date. No such posting exists on my blog; I did not even post on that date. My blog never asked "what is a blog", at least a search comes up empty, but it is a typical question from the time when we blogged a lot about figuring out blogging. The bloggers mentioned, while US-centric, would all have had such discussions, which I would have been aware of in my feedreader. Where I, in the prompt, hint at conversations in a network of bloggers, GPT-3 turns that into a single blogpost by me being 'picked up', phrasing it as amplification rather than conversation.)

There were instances where Ton’s blogposts carried over into academic research. An early example of that was…
...his weblogpost on 'the daily me', where he pointed out the potential dangers of a world in which we would only be exposed to information that we already like, and which confirmed our beliefs. This was picked up and elaborated on by danah boyd in her doctoral dissertation.

(The Daily Me is a term used by MIT Media Lab's Nicholas Negroponte in 'Being Digital', a book from 1995. boyd's 2008 dissertation "Taken out of context: American teen sociality in networked publics" doesn't mention it, although it references Negroponte's 1995 book in passing. Blog conversations, including one in which my blog played a central role, were a key part of Lilia Efimova's 2009 dissertation titled "Passion at work: blogging practices of knowledge workers". See the final image of such widespread conversations, called the 'big one' by Anjo Anjewierden who created the image, at the end of this post.)

Ton's blog has been a source of inspiration for many people. One of his blogposts that has been particularly popular, is the one in which he proposed the 10-90 rule: for every hour you spend creating content, you should spend 9 hours connecting with others. This blogpost has been referenced by many people, and has been translated into several languages. Ton Zijlstra's blog has had a significant impact on the blogging community and has helped to shape the way we think about blogging and its role in our lives.

(Ha, this phrasing is all so brand-me blogging and bland in tone (which reflects a lot of blogging)! Online search surfaces a 10-90 rule of thumb from the pre-online advertising era, where 10% of budget would go to ad content and 90% to buying space in media (air time, pages in print). Here that is turned into time spent on content versus making connections with others. There might be something to it as a suggestion, although turning it into a rule of thumb makes connection more of a means than an end in itself.)


Image generated with Stable Diffusion using the prompt “A human and an android robot have a conversation in a podcast interview setting, oil on canvas painting style”, public domain.

John Caswell writes about the role of conversation, saying "conversation is an art form we’re mostly pretty rubbish at". New tools that employ LLMs, such as GPT-3, can only be used well by those who learn to prompt them effectively. Essentially we’re learning to have a conversation with LLMs so that their outputs are usable for the prompter. (As I’m writing this my feedreader updates to show a follow-up post about prompting by John.)

Last August I wrote about articles by Henrik Olof Karlsson and Matt Webb that discuss prompting as a skill of newly increasing importance.

Prompting to get a certain type of output instrumentalises a conversation partner, which is fine when using LLMs, but not in conversations with people. In human conversation the prompting is less about ensuring output that is useful to the prompter and more about assisting the other to express themselves as well as they can (meaning usefulness will be a guaranteed side effect if you are genuinely interested in your conversational counterpart). In human conversation the other is another conscious actor in the same social system (the conversation) as you are.

John takes the need for us to learn to prompt LLMs better and asks whether we’ll also learn how to better prompt conversations with other people. That would be great. Many conversations take the form of the listener listening less to the content of what others say and more for the right moment to jump in with what they themselves want to say. Broadcast-driven versus curiosity-driven. Me and you, we all do this. Getting consciously better at avoiding that common pattern is a win for all.

In parallel Donald Clark wrote that the race to innovate services on top of LLMs is on, spurred by OpenAI’s public release of chatGPT in November. The race is indeed on, although I wonder whether those getting into the race all have an actual sense of what they’re racing and are racing towards. The generic use of LLMs currently at the centre of public discussion might, I think, be less promising than gearing them towards specific contexts. Back in August I mentioned Elicit, for instance, which helps you kick off a literature search based on a research question. Other niche applications are sure to be interesting too.

The generic models are definitely capable of hallucinating in ways that reinforce our tendency towards anthropomorphism (which needs little reinforcement as it is). Very, very ELIZA. Even if on occasion it creeps you out, as when Bing’s implementation of GPT declares its love for you and starts suggesting you don’t really love your life partner.

I associated what Karlsson wrote with how one can interact with one’s personal knowledge management system, the way Luhmann described his note cards as a communication partner. Luhmann talks about the value of being surprised by whatever person or system you’re communicating with. (The anthropomorphism kicks in if, based on that surprise, we then ascribe intention to the system we’re communicating with.)

Being good at prompting is relevant in my work where change in complex environments is often the focus. Getting better at prompting machines may lift all boats.

I wonder if, as part of the race that Donald Clark mentions, we will see LLMs applied as personal tools: where I feed a more open LLM like BLOOM my blog archive and my notes, run it as a personal instance (for which the full BLOOM model is too big, I know), and then use it to have conversations with myself. Prompting that system to have exchanges about the things I previously wrote down in my own words, with results that phrase things in my own idiom and style. Now that would be very interesting to experiment with. What valuable results and insight progression would it yield? Can I have a salon with myself and my system, and/or with perhaps a few others and their systems? What pathways into the uncanny valley will it open up? For instance, is there a way to radicalise yourself (like social media can) through the feedback loops of association between your various notes, notions and follow-up questions/prompts?
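One way such a personal instance might work is sketched below, assuming nothing about BLOOM itself: pull passages from a folder of markdown notes by naive keyword overlap and compose them into a prompt that a locally run model could then complete in one's own idiom. The `retrieve` and `compose_prompt` helpers and the notes folder layout are all hypothetical, a minimal sketch rather than a real setup.

```python
# Minimal sketch: retrieve the notes most relevant to a question and build a
# prompt from them, to be handed to a locally run language model.
from pathlib import Path


def retrieve(notes_dir, question, top_n=3):
    """Score each note by how many words it shares with the question,
    and return the texts of the best-scoring notes."""
    words = set(question.lower().split())
    scored = []
    for note in Path(notes_dir).glob("*.md"):
        text = note.read_text(encoding="utf-8")
        overlap = len(words & set(text.lower().split()))
        scored.append((overlap, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_n]]


def compose_prompt(notes_dir, question):
    """Build a prompt that grounds the model's answer in the notes."""
    passages = "\n---\n".join(retrieve(notes_dir, question))
    return f"Using only these notes:\n{passages}\n\nQuestion: {question}\nAnswer:"
```

A real setup would swap the keyword overlap for proper embedding-based retrieval, but the shape of the loop, notes in, prompt out, answer in one's own words, would stay the same.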



An image generated with Stable Diffusion with the prompt “A group of fashionable people having a conversation over coffee in a salon, in the style of an oil on canvas painting”, public domain.

I added chatGPT to Obsidian, following these steps to install an experimental plugin. I also set up a pay-as-you-go account with OpenAI, as I’ve used up my trial period for both Dall-E and GPT-3.

At first glance the GPT-3 plugin works fine, although what seems to be missing is the actual chat part, the back and forth where you build on a response with further prompts. The power of chatGPT as a tool lies in that iterative prompting, I think.
You can still iterate by prompting chatGPT with the entirety of the previous exchange, but that is slightly cumbersome (it means you’d go into a note, delete the previous exchange, and then have the plugin re-add it plus the next generated part).
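That workaround can be sketched as follows, assuming nothing about the plugin's internals: keep the whole exchange in a list and flatten it into a single prompt each turn, since a bare completion call has no memory between requests. The speaker labels are a hypothetical way to serialise the exchange.

```python
# Sketch of iterating with a memoryless completion endpoint by resending
# the full exchange as one prompt each turn.

def build_prompt(exchange, new_message):
    """Flatten the running exchange plus the new message into a single prompt."""
    turns = exchange + [("Human", new_message)]
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    return "\n".join(lines) + "\nAI:"


exchange = [
    ("Human", "Summarise social annotation in one line."),
    ("AI", "Adding shared notes in a public layer on top of web pages."),
]
prompt = build_prompt(exchange, "Now relate that to blogging.")
# `prompt` would be sent to the completion endpoint; the reply is then
# appended to `exchange` as an ("AI", reply) tuple before the next turn.
```
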

I think it will be more fruitful to do the entire exchange with chatGPT in the browser and manually grab the content if I want to use it in Obsidian. The MarkDownload browser extension capably grabs the entire exchange with chatGPT and stores it as markdown in my Obsidian notes as well.

With the release of various interesting text generation tools, I’m starting an experiment this and next month.

I will be posting computer generated text, prompted by my own current interests, to a separate blog and Mastodon account. For two months I will explore how such generated texts may create interaction or not with and between people, and how that feels.

There are several things that interest me.

I currently experience generated texts as often bland, as flat planes of text not hinting at any richness of experience on the part of the author behind them. The texts are fully self-contained, don’t acknowledge a world outside of them, let alone incorporate facets of that world within themselves. In a previous posting I dubbed this an absence of ‘proof of work’.

Looking at human agency and social media dynamics, asymmetries often take agency away. It is many orders of magnitude easier to (auto)post disinformation or troll than it is for individuals to guard and defend against it. Generated texts seem to introduce new asymmetries: it is much cheaper to generate and share reams of text than it is, in terms of attention and reading, for an individual to determine whether they are actually engaging with someone’s intentionally expressed meaning, or are confronted with output where only the prompt that created it held human intention.

If we interact with a generated text by ourselves, does that convey meaning or learning? If annotation is conversation, what does annotating generated texts mean to us? If multiple annotators interact with each other, does new meaning emerge, does meaning shift?

Can computer generated texts be useful or meaningful objects of sociality?

Right after I came up with this, my Mastodon timeline passed me this post by Jeff Jarvis, which seems to be a good example of things to explore:


I posted this imperfect answer from GPTchat and now folks are arguing with it.

Jeff Jarvis

My computer generated counterpart in this experiment is Artslyz Not (which is me and my name, having stepped through the looking glass). Artslyz Not has a blog, and a Mastodon account. Two computer generated images show us working together and posing together for an avatar.


The generated image of a person and a humanoid robot writing texts


The generated avatar image for the Mastodon account

In the past weeks I’ve been playing at times in OpenAI’s GPT-3 playground. You give GPT-3 a prompt and it will provide some generated text. You could use that as a prompt again, or add something yourself and let GPT-3 continue.
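That playground loop in miniature looks something like this; `generate()` is a stand-in for a real completion call (entirely hypothetical, it returns canned text here). Each round, the generated text, plus optionally your own addition, becomes part of the next prompt.

```python
# Sketch of the playground pattern: completion output is folded back into
# the prompt, alternating with the author's own continuations.

def generate(prompt):
    """Placeholder for a completion endpoint; returns canned text here."""
    return " which is why distributed conversations still matter."


def continue_text(prompt, own_addition="", rounds=1):
    """Extend the prompt with generated text and the author's own addition."""
    for _ in range(rounds):
        prompt += generate(prompt) + own_addition
    return prompt


result = continue_text("Blogs connect people across sites,", " For example,")
```
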

This morning, I read Liam Porr’s 2020 posting about a GPT-3 generated blog he kept for a few weeks. (I think keeping it up longer and positioning it like a personal blog would mean certain detection as generated or inauthentic. Unlike when Porr did his experiment, GPT-3 is now available to many, and more people would also be suspicious of texts as potentially generated to begin with.)

For fun I prompted the GPT-3 playground with the title and first few lines of my last posting on my recent use of the Hypothes.is social annotation tool.

My post remade by GPT-3

Below is the result GPT-3 gave me. After pasting the block-quoted part from the original as prompt, it generated the grey coded part.

10 Thoughts After Making 100 Annotations in Hypothes.is

I’ve now added over 100 annotations using Hypothes.is (h.), almost all within the last month. This includes a few non-public ones. Two weeks ago I wrote down some early impressions, to which I’m now adding some additional observations.

1) 100 annotations (in a month) don’t seem like a lot to me, if h. is a regular tool in one’s browsing habit.

2) The act of annotation feels good, even empowering. It’s a way of taking control of the web, or at least of one’s web experience.

3) Annotations can be very short, even just a single word. This is both good (e.g. for quickly annotating a lot of text) and bad (e.g. when trying to make a more nuanced point).

4) The annotation interface could be improved. In particular, it would be nice to have an easy way to see all annotations on a page, without having to open each one individually.

5) Annotations can be a great way to start a conversation. I’ve had a few annotations that have generated discussion, which is gratifying.

6) Annotations can also be a great way to keep track of thoughts and ideas. I’ve found myself using h. as a kind of “digital scratchpad”, jotting down things that I want to remember or follow up on later.

7) The fact that annotations are public by default is both good and bad. On the one hand, it encourages people to be thoughtful and considerate in their annotations. On the other hand, it can be intimidating, and I’ve found myself self-censoring on occasion.

8) The fact that annotations are tied to a specific URL can be both good and bad. On the one hand, it ensures that annotations are always relevant to the context in which they were made. On the other hand, it can be frustrating when the context changes (e.g. an article is updated) and the annotation is no longer relevant.

9) The fact that annotations are stored centrally (on h.'s servers) is both good and bad. On the one hand, it makes it easy to access and share annotations. On the other hand, it raises privacy and security concerns.

10) Overall, I’m very happy with h. and find it to be a valuable tool. I’m looking forward to using it more in the future.

Some first reactions to that GPT-3 text

  • The title (‘10 things about…’) and partly adding the first numbered element created the expected 10-point list. It’s a common enough marketing/personal-branding format for GPT-3 to pick up on. (Why did I use it yesterday? Because I was too lazy to categorise my observations, so I just listed and numbered them as I wrote them.)
  • Ignoring obvious things like the repetitiveness of the text, after reading you’re left feeling like you’ve read a bunch of words, but unsure whether you read anything of substance. The text suggests there’s substance, though: the first-person perspective, the mention of specific aspects. Yet it doesn’t really follow up on any of it. It reads like marketing text, like content-farmed text. It mentions things but doesn’t answer questions, nor pose them. Not a surprise, but it does show how much empty stuff is already out there (on which GPT-3 was trained in the first place). I recognise it as low-info text, but not necessarily as generated text.
  • No links! No links, other than sporadic internal links, is the default in the media, I know. Yet hyperlinks are the strands the Web is made of. They allow pointing to side paths of relevance, to the history and context of which the posting itself is a result, to the conversation it is intended to be part of and situated in. Their absence, the pretense that the artefact is a stand-alone and self-contained thing, is a tell. It’s also a weakness in other online texts, or any text, as books and journals can be filled with links in the shape of footnotes, references and mentions in the text itself.
  • The text conveys no proof of work (to borrow a term) other than that the words have been written. No world behind the text, of which the text is a resulting expression. No examples that suggest or prove the author tried things out, looked things up. Compare that to the actual posting, which in point 1 talks about social connections around the topic, links to other h. user profiles as data points for comparison, and elsewhere points to examples of behaviour, lists of h. users found, created and shared, references other tools (Zotero, Obsidian) and larger scopes (PKM workflows) outside the topic at hand, and experimental changes in the site it is published on itself. That all tells of some exploration, of which the posting is the annotation. This also goes back to my earlier remark about using a 10-point list as laziness in the face of categorising things as I’ve done in other posts (see what I did there? No links, ’cause lazy).

I think that’s the biggest thing that I take from this: any text should at least hint at the rich tapestry of things it is resulting from, if not directly discuss it or link to it. A tapestry not just made from other texts, but other actions taken (things created, data collected, tools made or adapted), and people (whose thoughts you build on, whose behaviour you observe and adopt, who you interact with outside of the given text). Whether it’s been GPT-3 generated or not, that holds.

Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in such fascinating ways. Have it stage a discussion between two famous innovators who duke it out over a fundamental question, run your ideas by an impersonation of Steve Jobs, use it to first explore a domain that is new to you (while being aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, to make the model spit out what you want more closely. Prompt design is an important feature in large scale listening, to tap into a rich interpreted stream of narrated experiences. I can do prompt design to get people to share their experiences, and it would be fascinating to try that experience out on GPT-3.
  • He mentions Matt Webb’s 2020 post about prompting, quoting “it’s down to the human user to interview GPT-3”. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotation. Luhmann talks about the need for his notes collection to be thematically open-ended, and for the factual status (or not) of information to be a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3’s results can be totally bogus is much less relevant, as it’s the interaction that leads to new notions within yourself, and you’re not after using GPT-3’s output as fact or as a finished result.
  • Are all of us building notes collections, especially those mimicking Luhmann as if he were the originator of such note-taking systems, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using GPT-3 as a research assistant. In a much more specific way than he describes, this is what Elicit, the tool I just mentioned, does, also based on GPT-3. You give Elicit your research question as a prompt and it will come up with relevant papers that may help answer it.

On first reading this is like opening a treasure trove, albeit a boobytrapped one. Need to go through this in much more detail and follow up on sources and associations.

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson