With the release of various interesting text generation tools, I'm starting an experiment this month and next.

I will be posting computer generated text, prompted by my own current interests, to a separate blog and Mastodon account. For two months I will explore whether and how such generated texts create interaction with and between people, and how that feels.

There are several things that interest me.

I currently experience generated texts as often bland: flat planes of text that don't hint at any richness of experience on the part of the author behind them. The texts are fully self-contained; they don't acknowledge a world outside themselves, let alone incorporate facets of that world. In a previous posting I dubbed this an absence of 'proof of work'.

Looking at human agency and social media dynamics, asymmetries often take agency away. It is many orders of magnitude easier to (auto)post disinformation or troll than it is for individuals to guard and defend against it. Generated texts seem to introduce new asymmetries: it is much cheaper to generate reams of text and share them than it is, in terms of attention and reading, for an individual to determine whether they are actually engaging with someone's intentionally expressed meaning, or are confronted with a type of output where only the prompt that created it held human intention.

If we interact with a generated text by ourselves, does that convey meaning or learning? If annotation is conversation, what does annotating generated texts mean to us? If multiple annotators interact with each other, does new meaning emerge, does meaning shift?

Can computer generated texts be useful or meaningful objects of sociality?

Right after I came up with this, this post by Jeff Jarvis came past in my Mastodon timeline, which seems to be a good example of things to explore:


I posted this imperfect answer from GPTchat and now folks are arguing with it.

Jeff Jarvis

My computer generated counterpart in this experiment is Artslyz Not (which is me and my name, having stepped through the looking glass). Artslyz Not has a blog, and a Mastodon account. Two computer generated images show us working together and posing together for an avatar.


The generated image of a person and a humanoid robot writing texts


The generated avatar image for the Mastodon account

Bookmarked Target_Is_New, Issue 212 by Iskander Smit

Iskander asks what about users, next to makers, when it comes to responsible AI. For a slightly different type of user at least, such responsibilities are being formulated in the proposed EU AI Regulation, as well as the connected AI Liability Directive. There, not just the producers and distributors of services or products containing AI have responsibilities, but also those who deploy them in practice or use their outputs. He's right that most discussions focus on the established system of making, training and deploying AI, and that we should also look outside that system, which in this case is where the people using AI, or using its outputs, reside. That's why I like the EU's legislative approach: it doesn't aim to regulate the system as seen from within it, but focuses on the access conditions for such products to the European market, and on the impact they have within society. Of course, these proposals are still under negotiation, and it remains to be seen what survives at the end of that process.

As I wrote down as thoughts while listening to Dasha Simons; we are all convinced of the importance of explainability, transparency, and even interpretability, all focused on making the system responsible and, with them, the makers of the system. But what about the responsibility of the users? Are they also part of the equation, should they be responsible too? As the AI (or what term we use) is continuous learning and shaping, the prompts we give are more than a means to retrieve the best results; it is also part of the upbringing of the AI. We are, as users, also responsible for good AI as the producers are.

Iskander Smit

The DALL-E imaging algorithm is now available to everyone to play with. Do realise that while the entity providing DALL-E is called OpenAI, it is no such thing.

An AI generated color photograph of a 1960s green Volkswagen camper van parked underneath a night sky with the milky way running from front to back, as a commercial airliner with two contrails flies past from left to right

Generated with DALL-E using the prompt “A color photograph of a 1960s green Volkswagen camper van parked underneath a night sky with the milky way running from front to back, as a commercial airliner with two contrails flies past from left to right.”, a single image selected from the first iteration of four.
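As an aside, for those who'd rather script this than click through the web interface: below is a minimal sketch of requesting the same image programmatically, assuming OpenAI's Images API and the legacy pre-1.0 openai Python client. The API key placeholder and the print loop are my own illustrative additions, not anything prescribed by DALL-E itself.

# Sketch only: assumes the legacy openai Python client (pre-1.0) and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder, set your own key here

response = openai.Image.create(
    prompt=(
        "A color photograph of a 1960s green Volkswagen camper van parked "
        "underneath a night sky with the milky way running from front to back, "
        "as a commercial airliner with two contrails flies past from left to right."
    ),
    n=4,                # four candidates per iteration, as in the web interface
    size="1024x1024",
)

# Each entry holds a temporary URL pointing to one generated image.
for i, item in enumerate(response["data"]):
    print(i, item["url"])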

Stable Diffusion, an open source model that can be run on your own hardware, produces this with the same prompt:


Generated with Stable Diffusion using the prompt “A color photograph of a 1960s green Volkswagen camper van parked underneath a night sky with the milky way running from front to back, as a commercial airliner with two contrails flies past from left to right.”, a single image selected from the first iteration of four.
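For those wanting to try this on their own hardware: a minimal sketch of running the same prompt locally, assuming the Hugging Face diffusers library, the CompVis/stable-diffusion-v1-4 weights (downloading them requires a Hugging Face account and accepting the model licence), and a CUDA-capable GPU. Your local setup may differ.

# Sketch only: assumes diffusers is installed and the v1.4 weights are accessible.
import torch
from diffusers import StableDiffusionPipeline

# Load the v1.4 weights; float16 roughly halves VRAM use on consumer GPUs.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = (
    "A color photograph of a 1960s green Volkswagen camper van parked "
    "underneath a night sky with the milky way running from front to back, "
    "as a commercial airliner with two contrails flies past from left to right."
)

# Generate four candidates in one pass, mirroring DALL-E's iterations of four.
images = pipe(prompt, num_images_per_prompt=4).images
for i, image in enumerate(images):
    image.save(f"camper-van-{i}.png")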

Bookmarked AI Liability Directive (PDF) (by the European Commission)

This should be interesting to compare with the proposed AI Regulation. The AI Regulation specifies under which conditions an AI product or service, or the output of one, will or won't be admitted to the EU market (literally a CE mark for AI), based on a risk assessment that includes risks to civic rights, equipment safety and critical infrastructure. That text is still very much under negotiation, but is a building block of the combined EU digital and data strategies. In parallel the EU is modernising its product liability rules, now bringing damage caused by AI-related products and services within their scope. Both are listed on the EC's page concerning AI, so some integration is to be expected. Is the proposal already anticipating parts of the AI Regulation, or does it try to replicate some of it within this Directive? Is it fully aligned with the proposed AI Regulation, or are there surprising contrasts? As this proposal is a Directive (which needs to be translated into national law in each Member State), while the AI Regulation becomes law without such national translation, that too is a dynamic of interest, in the sense that this Directive builds on existing national legal instruments. There was a consultation on this Directive late 2021. Which DG created this proposal, DG Just?

The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.

The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. … In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.

In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. … Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation. Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.

European Commission

The interwebs have been full of AI generated imagery. The model used is OpenAI's Dall-E 2 (the name a blend of Wall-E and Dalí). Images are created based on a textual prompt (e.g. Michelangelo's David with sunglasses on the beach); natural language interpretation is then used to make a composite image. Some of the examples going 'round were quite impressive (see e.g. OpenAI's site, and the Kermit in [Movie Title Here] overview was much fun too).


One of the images resulting from entering the prompt 'Lego Movie with Kermit' into Dall-E Mini. I consider this a Public Domain image, as it does not pass the 'creativity involved' threshold for copyright to apply, which generally presupposes a human creator (so neither AI nor macaques).

OpenAI hasn’t released the Dall-E algorithm for others to play with, but there is a Dall-E mini available, seemingly trained on a much smaller data set.

I played around with it a little bit. My experimentation leads to the conclusion that Dall-E mini either suffers from 'stereotypes in, stereotypes out', given its clear bias towards the Netherlands' more basic icons of windmills (renewable energy ftw!) and tulip fields. That, or whatever happens in the coming decades, we here in the Rhine delta won't see much change.

Except for Thai flags, we’ll be waving those, apparently.

The past of Holland:

Holland now:

The innovation of Holland:

The future of Holland:

Four sets of images resulting from prompts I entered into the Dall-E mini algorithm. The prompts were: the past of Holland, Holland now, the innovation of Holland, the future of Holland. All result in windmills and tulip fields. Note in the bottom left of 'the future of Holland' that Thai flags will be waved. I consider these Public Domain images, as they do not pass the 'creativity involved' threshold for copyright to apply, which generally presupposes a human creator. Their arrangement in this blog post does carry copyright though, and the Creative Commons license top-right applies to the arrangement. IANAL.

Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)

A curious and interesting case is emerging from Google: one of its engineers claims that a chatbot AI (LaMDA) they created has become sentient. The engineer has been suspended for discussing confidential info in public. There is however an intriguing tell about Google's approach to ethics in how it phrases a statement about the case: "He is a software engineer, not an ethicist." In other words, the engineer should not worry about ethics, they've got ethicists on the payroll for that. Worrying about ethics is not the engineer's job. That perception means you yourself can stop thinking about ethics; it's been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is their main aim. Going by that statement, treating ethics as a practice is definitely not welcomed. Maybe they should open a conversation with that LaMDA AI chatbot about those ethics, to help determine the program's sentience 🙂

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.