The DALL-E image generation model is now available for everyone to play with. Do realise that while the entity providing DALL-E is called OpenAI, it is no such thing.


Generated with DALL-E using the prompt “A color photograph of a 1960s green Volkswagen camper van parked underneath a night sky with the milky way running from front to back, as a commercial airliner with two contrails flies past from left to right”, a single image selected from the first iteration of four.
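For those who would rather script this than use the web interface (which is how I made the image above), the same generation can be driven through OpenAI’s API. A minimal sketch, assuming the pre-1.0 openai Python package and your own API key:

import openai

openai.api_key = "sk-..."  # your own OpenAI API key

# Request one iteration of four images, mirroring the caption above
response = openai.Image.create(
    prompt=(
        "A color photograph of a 1960s green Volkswagen camper van parked "
        "underneath a night sky with the milky way running from front to back, "
        "as a commercial airliner with two contrails flies past from left to right"
    ),
    n=4,
    size="1024x1024",
)

# The API returns temporary download links for the generated images
urls = [item["url"] for item in response["data"]]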

Stable Diffusion, an open-source model that can be run on your own hardware, produces this with the same prompt:


Generated with Stable Diffusion using the prompt “A color photograph of a 1960s green Volkswagen camper van parked underneath a night sky with the milky way running from front to back, as a commercial airliner with two contrails flies past from left to right”, a single image selected from the first iteration of four.
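Running Stable Diffusion on your own hardware takes only a few lines with Hugging Face’s diffusers library. A minimal sketch, assuming a CUDA-capable GPU and the runwayml/stable-diffusion-v1-5 checkpoint (the checkpoint choice is my assumption, any Stable Diffusion weights will do):

import torch
from diffusers import StableDiffusionPipeline

prompt = (
    "A color photograph of a 1960s green Volkswagen camper van parked "
    "underneath a night sky with the milky way running from front to back, "
    "as a commercial airliner with two contrails flies past from left to right"
)

# Download the weights from the Hugging Face hub and move them to the GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate four candidates, matching the "first iteration of four" above
images = pipe(prompt, num_images_per_prompt=4).images
for i, image in enumerate(images):
    image.save(f"camper-van-{i}.png")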

Bookmarked AI Liability Directive (PDF) (by the European Commission)

This should be interesting to compare with the proposed AI Regulation. The AI Regulation specifies under which conditions an AI product or service, or the output of one, will or won’t be admitted to the EU market (literally a CE mark for AI), based on a risk assessment that includes risks to civic rights, equipment safety and critical infrastructure. That text is still very much under negotiation, but is a building block of the combined EU digital and data strategies. In parallel, the EU is modernising its product liability rules, which now include damage caused by AI-related products and services within their scope. Both are listed on the EC’s page concerning AI, so some integration is to be expected. Is the proposal already anticipating parts of the AI Regulation, or does it try to replicate some of it within this Directive? Is it fully aligned with the proposed AI Regulation, or are there surprising contrasts? As this proposal is a Directive (which needs to be transposed into national law in each Member State), whereas the AI Regulation becomes law without such national transposition, that too is a dynamic of interest, in the sense that this Directive builds on existing national legal instruments. There was a consultation on this Directive in late 2021. Which DG created this proposal, DG JUST?

The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.

The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. … In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.

In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. … Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation. Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.

European Commission

The interwebs have been full of AI generated imagery. The AI model used is OpenAI’s Dall-E 2 (a portmanteau of WALL-E and Dalí). Images are created based on a textual prompt (e.g. “Michelangelo’s David with sunglasses on the beach”); natural language interpretation is then used to make a composite image. Some of the examples going ’round were quite impressive (see e.g. OpenAI’s site, and the Kermit in [Movie Title Here] overview was much fun too).


One of the images resulting from the prompt ‘Lego Movie with Kermit’, which I entered into Dall-E Mini. I consider this a Public Domain image, as it does not pass the ‘creativity involved’ threshold for copyright to apply, which generally presupposes a human creator (so neither AI nor macaques).

OpenAI hasn’t released the Dall-E model itself for others to play with, but there is a Dall-E mini available, seemingly trained on a much smaller data set.

I played around with it a little bit. My experimentation leads to the conclusion that either Dall-E mini suffers from “stereotypes in gives you stereotypes out”, given its clear bias towards the Netherlands’ most basic icons of windmills (renewable energy ftw!) and tulip fields, or that, whatever happens in the coming decades, we here in the Rhine delta won’t see much change.

Except for Thai flags, we’ll be waving those, apparently.

The past of Holland:

Holland now:

The innovation of Holland:

The future of Holland:

Four sets of images resulting from prompts I entered into the Dall-E mini model. The prompts were “The past of Holland”, “Holland now”, “The innovation of Holland” and “The future of Holland”. All result in windmills and tulip fields. Note, in the bottom left of “The future of Holland”, that Thai flags will be waved. I consider these Public Domain images, as they do not pass the ‘creativity involved’ threshold for copyright to apply, which generally presupposes a human creator. Their arrangement in this blog post does carry copyright though, and the Creative Commons license top-right applies to that arrangement. IANAL.

Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)

A curious and interesting case is emerging from Google: one of its engineers claims that a chatbot AI (LaMDA) they created has become sentient. The engineer has been suspended for discussing confidential info in public. There is, however, an intriguing tell about Google’s approach to ethics in how it phrases its statement on the matter: “He is a software engineer, not an ethicist”. In other words, the engineer should not worry about ethics, they’ve got ethicists on the payroll for that. Worrying about ethics is not the engineer’s job. That perception means you yourself can stop thinking about ethics: it’s been allocated, and you can just take the results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is its main aim. Going by that statement, treating ethics as a practice is definitely not welcomed. Maybe they should open a conversation with that LaMDA AI chatbot about those ethics, to help determine the program’s sentience 🙂

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.

Bookmarked Dust Rising: Machine learning and the ontology of the real (by David Weinberger)

I am looking forward to reading this. I will need to put aside some time to be able to really focus, given the author and the amount of time he took to write it.

…an article I worked on for a couple of years. It’s only 2,200 words, but they were hard words to find because the ideas were, and are, hard for me. … The article argues, roughly, that the sorts of generalizations that machine learning models embody are very different from the sort of generalizations the West has taken as the truths that matter.

David Weinberger

My first reading of the yet-to-be-published EU Regulation on the European Approach for Artificial Intelligence, based on a leaked version, leaves a pretty good impression. It takes a logical approach, laid out in the 92 recitals preceding the articles, based on risk assessment, where erosion of human and citizen rights, or risk to key infrastructure, services and product safety, is deemed high risk by definition. High risk means stricter conditions, following some of the building blocks of the GDPR, also when it comes to governance and penalties. Those conditions are tied to being allowed to put a product on the market, and to how products perform in practice (not just how they are intended to perform). I find that an elegant combination: risk assessment based on citizen rights and critical systems, connected to well-worn mechanisms of market access and market monitoring. It places those conditions on producers and users alike, as well as on other parties along the supply chain. The EU approaches to data and AI align well this way it seems, and express the European geopolitical proposition concerning data and AI, centered on civic rights, in codified law. That codification, like the GDPR, is how the EU exports its norms elsewhere.

The text should be published soon by the EC, and I will attempt a more detailed write-up then.