Bookmarked WordPress AI: Generative Content & Blocks (by Joe Hoyle, found via Chuck Grimmett)

Like many others I am fascinated by what generative algorithms like ChatGPT for text and Stable Diffusion for images can do. In particular I find it interesting to explore what they might do when embedded in my own workflows, and how they might change those workflows. So the link above, showing an integration of ChatGPT in WordPress’ Gutenberg block editor, drew my attention.

The accompanying video shows a mix of two features: first, having ChatGPT generate some text, or rather a table with specific data, and second, having ChatGPT generate the code for Gutenberg blocks in ‘co-pilot’ style. I think the latter might actually be useful, as I’ve seen generative AI put to good use in that area. The former, having ChatGPT write part of your posting, is clearly not advisable. And the video shows it too, although the authors don’t point it out or haven’t reflected on the fact that ChatGPT is not a search engine but is geared to coming up with plausible-sounding text without being aware of actual information. (The contrast with generating code is that code is much more highly structured in itself, so probabilities collapse more easily to the same outcome.)
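As an aside on why block code is a more tractable target than prose: Gutenberg serialises blocks as HTML wrapped in comment delimiters that carry JSON attributes, a tightly constrained format. Below is a minimal sketch of what such generated block code looks like; it uses the real @wordpress/blocks and @wordpress/block-library packages, but the paragraph content is just an invented example.

```typescript
// Minimal sketch: the serialized Gutenberg block format a 'copilot'
// would need to generate. The API calls are the real @wordpress/blocks
// ones; the content string is an invented example.
import { createBlock, serialize } from '@wordpress/blocks';
import { registerCoreBlocks } from '@wordpress/block-library';

registerCoreBlocks(); // createBlock() validates against registered block types

const block = createBlock('core/paragraph', {
  content: 'Crewed lunar missions, in chronological order:',
});

console.log(serialize(block));
// <!-- wp:paragraph -->
// <p>Crewed lunar missions, in chronological order:</p>
// <!-- /wp:paragraph -->
```

The narrow grammar of those comment delimiters is exactly why generated block code is easier to verify than generated facts: a malformed block fails visibly in the editor, a made-up budget figure does not.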

The blogpost in the video is made by generating a list of lunar missions, then turning it into a table, adding their budgets and sorting them chronologically. This looks very cool in the video, but some things jump out as not ok. Results jump around the table, for instance: Apollo 13 moves from 1970 to 2013 and changes budget. See the image below. None of the listed budgets for Apollo missions, nor their total, match up with the detailed costs overview of Apollo missions (Google Docs spreadsheet). The budget column being imaginary and the table rows jumping around make the result entirely unfit for usage, of course. It also isn’t a useful prompt: needing to fact-check every table field is likely more effort and less motivating than researching the table yourself from actual online resources directly.

It looks incredibly cool: ‘see me writing a blogpost by merely typing in my wishes, and the work being done instantly’, and there are definitely times I’d wish that to be possible. To translate a mere idea or thought into some output directly, however, means I’d skip confronting that idea with reality, with counterarguments etc. Most of my ideas only look cool inside my head, and need serious change to be sensibly made manifest in the world outside it. This video is a bit like that: an idea that looks cool in one’s head but is great rubbish in practice. ChatGPT hallucinates factoids and can’t be trusted to create your output. Using it in the context of discovery (as opposed to the context of justification of your output, such as in this video) is possible and potentially useful. However, this integration within the Gutenberg writing back-end of WordPress puts you in the output context directly, so it leads you to believe the generated plausible rubbish is output, and not just prompting fodder for your writing. Human Made is misleading you with this video, and I wouldn’t be surprised if they’re misleading themselves as well. It’s a bit like staging the ‘saw someone in half and put them together again’ magician’s trick in an operating room and inviting surgeons to re-imagine their work.

Taking a native-first approach to integrating generative AI into WordPress, we’ve been experimenting with approaches to a “WordPress Copilot” that can “speak” Gutenberg / block-editor.

Copy-pasting paragraphs between ChatGPT and WordPress only goes so far, while having the tools directly embedded in the editor … open up a world of possibilities and productivity wins…

Joe Hoyle


Image: an android robot filling out a table listing Apollo missions on a whiteboard. Generated using Midjourney.

My first reaction to the open letter calling to ‘Pause Giant AI Experiments’ is one of suspicion. It doesn’t read at all like a call for genuine reflection on generative models, nor like a call for actual inclusion of the stakeholders concerned. It’s still the kind of techno-optimistic promise we see all the time, and an invitation to place our trust in big tech. Given some of the signatories it feels like market protection too. Sort of like how large re-users of government data, after lobbying for free access for decades, were all of a sudden happy to pay a lot for data access once they discovered that free data meant a much lower barrier to entry for new players. Which makes me wonder who the pause is actually useful for.

Bookmarked The push to AI is meant to devalue the open web so we will move to web3 for compensation (by Mita Williams)

Adding this interesting perspective from Mita Williams to my notes on the effects of generative AI. She positions generative AI as bypassing the open web entirely (abstracted away into the models the AIs run on). Sharing is thus disincentivised, as sharing no longer brings traffic or conversation if it is only used as model fodder. I’m not at all sure that is indeed the case, but it has been a concern from as early as Yahoo’s 2016 Flickr images database being used for AI model training, such as in IBM’s 2019 facial recognition efforts. That in turn led to questions about whether existing (Creative Commons) licenses are still fit for purpose. Specifically, Williams pointing not only to the impact on individual creators but also to the impact at the level of the communities they form, are part of and interact in, strikes me as worth thinking more about. The erosion of (open source, maker, collaborative etc.) community structures is a whole other level of potential societal damage.

Mita Williams suggests the described erosion is not a side effect but an actual aim of tech companies, part of a bait and switch: a re-siloing, an enclosure of the commons, where once again seeing something in return for online sharing is the lure, and where the open web may fall by the wayside and become even more niche than it already is.

…these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators work on the web entirely as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet as they will no longer receive traffic…

Those who are currently wrecking everything that we collectively built on the internet already have the answer waiting for us: web3.

…the decimation of the existing incentive models for internet creators and communities (as flawed as they are) is not a bug: it’s a feature.

Mita Williams

Bookmarked Target_Is_New, Issue 212 by Iskander Smit

Iskander asks: what about users, next to makers, when it comes to responsible AI? For a slightly different type of user at least, such responsibilities are being formulated in the proposed EU AI Regulation, as well as in the connected AI Liability Directive. There, not just the producers and distributors of AI-containing services or products have responsibilities, but also those who deploy them in practice, or those who use their outputs. He’s right that most discussions focus on what happens within the established system of making, training and deploying AI, and that we should also look outside that system, which is where the people using AI, or using its outputs, reside. That’s why I like the EU’s legislative approach: it doesn’t aim to regulate the system as seen from within it, but focuses on the conditions of access for such products to the European market, and on the impact they have within society. Of course these proposals are still under negotiation, and it remains to be seen what will survive at the end of that process.

As I wrote down as thoughts while listening to Dasha Simons: we are all convinced of the importance of explainability, transparency, and even interpretability, all focused on making the system responsible and, with them, the makers of the system. But what about the responsibility of the users? Are they also part of the equation, should they be responsible too? As the AI (or whatever term we use) is continuously learning and shaping, the prompts we give are more than a means to retrieve the best results; they are also part of the upbringing of the AI. We are, as users, as responsible for good AI as the producers are.

Iskander Smit

Last Tuesday I provided the opening keynote at BeGeo, the annual conference of Belgium’s geospatial sector, organised by the Belgian National Geographic Institute. My talk was part of the opening plenary session, after the welcome by NGI’s administrator-general Ingrid Vanden Berghe, and opening remarks by Belgian Minister for Defence Ludivine Dedonder.

With both the Digital and Data strategies the EU is shaping a geopolitical proposition with regard to digitisation and data. Anchored to maximising societal benefits and strengthening citizen rights and European values as its measures of success, a wide range of novel legal instruments is being created. Those instruments provide companies, citizens, knowledge institutes and governments alike with new opportunities as well as responsibilities concerning the use of data. The formation of a single European market for data sharing and usage, the EU data space, also has consequences for all applications that depend on that data and produce data in return, including AI and digital twins.

Geo-data has a key role to play, and not just in the Green Deal data space: it finds itself at the centre of a variety of ethical issues, as it is often the linchpin connecting other data sources. The EU data space will shape the environment in which data and geo-data are shared and used for the coming decade, and it requires elevating the role and visibility of geo-data across other sectors. I explored the EU data space as geo-data’s new frontier, to provide the audience with an additional perspective and some questions for their participation at the BeGeo conference.

The slides are embedded below; you can also embed them in your own website, or download them as a PDF.

Bookmarked AI Liability Directive (PDF) (by the European Commission)

This should be interesting to compare with the proposed AI Regulation. The AI Regulation specifies under which conditions an AI product or service, or the output of one, will or won’t be admitted to the EU market (literally a CE mark for AI), based on a risk assessment that includes risks to civic rights, equipment safety and critical infrastructure. That text is still very much under negotiation, but it is a building block of the combined EU digital and data strategies. In parallel the EU is modernising its product liability rules, which now bring damage caused by AI-related products and services within their scope. Both are listed on the EC’s page concerning AI, so some integration is to be expected. Is the proposal already anticipating parts of the AI Regulation, or does it try to replicate some of it within this Directive? Is it fully aligned with the proposed AI Regulation, or are there surprising contrasts? As this proposal is a Directive (which needs to be translated into national law in each Member State), while the AI Regulation becomes law without such national translation, that too is a dynamic of interest, in the sense that this Directive builds on existing national legal instruments. There was a consultation on this Directive in late 2021. Which DG created this proposal, DG JUST?
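For readers who haven’t followed the AI Regulation: its market-access logic is risk-tiered. The sketch below is my own simplification of how I read the Commission’s proposal, not text from either document; the tier names come from the proposal, while the examples and mapping are illustrative only.

```typescript
// Rough sketch (my own simplification) of the proposed AI Regulation's
// risk-based market access logic. Tier names follow the Commission's
// proposal; the examples and mapping are illustrative, not legal text.
type RiskTier = 'unacceptable' | 'high' | 'limited' | 'minimal';

function euMarketAccess(tier: RiskTier): string {
  switch (tier) {
    case 'unacceptable':
      // e.g. social scoring by public authorities: banned outright
      return 'prohibited';
    case 'high':
      // e.g. AI used in critical infrastructure: conformity assessment
      // and CE marking required before market entry
      return 'conformity assessment + CE marking';
    case 'limited':
      // e.g. chatbots: transparency obligations (disclose the AI)
      return 'transparency obligations';
    case 'minimal':
      // e.g. spam filters: no additional obligations
      return 'no additional obligations';
  }
}
```

The Liability Directive then sits downstream of that gate, so to speak: it concerns who pays when an admitted AI product causes damage, which is why the interplay between the two texts matters.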

The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.

The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. … In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.

In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. … Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation. Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.

European Commission