Bookmarked Commission opens non-compliance investigations against Alphabet, Apple and Meta under the Digital Markets Act (by European Commission)

With the large horizontal legal framework for the single digital market and the single market for data mostly in force and applicable, the EC is initiating its first actions. This announcement focuses on app store aspects: on steering (third parties being able to offer users other ways of paying for services than e.g. Apple’s app store), on (un-)installing any app and the freedom to change settings, and on providers preferencing their own services over those of others. Five investigations for suspected non-compliance involving Google (Alphabet), Apple, and Meta (Facebook) have been announced. Amazon and Microsoft are also being investigated, to clarify aspects that may give rise to suspicions of non-compliance.

The investigation into Facebook concerns their ‘pay or consent’ model, Facebook’s latest attempt to circumvent their GDPR obligation that consent be freely given. It was clear that this move, even if it allows them to steer clear of the GDPR (which is still very uncertain), would create issues under the Digital Markets Act (DMA).

In the same press release the EC announces that Facebook Messenger is getting a six-month extension of the period in which to comply with interoperability demands.

The Commission suspects that the measures put in place by these gatekeepers fall short of effective compliance of their obligations under the DMA. … The Commission has also adopted five retention orders addressed to Alphabet, Amazon, Apple, Meta, and Microsoft, asking them to retain documents which might be used to assess their compliance with the DMA obligations, so as to preserve available evidence and ensure effective enforcement.

European Commission

In reply to Creating a custom GPT to learn about my blog (and about myself) by Peter Rukavina

It’s not surprising that GPT-4 doesn’t work like a search engine and has a hard time surfacing factual statements from source texts. Like one of the commenters, I wonder what that means for the data analysis you also asked for. Perhaps those results too are merely plausible, not actually analysed. Especially the day-of-the-week thing, as that wasn’t in the data, and I wouldn’t expect GPT to determine the weekday of every post in the process of answering your prompt.

I am interested in doing what you did, but with 25 years of notes and annotations, and preferably with a different model with fewer ethical issues attached. To have a chat about my interests and the links between things. Unlike the fact-based questions he asked the tool, that doesn’t necessarily need the answers to be correct, just plausible enough to surface associations. Such associations might prompt my own thinking and my own searches through the same material.

It also makes me wonder whether what Wolfram Alpha is doing these days could get a play in your own use of GPTs, as they are all about interpreting questions and then giving the answer directly. There’s a difference between things that face the general public, and things that are internal or even personal tools, like yours.

Have you asked it things based more on association yet? Like “based on the posts ingested, what would be likely new interests for Peter to explore?” Can you use it to create new associations, to help you generate new ideas in line with your writing/interests/activities shown in the posts?

So my early experiments show me that as a data analysis copilot, a custom GPT is a very helpful guide… In terms of the GPT’s ability to “understand” me from my blog, though, I stand unimpressed.

Peter Rukavina

In 1967 French literary critic Roland Barthes declared the death of the author (in English, no less). An author’s intentions and biography are not the means to explain definitively what the meaning of a text (of fiction) is. It’s the reader that determines meaning.

Barthes reduces the author to a mere scriptor, a scribe, who doesn’t exist other than in their role of penning the text. It positions the work as fully separate from its maker.

I don’t disagree with the notion that readers glean meaning in layers from a text, far beyond what an author might have intended. But thinking about the author’s intent, whether in light of their biography or not, is one of those layers for readers to interpret. It doesn’t make the author the sole decider on meaning, but the author’s perspective can be used by any reader to create meaning. Separating the author from their work entirely is cutting yourself off from one source of potential meaning. Even when the author is reduced to the role of scribe, such meaning will leak forth: the monks of old tagged the transcripts they made and turned those tags into indexes, which remain a common way of interpreting which topics a text touches on or emphasises. So despite Barthes’ pronouncement, I never accepted the brain death of the author, yet also didn’t much care specifically about their existence for me to find meaning in texts either.

With the advent of texts made by generative AI, however, I think bringing the author and their intentions into the scope of creating meaning is necessary. It is a necessity as proof of human creation. Being able to perceive the author behind a text, the entanglement of its creation with their life, is the now very much needed reverse Turing test. With algorithmic text generation there is indeed only a scriptor, one incapable of conveying meaning themselves.
To determine the human origin of a text, the author’s own meaning, intention and existence must shine through in it, or be made explicit in its context. Because our default assumption must be that it was generated.

The author is being resurrected. Because we now have fully automated scriptors. Long live the author!

Bookmarked WordPress AI: Generative Content & Blocks (by Joe Hoyle, found via Chuck Grimmett)

Like many others, I am fascinated by what generative algorithms like ChatGPT for texts and Stable Diffusion for images can do. In particular I find it fascinating to explore what they might do if embedded in my own workflows, or how they might change those workflows. So the link above, showing an integration of ChatGPT in WordPress’ Gutenberg block editor, drew my attention.

The accompanying video shows a mix of two features: first, having ChatGPT generate some text, or rather a table with specific data; second, having ChatGPT in ‘co-pilot’ style generate code for Gutenberg blocks. I think the latter might actually be useful, as I’ve seen generative AI put to good use in that area. The former, having ChatGPT write part of your posting, is clearly not advisable. And the video shows it too, although the authors don’t point it out or haven’t reflected on the fact that ChatGPT is not a search engine but is geared to coming up with plausible material without being aware of its actual information content (the contrast with generating code is that code is much more highly structured in itself, so probabilities collapse more easily to the same outcome).

The blogpost in the video is made by generating a list of lunar missions, then turning it into a table, adding budgets, and sorting the missions chronologically. This looks very cool in the vid, but some things jump out as not OK. Results jump around the table, for instance: Apollo 13 moves from 1970 to 2013 and changes budget. See the image below. None of the listed budgets for the Apollo missions, nor their total, match the detailed costs overview of Apollo missions (GoogleDocs spreadsheet). The budget column being imaginary and the table rows jumping around makes the result entirely unfit for use, of course. It also isn’t a useful way of prompting: needing to fact-check every table field is likely more effort and less motivating than researching the table yourself from actual online resources directly.

It looks incredibly cool, ‘see me writing a blogpost by merely typing in my wishes, and the work being done instantly’, and there are definitely times I’d wish that were possible. To translate a mere idea or thought into some output directly, however, means I’d skip confronting that idea with reality, with counter-arguments etc. Most of my ideas only look cool inside my head, and need serious change to be sensibly made manifest in the world outside it. This video is a bit like that: an idea that looks cool in one’s head but is great rubbish in practice. ChatGPT hallucinates factoids and can’t be trusted to create your output. Using it in the context of discovery (as opposed to the justification context of your output, such as in this video) is possible and potentially useful. However, this integration within the Gutenberg writing back-end of WordPress puts you in the output context directly, so it leads you to believe the generated plausible rubbish is output and not just prompting fodder for your writing. Human Made is misleading you with this video, and I wouldn’t be surprised if they’re misleading themselves as well. It’s a bit like staging the ‘saw someone in half and put them together again’ magician’s trick in an operating room and inviting surgeons to re-imagine their work.

Taking a native-first approach to integrating generative AI into WordPress, we’ve been experimenting with approaches to a “WordPress Copilot” that can “speak” Gutenberg / block-editor.

Copy-pasting paragraphs between ChatGPT and WordPress only goes so far, while having the tools directly embedded in the editor … open up a world of possibilities and productivity wins…

Joe Hoyle


An android robot is filling out a table listing Apollo missions on a whiteboard, generated image using Midjourney

My first reaction to the open letter calling to ‘Pause Giant AI Experiments‘ is one of suspicion. It doesn’t at all read like a call for genuine reflection on generative models, nor like a call for actual inclusion of the stakeholders concerned. It’s the same techno-optimistic promise we see all the time, and an invitation to place our trust in big tech. Given some of the signatories it feels like market protection too. Sort of like how large re-users of government data were suddenly happy to pay a lot for data access, after lobbying for free access for decades, once they discovered free data meant a much lower barrier to entry for new players. Which makes me wonder who the pause is actually useful for.

Bookmarked The push to AI is meant to devalue the open web so we will move to web3 for compensation (by Mita Williams)

Adding this interesting perspective from Mita Williams to my notes on the effects of generative AI. She positions generative AI as bypassing the open web entirely (abstracted away into the models the AIs run on). Thus sharing is disincentivised, as sharing no longer brings traffic or conversation if it is only used as model fodder. I’m not at all sure that is indeed the case, but it has been a concern from as early as Yahoo’s 2016 Flickr images database being used for AI model training, including for IBM’s 2019 facial recognition efforts. It has led to questions about whether existing (Creative Commons) licenses are still fit for purpose. Specifically, Williams pointing to the impact not only on individual creators but also on the communities they form, are part of and interact in, strikes me as worth thinking more about. The erosion of (open source, maker, collaborative etc.) community structures is a whole other level of potential societal damage.

Mita Williams suggests the described erosion is not an effect but an actual aim of these tech companies, part of a bait and switch. A re-siloing, an enclosure of the commons, where getting something in return for online sharing is again the lure. The open web may fall by the wayside and become even more niche than it already is.

…these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators work on the web entirely as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet as they will no longer receive traffic…

Those who are currently wrecking everything that we collectively built on the internet already have the answer waiting for us: web3.

…the decimation of the existing incentive models for internet creators and communities (as flawed as they are) is not a bug: it’s a feature.

Mita Williams