Bookmarked 1.2 billion euro fine for Facebook as a result of EDPB binding decision (by European Data Protection Board)

A complaint against Facebook under the GDPR has finally been decided by the Irish Data Protection Authority, after the EDPB instructed the Irish DPA to do so in a binding decision (PDF) in April. The Irish DPA has been extremely slow in cases against big tech companies, to the point where it became co-opted by Facebook in trying to convince the other European DPAs to fundamentally undermine the GDPR. The fine is still mild compared to what was possible, but at 1.2 billion euro it is the largest in the GDPR’s history. Facebook is also instructed to bring its operations in line with the GDPR, e.g. by ensuring that data from EU-based users is only stored and processed in the EU. This because there is currently no way of ensuring GDPR compliance if any data gets transferred to the USA, in the absence of an adequacy agreement between the EU and the US government.

A predictable response by FB is the threat to withdraw from the EU market. That would be welcome imo, in cleaning up public discourse and battling disinformation, but is very unlikely to happen: the EU is Meta’s biggest market after their home market, the US. I’d rather see FB finally realise that their current adtech models are not possible under the GDPR, and find a way of using the GDPR as it is meant to be used: as a quality assurance tool, under which you can do almost anything, provided you arrange what needs to be arranged up front and during your business operations.

This fine … was imposed for Meta’s transfers of personal data to the U.S. on the basis of standard contractual clauses (SCCs) since 16 July 2020. Furthermore, Meta has been ordered to bring its data transfers into compliance with the GDPR.

EDPB

Bookmarked Disinformation and its effects on social capital networks (Google Doc) by Dave Troy

This document by US journalist Dave Troy positions resistance against disinformation not as a matter of fact-checking and technology but as one of reshaping social capital and cultural network topologies. I plan to read this; especially the premises part looks interesting. Some upfront associations are with Valdis Krebs’ work on the US democratic / conservative party divide, which he visualised based on cultural artefacts, i.e. books people bought (2003-2008), to show spheres and overlaps, and with the Finnish work on increasing civic skills, which to me seems a mix of critical crap detection skills woven into a social/societal framework. Networks around a belief or a piece of disinformation for me also point back to what I mentioned earlier about generated (and thus fake) texts: attempts to detect such fakes usually centre on the artefact, not on the richer tapestry of information connections (last 2 bullet points and final paragraph) around it. (I recently called provenance and entanglement indicators of authenticity, entanglement being the multiple ways something is part of a wider network fabric.) And there’s the more general notion of Connectivism, where learning and knowledge are situated in networks too.

The related problems of disinformation, misinformation, and radicalization have been popularly misunderstood as technology or fact-checking problems, but this ignores the mechanism of action, which is the reconfiguration of social capital. By recasting these problems as one problem rooted in the reconfiguration of social capital and network topology, we can consider solutions that might maximize public health and favor democracy over fascism …

Dave Troy

Bookmarked In Norway, the Electric Vehicle Future Has Already Arrived (by Jack Ewing)

The far-reaching electrification of cars in Norway has a few interesting effects. Air quality in Oslo has improved markedly. Nitrogen emissions are ‘nearly solved’ in that city. Not only because so many electric cars are on the road, but also because contractors have largely electrified their construction machinery. That reduces nitrogen emissions even further. And without problems from an overloaded electricity grid, too.

This didn’t come out of nowhere. In 2013, at a conference in Ljubljana, I spoke with a Norwegian lawyer who surprised me by telling me that electric cars were already the best-selling new cars in Norway. So for more than a decade now, it has mostly been EVs that are put on the road there. The Norwegian government started stimulating electric driving as early as the 1990s (!) through subsidies and tax benefits. That is more than a decade before we got a prime minister here who, in 2013, publicly dismissed ‘vision’ as a dirty and bothersome word.

We are on the verge of solving the NOx problem

Tobias Wolf, Oslo’s chief engineer for air quality

Bookmarked The Expanding Dark Forest and Generative AI by Maggie Appleton

I very much enjoyed this talk that Maggie Appleton gave at Causal Islands in Toronto, Canada, 25-27 April 2023. It reminds me of the fun and insightful keynotes at Reboot conferences a long time ago, some of which shifted my perspectives long-term.

This talk is about the impact on how we will experience and use the web when generative algorithms create most of its content. Appleton explores the potential effects of that and the futures that might result. She puts human agency at the center when it comes to how to choose our path forward in experimenting and using ‘algogens’ on the web, and how to navigate an internet where nobody believes you’re human.

Appleton is a product designer with Ought, working on products that use language models to augment and extend human (cognitive) capabilities. Ought makes Elicit, a tool that surfaces (and summarises) potentially useful papers for your research questions. I use Elicit every now and then, and really should use it more often.

An exploration of the problems and possible futures of flooding the web with generative AI content

Maggie Appleton

On the internet nobody knows you’re a dog.

Peter Steiner, 1993

It seems that after years of trollbots and content farms, generative algorithms are rapidly moving us past the point where the basic assumption on the web can still be that an (anonymous) author is human until it becomes clear otherwise. Improving our crap detection skills from now on means a different default:

On the internet nobody believes you’re human.

until proven otherwise.

I can now share an article directly from my feed reader to my Hypothes.is account, annotated with a few remarks.

One of the things I often do when feed reading is opening some articles up in the browser, with the purpose of possibly saving them to Hypothes.is for (later) annotation. You know how it goes with open tabs in browsers: hundreds get opened and then neglected, until you give up and quit the entire session.

My annotation of things I read starts with saving the article to Hypothes.is and providing a single annotation for the entire page, which includes a web archive link to the article and a brief motivation or some first thoughts about why I think it is of interest to me. Later I may go through the article in more detail and add more annotations, which end up in my notes. (I also do this outside of Hypothes.is, saving an entire article directly to my notes in markdown, when I don’t want to read the article in the browser.)

Until now this forced me to leave my feed reader to store an article in Hypothes.is. However, in my personal feed reader I already have the ability to post directly to my websites or to my personal notes collection in Obsidian.
Hypothes.is has an API which, much like posting to my sites from my feed reader, makes it possible to share directly to Hypothes.is from inside my feed reader. This way I can continue reading, while leaving breadcrumbs in Hypothes.is (which always also end up in the inbox of my notes).

The Hypothes.is API is documented and expects JSON payloads. Anyone can read public material through the API; to post you need an API key that is connected to your account (you can find it when logged in).
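As a small illustration of the read side, here is a sketch in Python of querying public annotations for a page through the API’s search endpoint. The function name and the example URL are my own; the endpoint and parameter names follow the documented API.

```python
# Build a query URL for public annotations on a given page via the
# Hypothes.is search endpoint. No API key is needed for public material.
from urllib.parse import urlencode

API_BASE = "https://api.hypothes.is/api"

def search_url(uri: str, limit: int = 20) -> str:
    """Return the search URL for public annotations on the page at `uri`."""
    return f"{API_BASE}/search?" + urlencode({"uri": uri, "limit": limit})

# Fetching the results (requires network access):
#   import json, urllib.request
#   with urllib.request.urlopen(search_url("https://example.com/article")) as r:
#       results = json.load(r)  # a dict with "total" and "rows" of annotations
```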

I use JSON payloads to post from my feed reader (and from inside my notes) to this site, so I copied and adapted that script to talk to the Hypothes.is API.
The result is an extremely basic and barebones script that can do only a single thing: post a page-wide annotation (so no highlights, no updates, etc.). For now this is enough, as it is precisely my usual starting point for annotation.

The script expects to receive four things: a URL, the title of the article, an array of tags, and my remarks. That is sent to the Hypothes.is API. In response I get back information about the annotation I just made (its ID etc.), but I disregard any response.
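The steps above can be sketched in Python (my script is not this one; the function names and the example below are my own, while the endpoint and JSON field names follow the Hypothes.is API documentation). A page-wide annotation is simply an annotation whose target has no selectors:

```python
# Minimal sketch: post a page-wide annotation ("page note") to Hypothes.is.
import json
import urllib.request

API_URL = "https://api.hypothes.is/api/annotations"

def build_page_note(url, title, tags, remarks):
    """Assemble the JSON payload for an annotation covering the whole page
    (a target without selectors), rather than a highlighted passage."""
    return {
        "uri": url,
        "document": {"title": [title]},
        "tags": tags,
        "text": remarks,
        "target": [{"source": url}],  # no selector: spans the entire page
    }

def post_page_note(api_key, url, title, tags, remarks):
    """Send the annotation. Returns the API's response (ID etc.),
    which can simply be ignored."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_page_note(url, title, tags, remarks)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```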

To the webform I use in my feed reader I added an option to send the information to Hypothes.is, rather than to my websites through MicroPub or to my local notes through the filesystem. That option is what ensures the little script gets called with the right variables.

It now looks like this:


In my feed reader I have the usual form I use to post replies and bookmarks, now with an additional radio button to select ‘H.’ for Hypothes.is


Submitting the form above gets it posted to my Hypothes.is account