Bookmarked Target_Is_New, Issue 212 by Iskander Smit

Iskander asks where users, next to makers, come in when it comes to responsible AI. For a slightly different type of user at least, such responsibilities are being formulated in the proposed EU AI Regulation, as well as the connected AI Liability Directive. There not just the producers and distributors of AI-containing services or products have responsibilities, but also those who deploy them in practice, or those who use their outputs. He’s right that most discussions focus on what happens within the established system of making, training and deploying AI, and that we should also look outside that system, which in this case is where the people using AI, or using its output, reside. That’s why I like the EU’s legislative approach: it doesn’t aim to regulate the system as seen from within it, but focuses on the conditions under which such products gain access to the European market, and on the impact they have within society. Of course, these proposals are still under negotiation, and it remains to be seen what survives at the end of that process.

As I wrote down as thoughts while listening to Dasha Simons: we are all convinced of the importance of explainability, transparency, and even interpretability, all focused on making the system responsible and, with them, the makers of the system. But what about the responsibility of the users? Are they also part of the equation, and should they be responsible too? As the AI (or whatever term we use) is continuously learning and being shaped, the prompts we give are more than a means to retrieve the best results; they are also part of the upbringing of the AI. We are, as users, just as responsible for good AI as the producers are.

Iskander Smit

Last Tuesday I provided the opening keynote at BeGeo, the annual conference of Belgium’s geospatial sector, organised by the Belgian National Geographic Institute. My talk was part of the opening plenary session, after the welcome by NGI’s administrator-general Ingrid Vanden Berghe, and opening remarks by Belgian Minister for Defence Ludivine Dedonder.

With both its Digital and Data strategies, the EU is shaping a geopolitical proposition with respect to digitisation and data. Anchored to maximising societal benefits and strengthening citizen rights and European values as measures of success, a wide range of novel legal instruments is being created. Those instruments provide companies, citizens, knowledge institutes and governments alike with new opportunities as well as responsibilities concerning the use of data. The forming of a single European market for data sharing and usage, the EU data space, also has consequences for all applications, including AI and digital twins, that are dependent on that data and produce data in return.

Geo-data has a key role to play, not just in the Green Deal data space, and finds itself at the centre of a variety of ethical issues as it is often the linchpin for other data sources. The EU data space will shape the environment in which data and geo-data is being shared and used for the coming decade, and requires elevating the role and visibility of geo-data across other sectors. I explored the EU data space as geo-data’s new frontier, to provide the audience with an additional perspective and some questions for their participation at the BeGeo conference.

The slides are embedded below; you can also embed them in your own website, or download them as a PDF.

Bookmarked AI Liability Directive (PDF) (by the European Commission)

This should be interesting to compare with the proposed AI Regulation. The AI Regulation specifies under which conditions an AI product or service, or the output of one, will or won’t be admitted to the EU market (literally a CE mark for AI), based on a risk assessment that includes risks to civic rights, equipment safety and critical infrastructure. That text is still very much under negotiation, but is a building block of the combined EU digital and data strategies. In parallel the EU is modernising its product liability rules, and now includes damage caused by AI-related products and services within that scope. Both are listed on the EC’s page concerning AI, so some integration is to be expected. Is the proposal already anticipating parts of the AI Regulation, or does it try to replicate some of it within this Directive? Is it fully aligned with the proposed AI Regulation, or are there surprising contrasts? As this proposal is a Directive (which needs to be translated into national law in each Member State), while the AI Regulation becomes law without such national translation, that too is a dynamic of interest, in the sense that this Directive builds on existing national legal instruments. There was a consultation on this Directive late 2021. Which DG created this proposal, DG Just?

The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.

The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. … In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.

In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. … Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation. Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.

European Commission

I presented at the 2022 Netherlands edition of WordCamp in Arnhem on turning all WordPress sites into fully IndieWeb-enabled sites, meaning turning well over a third of the web into the open social web. Outside all the silos.

The slides are available for embedding and download in my self-hosted Slideshare replacement, and are shown below.

I have been blogging a long time, and can tinker a bit with code (like a home cook). I want my site to be the center of how I read and write the web. Its purpose is to create conversations with others, who write in their own spaces on the web. The IndieWeb community supports that with a number of technical building blocks that allow me to do a set of pretty cool things. But all that IndieWeb offers has a high threshold for entry.

The key parts of IndieWeb to me, the parts that make interaction between websites possible, that allow any site to be an active part of many conversations, are much simpler though:

  • Microformats2, so that computers know how to interpret our blogposts,
  • some class declarations, so computers know why we link to some other web page,
  • and WebMention, the protocol that lets a web page know another page is linking to it (see the sketch after this list).
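
To make the WebMention part concrete, here is a minimal sketch in Python of what sending one involves, using only the requests library. The URLs, the function names and the simple regex-based endpoint discovery are illustrative assumptions, not a full implementation of the spec:

```python
# Minimal sketch of the WebMention flow (sender side) in Python.
# Uses only the 'requests' library; URLs and function names are illustrative.
import re
from typing import Optional
from urllib.parse import urljoin

import requests


def discover_webmention_endpoint(target: str) -> Optional[str]:
    """Fetch the target page and find the WebMention endpoint it advertises."""
    resp = requests.get(target, timeout=10)

    # 1. HTTP Link header, e.g. Link: <https://example.org/webmention>; rel="webmention"
    for part in resp.headers.get("Link", "").split(","):
        if "webmention" in part and "rel=" in part:
            return urljoin(target, part.split(";")[0].strip().strip("<>"))

    # 2. Fall back to a <link> or <a> element with rel="webmention" in the HTML
    #    (a real implementation would use a proper HTML parser here)
    match = re.search(r'<(?:link|a)\b[^>]*rel="?webmention"?[^>]*href="([^"]+)"', resp.text)
    return urljoin(target, match.group(1)) if match else None


def send_webmention(source: str, target: str) -> int:
    """Tell the page at 'target' that the page at 'source' links to it."""
    endpoint = discover_webmention_endpoint(target)
    if endpoint is None:
        raise RuntimeError("target does not advertise a WebMention endpoint")
    # The notification itself is a plain form-encoded POST with two parameters
    resp = requests.post(endpoint, data={"source": source, "target": target})
    return resp.status_code  # 2xx means accepted (the receiver then verifies the link)


# Example: my reply at 'source' links to the post at 'target'
# send_webmention("https://example.com/my-reply", "https://example.org/their-post")
```

In practice your website’s engine, or a plugin, would do this for every outgoing link whenever you publish, and receive and verify incoming mentions on the other end; that is exactly the kind of plumbing site authors shouldn’t have to touch.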

Making interaction possible between site authors, across sites, just by writing as they already do, is both the simplest to arrange and the most impactful. It’s not something site authors should have to deal with though; it should be in your website’s engine. WordPress in my case, and an enormous number of other websites.
Ensuring that WordPress Themes and Gutenberg blocks support and handle Microformats2 and those classes correctly will therefore have a huge impact.

Over 40% of the open web would then, with a single stroke, be the open social web. No need for data-hungry silos, no place for algorithmic timelines designed to keep you hooked.

WordPress wants to be the Operating System for the Web. That OS is missing social features, and it’s not a big leap to add them with existing web protocols. No website owner would have to be a coder, be it home-cooking style or professional, to use those social features and create conversations. They would just be there.

If you build WP Themes, if you create Gutenberg blocks, you’re invited to help make this happen.

(also posted to Indienews)

Bookmarked Using GPT-3 to augment human intelligence: Learning through open-ended conversations with large language models by Henrik Olof Karlsson

Wow, this essay comes with a bunch of examples of using the GPT-3 language model in such fascinating ways. Have it stage a discussion between two famous innovators and let them duke it out over a fundamental question, run your ideas by an impersonation of Steve Jobs, use it for a first exploration of a domain that is new to you (while being aware that GPT-3 will likely confabulate a bunch of nonsense). Just wow.
Some immediate points:

  • Karlsson talks about prompt engineering, to make the model spit out what you want more closely. Prompt design is an important feature in large-scale listening, to tap into a rich interpreted stream of narrated experiences. I can do prompt design to get people to share their experiences, and it would be fascinating to try that out on GPT-3 (see the sketch after this list).
  • He mentions Matt Webb’s 2020 post about prompting, quoting “it’s down to the human user to interview GPT-3”. This morning I started reading Luhmann’s Communicating with Slip Boxes with a view to annotation. Luhmann talks about the need for his notes collection to be thematically open-ended, and for the factual status (or not) of information to be a result of the moment of communication. GPT-3 is trained on the internet, and it hallucinates. Now here we are communicating with it, interviewing it, to elicit new thoughts, ideas and perspectives, similar to what Luhmann evocatively describes as communication with his notes. That GPT-3’s results can be totally bogus is much less relevant, as it’s the interaction that leads to new notions within yourself, and you’re not after using GPT-3’s output as fact or as a finished result.
  • Are all of us building notes collections, especially those mimicking Luhmann as if he were the originator of such systems of note-taking, actually better off learning to prompt and interrogate GPT-3?
  • Karlsson writes about treating GPT-3 as an interface to the internet, which allows using GPT-3 as a research assistant. In a much more specific way than he describes, this is what the tool Elicit, which I just mentioned here, does, also based on GPT-3. You give Elicit your research question as a prompt and it will come up with relevant papers that may help answer it.
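
As an aside, the prompt-as-interview pattern is easy to try out yourself. Below is a minimal sketch using the GPT-3 era OpenAI Python client; the model name, prompt text and parameters are illustrative assumptions, not a recommendation:

```python
# Minimal sketch of 'interviewing' GPT-3 with a designed prompt.
# Uses the (GPT-3 era) OpenAI Python client: pip install openai
# Requires an API key in the OPENAI_API_KEY environment variable.
# Model name, prompt and parameters are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "You are impersonating Steve Jobs reviewing a product idea.\n"
    "Idea: a personal website that doubles as a node in an open social web.\n"
    "Give three blunt criticisms and one suggestion, then ask me a question back.\n"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 era completion model
    prompt=prompt,
    max_tokens=300,
    temperature=0.7,
)

# The reply is raw material for the next, refined prompt: the 'interview' loop
print(response["choices"][0]["text"].strip())
```

The point, as Karlsson stresses, is the iteration: you read the answer critically, adjust the prompt, and ask again, rather than treating any single answer as reliable.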

On first reading this is like opening a treasure trove, albeit a booby-trapped one. Need to go through this in much more detail and follow up on sources and associations.

Some people already do most of their learning by prompting GPT-3 to write custom-made essays about things they are trying to understand. I’ve talked to people who prompt GPT-3 to give them legal advice and diagnose their illnesses. I’ve talked to men who let their five-year-olds hang out with GPT-3, treating it as an eternally patient uncle, answering questions, while dad gets on with work.

Henrik Olof Karlsson

Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)

A curious and interesting case is emerging from Google: one of its engineers claims that a chatbot AI (LaMDA) they created has become sentient. The engineer has been suspended for discussing confidential information in public. There is, however, an intriguing tell about their approach to ethics in how Google phrases a statement about it: “He is a software engineer, not an ethicist”. In other words, the engineer should not worry about ethics; they’ve got ethicists on the payroll for that. Worrying about ethics is not the engineer’s job. That perception means you yourself can stop thinking about ethics: it’s been allocated, and you can just take its results and run with it. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is their main aim. Going by that statement, treating ethics as a practice is definitely not welcomed. Maybe they should open a conversation with that LaMDA AI chatbot about those ethics, to help determine the program’s sentience 🙂

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.