Bookmarked China Seeks Stricter Oversight of Generative AI with Proposed Data and Model Regulations (by Chris McKay at Maginative)

Need to read this more closely. A few things stand out at first glance:

  • This adds to the geopolitical stances the EU, US and China put forth w.r.t. everything digital and data. A comparison with the EU AI Regulation that is under negotiation would be of interest.
  • It seems focused solely on generative AI. Are there other (planned) acts covering other AI applications and development? Why is generative AI singled out here, because it has a more directly population-facing character?
  • It seems to mostly front-load the responsibilities towards the companies producing generative AI applications, i.e. towards the models used and the pre-release phase. In comparison, the EU regulation incorporates responsibilities for distributors, buyers, users and even users of output only, and spans the full lifetime of any application.
  • It lists specific risks in several categories. How specifically are those worded, and might that impact how future-proof the regulation is? Are thresholds introduced for such risks?

Let’s see if I can put some AI to work to translate the original Chinese proposed text (PDF).

Via Stephen Downes, who is also my source for the link to the original proposal in PDF.

By emphasizing corpus safety, model security, and rigorous assessment, the regulation intends to ensure that the rise of [generative] AI in China is both innovative and secure — all while upholding its socialist principles.

Chris McKay at Maginative

ODRL, the Open Digital Rights Language, popped up twice this week for me, and I don’t think I’ve been aware of it before. Some notes for me to start exploring.

Rights Expression Languages

Rights Expression Languages, RELs, provide a machine-readable way to convey or transfer usage conditions, rights and restraints, granularly w.r.t. both actions and actors. This can then be added as metadata to something. ODRL is such a rights expression language, and seems to be a de facto standard.

ODRL has been a W3C Recommendation since 2018, and is thus part of the open web standards. ODRL has its roots in the ’00s and Digital Rights Management (DRM): the abhorred protections media companies added to music and movies, and now e-books, in ways that restrain what people can do with media they bought to well below the level of what was possible before and commonly thought part of having bought something.

ODRL can be expressed in JSON, RDF or XML. A basic example from Wikipedia looks like this:


{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "uid": "http://example.com/policy:001",
  "permission": [{
    "target": "http://example.com/mysong.mp3",
    "assignee": "John Doe",
    "action": "play"
  }]
}

In this JSON example the policy describes that example.com grants John Doe permission to play mysong.mp3.

ODRL in the EU Data Space

In the shaping of the EU common market for data, aka the European common data space, it is important to be able to trace provenance and usage conditions for not just data sets, but singular pieces of data, as they flow through use cases, through applications and their output, back into the data space.
This week I participated in a webinar by the EU Data Space Support Center (DSSC) about their first blueprint of data space building blocks, and for the federation of such data spaces.

They propose ODRL as the standard to describe usage conditions throughout data spaces.
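
To make that concrete, here is a minimal sketch of what such a usage condition could look like (my own construction with hypothetical URIs and parties, not an example from the DSSC material): a data provider permits a consumer to use a dataset, but only for a stated purpose and with a duty to attribute the source. ODRL’s core vocabulary includes ‘purpose’ as a constraint operand and ‘attribute’ as an action.

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Agreement",
  "uid": "http://example.com/policy:042",
  "permission": [{
    "target": "http://example.com/dataset:17",
    "assigner": "http://example.com/org:provider",
    "assignee": "http://example.com/org:consumer",
    "action": "use",
    "constraint": [{
      "leftOperand": "purpose",
      "operator": "eq",
      "rightOperand": "http://example.com/purpose:research"
    }],
    "duty": [{
      "action": "attribute"
    }]
  }]
}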

The question of enactment

It wasn’t the first time I talked about ODRL this week. I had a conversation with Pieter Colpaert. I reached out to get some input on his current view of the landscape of civic organisations active around the EU data spaces. We also touched upon his current work at the University of Gent. His current research interest is ODRL, specifically enactment. ODRL is a REL, a rights expression language. Describing rights is one thing; enacting them in practice, in technology, processes etc., is a different thing. Next to that, how do you demonstrate that you adhere to the conditions expressed and that you qualify for using the things described?
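
A hypothetical example of that gap, using a duty from ODRL’s core vocabulary (the URIs are made up; the pattern follows the compensation examples in the ODRL specification): the policy below offers a dataset for distribution against a €500 compensation. The policy can express that obligation precisely, but whether the amount was actually paid, and whether distribution only happened afterwards, has to be established outside the policy, in the processes and technology enacting it.

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Offer",
  "uid": "http://example.com/policy:077",
  "permission": [{
    "target": "http://example.com/dataset:17",
    "assigner": "http://example.com/org:provider",
    "action": "distribute",
    "duty": [{
      "action": "compensate",
      "constraint": [{
        "leftOperand": "payAmount",
        "operator": "eq",
        "rightOperand": { "@value": "500.00", "@type": "xsd:decimal" },
        "unit": "http://dbpedia.org/resource/Euro"
      }]
    }]
  }]
}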

For the EU data space(s) this part sounds key to me, as none of the data involved is merely part of a single clear interaction like in the song example above. It’s part of a variety of flows in which actors likely don’t directly interact and where many different data elements come together. This includes flows through applications that tap into a data space for inputs and outputs but are otherwise outside of it. Such applications can also be digital twins, even federated systems of digital twins, meaning a confluence of many different data and conditions across multiple domains (and thus data spaces). All this removes a piece of data lightyears from the neat situation where two actors share it between them in a clearly described transaction within a single-faceted use case.

Expressing the commons

It’s one thing to express restrictions or usage conditions. The DSSC in their webinar talked a lot about business models around use cases, and about ODRL as a means for a data source to stay in control throughout a piece of data’s life cycle. Luckily they stopped using the phrase ‘data ownership’, as they realised it isn’t meaningful (and confusing on top of it), and focused instead on control and on an actor maintaining a say.
An open question for me is how you would express openness and the commons in ODRL. A shallow search surfaces some examples of trying to express Creative Commons or other licenses this way, but none recent.

Openness can mean an absence of certain conditions, though some may remain (like requiring the same absence of conditions for re-shared material or derivative works), and that is not the same as setting explicit permissions. If I e.g. dedicate something to the public domain, an image for instance, then there are no permissions for me to grant, as I’ve removed myself from the role of being able to give permission. Yet you still want to express that, to make clear to all that this is what happened, and especially that it remains that way.
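
For lack of an established mapping, a naive sketch (my own construction, not drawn from ODRL practice) might be a policy that grants ODRL’s broadest action, ‘use’, on the asset, without constraints, duties or a named assignee:

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Set",
  "uid": "http://example.com/policy:pd-001",
  "permission": [{
    "target": "http://example.com/my-image.jpg",
    "action": "use"
  }]
}

But this still frames openness as a grant by an implied assigner, which is precisely what a public domain dedication removes, and the ‘that it remains that way’ part, the irrevocability, has no obvious expression here either.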

Part of that question is about the overlap and distinction between rights expressed in ODRL and authorship rights. You can obviously have many conditions outside of copyright, and there can be copyright elements outside of what can be expressed in RELs. I wonder how for instance moral authorship rights (which an author in some (all?) European jurisdictions cannot do away with) can be expressed after an author has transferred/sold the copyrights to something? Or maybe expressing authorship rights / copyrights is not what RELs are primarily for, as those are generic, while RELs may be meant for expressing conditions around a specific asset in a specific transaction. There have been various attempts to map all kinds of licenses to RELs though, so I need to explore more.

This is relevant for the EU common data spaces, as my government clients will be actors in them, bringing in both open data and closed, unsharable but re-usable data, and several shades in between. A range of new obligations and possibilities w.r.t. data use by government are created in the EU data strategy laws, and the data space is where those become actualised. Meaning it should be possible to express the corresponding usage conditions in ODRL.

ODRL gaps?

Are there gaps in the ODRL standard w.r.t. what it can cover? Or things that are hard to express in it?
I came across one paper, ‘A critical reflection on ODRL’ (PDF; Kebede, Sileno, Van Engers 2020), that I have yet to read, which describes some of those potential weaknesses based on use cases in healthcare and logistics. Looking forward to digging out their specific critique.

Some good movement on EU data legislation this month! I’ve been keeping track of EU data and digital legislation for the past three years. In 2020 I helped determine the content of what has become the High Value Data implementing regulation (my focus was on earth observation, environmental and meteorological data), and since then I’ve been involved, for the Dutch government, in translating the incoming legislation into implementation steps and opportunities for Dutch government geo-data holders.

AI Act

The AI Act stipulates what types of algorithmic applications are allowed on the European market, and under which conditions. A few things are banned; the rest of the provisions are tied to a risk assessment, where higher-risk applications carry heavier responsibilities and obligations for market entry. In effect it’s a CE marking for these applications, with responsibilities for producers, distributors, users, and users of output.
The Commission proposed the AI Act in April 2021, the Council responded with its version in December 2022.

Two weeks ago the European Parliament approved in plenary its version of the AI Act.
In my reading the EP both strengthens and weakens the original proposal. It strengthens it by restricting certain types of uses further than the original proposal did, and by adding foundational models to its scope.
It also adds a definition of what is considered AI in the context of this law. That in itself is logical, as the original proposal did not try to define it other than by listing in an annex the technologies deemed in scope. However, while adding that definition, the EP removed the annex. That, I think, weakens the AI Act and will make future enforcement much slower and harder, because now everything will depend on the interpretation of the definition, making it a key point of contention before the courts (‘my product is out of scope!’). With both the definition and the annex, the legislator states specifically which things it considers in scope of the definition at the very least. As the annex would be periodically updated, it would also remain future-proof.

With the stated positions of the Council and Parliament the trilogue can now start to negotiate the final text which then needs to be approved by both Council and Parliament again.

All in all it looks like the AI Act will be finished and in force before the end of the year, and will apply by 2025.

Data Act

The Data Act is one of the building blocks of the EU Data Strategy (the others being the Data Governance Act, applied from September, the Open Data Directive, in force since mid 2021, and the High Value Data implementing regulation, which the public sector must comply with by spring 2024). The Data Act contains several interesting proposals.

One is requiring connected devices to give users access to the (real time) data they create (think thermostats, solar panel inverters, sensors etc.), as well as to allow users to share that data with third parties. You can think of this as ‘PSD2-for-everything’: PSD2 says that banks must enable you to share your banking data with third parties (meaning you can manage your account at Bank A with the mobile app of Bank B, connect your bookkeeping software, etc.). The Data Act extends this to ‘everything’ that is connected.

Another interesting component is that it allows public sector bodies, in case of emergencies (floods e.g.), to require certain data from private sector parties, across borders. The Dutch government heavily opposed this, so I am interested in seeing the final formulation of this part of the Act.

Other provisions make it easier for people to switch platform services (e.g. cloud providers), and create space for the European Commission to set, have developed, adopt or mandate certain data standards across sectors. That last element is of relevance to the shaping of the single market for data, aka the European common data space(s), and here too I look forward to reading the final formulation.

With the Council of the European Union and the European Parliament having reached a common text, what remains is final approval by both bodies. This should be concluded under the Spanish presidency that starts this weekend, after which the Data Act will enter into force sometime this fall, with a grace period of some 18 months, until sometime in 2025.

There’s more this month: ITS Directive

The Intelligent Transport Systems Directive (ITS Directive) was originally created in 2010 to ensure data availability about traffic conditions etc. for e.g. (multi-modal) planning purposes. In the Netherlands, for instance, real-time information about traffic intensity is available in this context. The Commission proposed to revise the ITS Directive late 2021 to take into account technological developments and things like automated mobility and on-demand mobility systems. This month the Council and European Parliament agreed a common text on the new ITS Directive. I look forward to close reading the final text, also for its connections to the Data Act above and its potential in the context of the European mobility data space. Between the Data Act and the ITS Directive I’m also interested in the position of in-car data. Our cars increasingly are mobile sensor platforms to which the owner/driver has little to no access, which should change imo.

This looks like a very welcome development: the European Commission (EC) is to ask all the Member State Data Protection Authorities (DPAs) for status updates on all international GDPR cases every other month. This is in response to a formal complaint the Irish Council for Civil Liberties started in 2021 about the foot-dragging of the Irish DPA in its investigations of BigTech cases (BigTech mostly has its EU activities domiciled in Ireland).

The GDPR, the EU’s data protection regulation, has been in force since mid 2018. Since then many cases have progressed extremely slowly, to a large extent, it seems, because Ireland’s DPA has been the subject of regulatory capture by BigTech, up to the point where it defies direct instructions from the EU data protection board and takes an outside position relative to all other European DPAs.

With the EC now requesting bi-monthly status updates on ongoing specific cases from each Member State, this is a step up from the multi-year self-reporting by Member States that is usually used to determine potential infringements. It should have an impact on the consistency with which the GDPR gets applied, and above all on ensuring that cases are resolved at adequate speed. The glacial pace of bigger cases risks eroding confidence in the GDPR, especially if smaller cases do get dealt with (the local butcher getting fined for sloppy marketing, while Facebook makes billions off person-targeted ads without people’s consent).

So kudos to ICCL for filing the complaint and working with the EU Ombudsman on this, and to the EC for taking it as an opportunity to much more closely monitor GDPR enforcement.

Bookmarked Target_Is_New, Issue 212 by Iskander Smit

Iskander asks: what about users, next to makers, when it comes to responsible AI? For a slightly different type of user at least, such responsibilities are being formulated in the proposed EU AI Regulation, as well as the connected AI Liability Directive. There, not just the producers and distributors of AI-containing services or products have responsibilities, but also those who deploy them in practice or who use their outputs. He’s right that most discussions stay within the established system of making, training and deploying AI, and that we should also look outside the system, which in this case is where the people using AI, or using its output, reside. That’s why I like the EU’s legislative approach: it doesn’t aim to regulate the system as seen from within it, but focuses on the access conditions for such products to the European market, and on the impact they have within society. Of course, these proposals are still under negotiation, and it’s wait and see what will remain at the end of that process.

As I wrote down as thoughts while listening to Dasha Simons; we are all convinced of the importance of explainability, transparency, and even interpretability, all focused on making the system responsible and, with them, the makers of the system. But what about the responsibility of the users? Are they also part of the equation, should they be responsible too? As the AI (or what term we use) is continuous learning and shaping, the prompts we give are more than a means to retrieve the best results; it is also part of the upbringing of the AI. We are, as users, also responsible for good AI as the producers are.

Iskander Smit

Bookmarked AI Liability Directive (PDF) (by the European Commission)

This should be interesting to compare with the proposed AI Regulation. The AI Regulation specifies under which conditions an AI product or service, or the output of one, will or won’t be admitted to the EU market (literally a CE mark for AI), based on a risk assessment that includes risks to civic rights, equipment safety and critical infrastructure. That text is still very much under negotiation, but it is a building block of the combined EU digital and data strategies. In parallel the EU is modernising its product liability rules, and now includes damage caused by AI-related products and services within that scope. Both are listed on the EC’s page concerning AI, so some integration is to be expected. Is this proposal already anticipating parts of the AI Regulation, or does it try to replicate some of it within this Directive? Is it fully aligned with the proposed AI Regulation, or are there surprising contrasts? As this proposal is a Directive (which needs to be translated into national law by each Member State), while the AI Regulation becomes law without such national translation, that too is a dynamic of interest, in the sense that this Directive builds on existing national legal instruments. There was a consultation on this Directive late 2021. Which DG created this proposal, DG Just?

The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.

The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. … In a cross-border context, the law applicable to a non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.

In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. … Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation. Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.

European Commission