Bookmarked a tweet by Frankwatching

Nothing is as personal as having a machine write your heartfelt plea! Technological mediation only brings you closer together. I hope the Frankwatching team sees the irony in their own text.

How do you use AI … to communicate more personally?

Frankwatching

(incidentally, I also notice that saving individual tweets to the Web Archive no longer works. It did before. A screenshot it is then, with the usual caveat)

Some good movement on EU data legislation this month! I’ve been keeping track of EU data and digital legislation for the past three years. In 2020 I helped determine the content of what has become the High Value Data implementing regulation (my focus was on earth observation, environmental and meteorological data), and since then I’ve been involved, for the Dutch government, in translating the incoming legislation into implementing steps and opportunities for Dutch government geo-data holders.

AI Act

The AI Act stipulates what types of algorithmic applications are allowed on the European market, and under which conditions. A few things are banned outright; the rest of the provisions are tied to a risk assessment, where higher-risk applications carry heavier responsibilities and obligations for market entry. In essence it is a CE marking for these applications, with responsibilities for producers, distributors, users, and those who use the output of such applications.
The Commission proposed the AI Act in April 2021; the Council responded with its version in December 2022.

Two weeks ago the European Parliament approved in plenary its version of the AI Act.
In my reading the EP both strengthens and weakens the original proposal. It strengthens it by restricting certain types of uses further than the original proposal did, and by adding foundation models to its scope.
It also adds a definition of what is considered AI in the context of this law. That in itself is logical, as the original proposal did not try to define it other than by listing, in an annex, the technologies deemed in scope. However, while adding that definition, Parliament removed the annex. That, I think, weakens the AI Act and will make future enforcement much slower and harder, because everything will now depend on the interpretation of the definition, making it a key point of contention before the courts (‘my product is out of scope!’). With both the definition and the annex, the legislature would specifically state which things it considers, at the very least, in scope of the definition. As the annex would be periodically updated, it would also remain future proof.

With the stated positions of the Council and Parliament the trilogue can now start to negotiate the final text which then needs to be approved by both Council and Parliament again.

All in all it looks like the AI Act will be finished and in force before the end of the year, and will be applied by 2025.

Data Act

The Data Act is one of the building blocks of the EU Data Strategy (the others being the Data Governance Act, applied from September, the Open Data Directive, in force since mid 2021, and the High Value Data implementing regulation, with which the public sector must comply by spring 2024). The Data Act contains several interesting proposals.

One requires connected devices not only to give users access to the (real-time) data they create (think thermostats, solar panel inverters, sensors etc.), but also to let users share that data with third parties. You can think of this as ‘PSD2-for-everything’: PSD2 says that banks must enable you to share your banking data with third parties (meaning you can manage your account at Bank A with the mobile app of Bank B, connect your bookkeeping software, etc.). The Data Act extends this to ‘everything’ that is connected.

Another interesting component allows public sector bodies, in case of emergencies (floods e.g.), to require certain data from private sector parties, across borders. The Dutch government heavily opposed this, so I am interested to see the final formulation of this part in the Act.

Other provisions make it easier for people to switch platform services (e.g. cloud providers), and create space for the European Commission to set, let develop, adopt or mandate certain data standards across sectors. That last element is relevant to the shaping of the single market for data, aka the European common data space(s), and here too I look forward to reading the final formulation.

With the Council of the European Union and the European Parliament having reached a common text, what remains is final approval by both bodies. This should be concluded under the Spanish presidency that starts this weekend; the Data Act will then enter into force sometime this fall, with a grace period of some 18 months, until sometime in 2025.

There’s more this month: ITS Directive

The Intelligent Transport Systems Directive (ITS Directive) was originally created in 2010 to ensure the availability of data about traffic conditions etc., e.g. for (multi-modal) planning purposes. In the Netherlands, for instance, real-time information about traffic intensity is available in this context. The Commission proposed revising the ITS Directive in late 2021 to take into account technological developments such as automated mobility and on-demand mobility systems. This month the Council and European Parliament agreed a common text on the new ITS Directive. I look forward to close reading the final text, also for its connections to the Data Act above and its potential in the context of the European mobility data space. Between the Data Act and the ITS Directive I’m also interested in the position of in-car data. Our cars are increasingly mobile sensor platforms to which the owner/driver has little to no access, which should change imo.

Bookmarked ChatGPT sees Tweets: A Double-Edged Sword by Henk van Ess

Bing Chat is connected to the internet, allowing internet searches when you ask the chatbot something. This includes Twitter. It then weaves those online finds into the texts it puts together from your prompt. Henk van Ess shows how quickly the content from a Twitter message gets incorporated (and changed if additional messages are available). With just three tweets he influenced Bing Chat output. This also opens a pathway for influencing the chatbot and disseminating mis-info, especially since the recent quality changes over at Twitter. The feedback loop this creates (internet texts get generated based on existing internet texts, etc.) will easily result in a vicious circle. (In her recent talk Maggie Appleton listed this as one of her possible futures, using a metaphor I can’t unsee but which does describe it effectively: Human Centipede Epistemology.)

Bing/ChatGPT’s rapid response to tweets has a double-edged sword. Bing quickly corrects itself based on tweets … But those with specific agendas or biases may attempt to abuse the system … We’ve seen it all before. This is similar to Google Bombing…

Henk van Ess

Bookmarked Will A.I. Become the New McKinsey? by Ted Chiang in the New Yorker

Ted Chiang realises that corporates are best positioned to leverage the affordances of algorithmic applications, and that that is where the risk of the ‘runaway AIs’ resides. I agree that they are best positioned, because corporations are AI’s non-digital twin, and have been recognised as such for a decade.

Brewster Kahle said (in 2014) that corporations should be seen as the first generation of AIs, and Charlie Stross reinforced this (in 2017) by dubbing corporations ‘Slow AI’: corporations are context-blind, single-purpose algorithms, that single purpose being shareholder value. Jeremy Lent (in 2017) made the same point when he dubbed corporations ‘sociopaths with global reach’ and said that the fear of runaway AI focuses on the wrong thing, because “humans have already created a force that is well on its way to devouring both humanity and the earth in just the way they fear. It’s called the Corporation”. Basically our AI overlords are already here: they likely employ you. Of course existing Slow AI is best positioned to adopt its faster young, digital algorithms. As such it can be seen as the first step on the feared iterative path of runaway AI.

The doomsday scenario is … A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.

Ted Chiang

I’ll repeat the image I used in my 2019 blogpost linked above:

Your Slow AI overlords looking down on you, photo Simone Brunozzi, CC-BY-SA

I have installed AutoGPT and started playing with it. AutoGPT is a locally installed piece of software, run in a terminal window, that you can in theory give a goal to achieve and then let run until it achieves it. It’s experimental, so it is good advice to follow along with its steps and approve the individual actions it suggests.
It interacts with different generative AI tools (through your own API keys) and can initiate different actions, including online searches as well as spawning new interactions with LLMs like GPT-4 and using the results in its ongoing process. It chains these prompts and interactions together to get to a result (‘prompt chaining’).
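The chaining idea can be sketched in a few lines of Python. This is my own simplified illustration, not AutoGPT’s actual code; call_llm is a placeholder for whatever model API is configured:

```python
# Minimal sketch of prompt chaining: each step's output feeds the next prompt.
# call_llm is a stand-in; a real implementation would call an LLM API here.

def call_llm(prompt):
    # Placeholder: echo the prompt so the chain is visible without an API key.
    return f"[model answer to: {prompt}]"

def chain_prompts(goal, steps):
    """Run a list of prompt templates, feeding each result into the next."""
    context = goal
    for template in steps:
        prompt = template.format(context=context)
        context = call_llm(prompt)
    return context

result = chain_prompts(
    "list email addresses on a team page",
    [
        "Break this goal into concrete steps: {context}",
        "Execute the first step and summarise the outcome: {context}",
    ],
)
```

In AutoGPT the equivalent loop also decides between actions (search, browse, write to file) at each step, rather than only generating text.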

I had to tweak the scripts a little (they call python and pip, but need to call python3 and pip3 on my machine), but then it works.
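For illustration, the kind of edit involved can be done with sed; the script name and contents below are made up, the real fix was simply editing AutoGPT’s launcher script by hand:

```shell
# Illustrative only: create an example launcher script, then rewrite its
# interpreter calls from python/pip to python3/pip3.
printf 'python -m autogpt\npip install -r requirements.txt\n' > run_example.sh
sed -e 's/^python /python3 /' -e 's/^pip /pip3 /' run_example.sh > run_fixed.sh
```

An alternative that avoids editing scripts is making python/pip aliases or symlinks to the python3/pip3 binaries.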

Initially I have set it up with OpenAI’s API, as the online guides I found were using that. However, in the settings file I noticed I can also choose other LLMs, like the publicly available models on Hugging Face, as well as image-generating AIs.
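For illustration, those switches live in AutoGPT’s .env settings file. The variable names below are a sketch from memory and may differ between versions; the project’s own .env.template is the authoritative list:

```
# Illustrative .env fragment; check AutoGPT's .env.template for the
# exact variable names in your version. Key values are placeholders.
OPENAI_API_KEY=your-openai-key
# Image generation can be switched to a Hugging Face hosted model:
IMAGE_PROVIDER=sd
HUGGINGFACE_API_TOKEN=your-huggingface-token
```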

I first attempted to have it write scripts to interact with the hypothes.is API. It ended up in a loop about needing to read the API documentation but not finding it. At that point I did not yet provide my own interventions (such as supplying the link to the API documentation). When I did so later, it couldn’t come up with next steps, or ingested only the first few lines of the API documentation, which also led to empty next steps.
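What AutoGPT was circling around is not complicated to do directly. A minimal sketch of querying the hypothes.is search API (the endpoint and parameters follow the public API docs; the token and username are placeholders):

```python
# Build an authenticated request for the hypothes.is /api/search endpoint.
import urllib.parse
import urllib.request

API_BASE = "https://api.hypothes.is/api"

def build_search_request(token, **params):
    """Return a Request for /api/search with a Bearer token and query params."""
    url = f"{API_BASE}/search?{urllib.parse.urlencode(params)}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

req = build_search_request("MY_TOKEN", user="acct:someone@hypothes.is", limit=20)
# Fetching the JSON results would then be:
#   import json; data = json.load(urllib.request.urlopen(req))
```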

Then I tried a simpler thing: give me a list of all email addresses of the people in my company.
It did a Google search for my company’s website and then looked at it. The site is in Dutch, which it didn’t notice, and it concluded there wasn’t a page listing our team. I then provided it with the link to the team’s page, and it parsed that correctly, ending up with a list of email addresses saved to file, while also neatly summarising what we do and what our expertise is.
While this second experiment concluded successfully, it did require my own intervention, and the set task was relatively simple (scrape something from this here webpage). This was of limited usefulness, although it did take less time than doing it myself. It points to the need to have a pretty clear picture of what one wants to achieve and how to achieve it, so you can provide feedback and input at the right steps in the process.
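The scraping part of that task is, after all, a few lines of standard Python. A sketch (the sample HTML is made up; a real run would fetch the team page first):

```python
# Extract unique email addresses from a page's HTML with a simple regex.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(html):
    """Return the unique email addresses found in the text, in order."""
    seen = []
    for match in EMAIL_RE.findall(html):
        if match not in seen:
            seen.append(match)
    return seen

sample = '<li><a href="mailto:jan@example.com">jan@example.com</a></li>'
emails = extract_emails(sample)
```

The interesting part of what AutoGPT did was not this extraction, but deciding to do it, which is exactly where it needed my steering.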

As with other generative AI tools, doing the right prompting is key, and the burden of learning effective prompting lies with the human tool user; the tool itself does not provide any guidance in this.

I appreciate it’s an early effort, but I can’t reproduce the enthusiastic results others claim. My first estimation is that the claims I’ve seen are based on hypothetical tasks used as prompts, with enthusiasm about the plausible-looking outcomes. Whereas if you try an actual issue where you know the desired result, it easily falls flat. Similar to how ChatGPT can provide plausible texts except when the prompter knows what good quality output looks like for a given prompt.

It is tempting to play with this thing nevertheless, because of its positioning as a personal tool, as a potential step towards what I earlier dubbed narrow-band digital personal assistants. I will continue to explore, first by latching onto the APIs of generative AI models more open than OpenAI’s.

Enjoyed this one by Karl Schroeder a lot. A fun extrapolation of “not your keys, not your crypto”, set in a society in ecocollapse, with AI automating most work, institutions both public and private holding on to their assets while those disappear and crumble, surveillance everywhere, and everyone bumping into the demands and constraints of the planet’s carrying capacity. Will explore his other books.

Schroeder is a futurist and writes for clients as a foresight consultant.
Reading it made me ask a number of questions, around the development of AR/MR glasses, specific aspects of crypto and smart contracts (also because of their role in the book by Suarez I read right before this one), reducing the cost and increasing the scale of sensors in the environment, and gaming and virtualisation. I jotted those down while reading and have started exploring.