Google has released statistics for the mobility and location data it gathers, among other sources, from all the mobile devices that share their location with it. Below are the results for our region.

It shows nicely the beginning of the soft lock-down: the announcement on March 12th, working from home becoming the default from the 13th, and the closure of all restaurants, schools etc. from the evening of March 15th. You see the enormous decline in transit use, the drop in general retail and recreation, the drop in workplace presence (first due to skiing holidays, then the work-from-home measure), and the peak in grocery and pharmacy visits right after the lock-down measures came into force, resulting in empty shelves in the supermarkets. This type of data is probably not extremely useful on a day-to-day basis, but it does help to get a general feeling for how well people are complying with measures, and to detect the moment when things return to their regular patterns. Debit and credit card transactions can similarly be, and are being, used to determine whether a community has returned to normal after for instance a hurricane or another emergency.

Last week I changed this site to provide better language mark-up. However, even though it changed the mark-up correctly, it didn’t solve the issue that made me look into it in the first place: that if you click a link to a posting in my RSS feed, your browser would not detect the right language and translate the posting for you.

As it turns out, Google Translate doesn’t make any real effort to detect the language or languages of a page. It only ever checks whether a default language is indicated in the opening <html> tag of a page (which my WordPress sets to English for the entire website). Only if no such default is set does it use a machine learning model (CLD2) to detect which language was likely used, and then it picks just the most likely one. It never checks for language mark-up within the page. Nor does it consider whether multiple languages were used, even though the machine learning model returns probabilities for more than one language if present in a page.

This is surprising on two levels. One, it disregards usable information even when provided (either the language mark-up, or the probabilities from the ML model). Two, it makes an entire family of wrong assumptions, of which the assumption that something or someone will always be monolingual is only the first. While discussing this in a conversation with Kevin Marks, he pointed to Stephanie Booth‘s presentation at Google that he helped set up 12 years ago, listing all that is wrong with the simplistic monolingual world-view of platforms and tech silos. A dozen years on it is still all true and relevant; nothing’s changed. No wonder Stephanie and I have been talking about multi-lingual blogging off and on for as long as we’ve been blogging.

Which all goes to say that my previous changes weren’t very useful. I realised that to make auto-translation of clicked links from my feed work, I needed to set the language attribute for an entire page in the <html> tag, and not try to mark up only the sections that aren’t in English. (Even if that is the wrong thing to do, because it also means I am saying that everything that isn’t content (menus, tags etc.) is in the declared language. And that isn’t the case: when I write postings in Dutch or German, the entire framework of my site is still in English.) After some web searching, I found a reference to writing a small function that changes the default language setting and calling it when writing the header of a page, which I adapted. The disadvantage is that this gets called for every page, regardless of whether it’s needed (it’s only ever needed for a single-post page, or the overview pages of Dutch and German postings). The advantage is that almost all language adaptations are now in a single spot in my theme. I’ve rolled back all previous changes to the single and category templates. Only the changes to the front page template I’ve kept, so that there is still correct language mark-up around front page postings that are not in English.


The function I added to functions.php in my child theme.
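The function itself isn’t reproduced here. As a minimal sketch, hooking WordPress’s language_attributes filter and assuming hypothetical category slugs ‘deutsch’ and ‘nederlands’ for my German and Dutch postings (adjust to your own taxonomy), it could look like this:

```php
<?php
// Sketch only: override the site-wide default language per page.
// The category slugs 'deutsch' and 'nederlands' are assumptions,
// standing in for however German and Dutch postings are marked.
function my_override_language_attributes( $output ) {
    // Single German postings, or the German category overview page
    if ( ( is_singular() && in_category( 'deutsch' ) ) || is_category( 'deutsch' ) ) {
        return 'lang="de-DE"';
    }
    // Single Dutch postings, or the Dutch category overview page
    if ( ( is_singular() && in_category( 'nederlands' ) ) || is_category( 'nederlands' ) ) {
        return 'lang="nl-NL"';
    }
    return $output; // everything else keeps the default, e.g. lang="en-US"
}
add_filter( 'language_attributes', 'my_override_language_attributes' );
```

Because this is a filter, the theme’s header template needs no changes: the standard language_attributes() call in the <html> tag picks up the override automatically, which is why the adaptations can live in this single spot.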


An example of a changed page language setting (to German), for a posting in German. (If you follow that link and view the source, you’ll see it.)
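In view source, the changed setting shows up roughly like this (a sketch; the exact language code depends on the WordPress locale settings):

```html
<!DOCTYPE html>
<!-- the override replaces the site-wide default (en-US) for this German posting -->
<html lang="de-DE">
```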

Favorited Chrome to limit full ad blocking extensions to enterprise users by 9to5Google

Google’s Chrome is not a browser, it’s advertisement delivery software. Adtech after all is where their profit is. This is incompatible with Doc Searls’ Castle doctrine of browsers, so Chrome isn’t fit for purpose.


image by Matthew Oliphant, license CC BY ND

Google shared that Chrome’s current ad blocking capabilities for extensions will soon be restricted to enterprise users. From Google’s SEC filing: “New and existing technologies could affect our ability to customize ads and/or could block ads online, which would harm our business.”
9to5Google

Doing this online is a neighbouring right in the new EU Copyright Directive. Photo by Alper, license CC BY

A move that surprises absolutely no one: Google won’t pay French publishers for snippets. France is the first EU country to transpose the new EU Copyright Directive into national law. This directive contains a new neighbouring right that says that if you link to something with a snippet of that link’s content (e.g. a news link with the first paragraph of the news item), you need to seek permission to do so, and that permission may come with a charge. In the run-up to the directive this was dubbed the ‘link tax’, although that falsely suggests it concerns any type of hyperlinking.
Google, not wanting to pay publishers for the right to use snippets with their links, will simply stop using snippets with those links.

Photo by Nicolas Alejandro, license CC BY

Ironically, the link at the top is to a publisher, Axel Springer, that lobbied intensively for the EU Copyright Directive to contain this neighbouring right. Axel Springer is also why we knew with certainty up front that this part of the Copyright Directive would fail. Years ago, in 2013, Germany, after lobbying by the same Axel Springer publishing house, created the same neighbouring right in its copyright law. Google refused to buy a license and stopped using snippets. Axel Springer saw its traffic from search results drop by 40%, others by 80%. They soon caved and provided Google with a free-of-charge license, to recoup some of the traffic to their sites.

Photo by CiaoHo, license CC BY

This element of the law failed in Germany, and it failed in Spain in 2015 as well. Axel Springer, far from being discouraged, touted this as proof that Google needed to be regulated, and continued lobbying for the same provision to be included in the EU Copyright Directive. With success, despite everyone else explaining how it wouldn’t work there either. It comes as no surprise, therefore, that now that the Copyright Directive is coming into force in French law, it has the exact same effect. Wait for French publishers to not exercise their new neighbouring rights in 3, 2, 1…

Photo by The JH Photography, license CC BY

News publishers have problems, I agree. Extorting anyone linking to them is no way to save their business model though (dropping toxic adtech, however, might actually help). It will simply mean fewer effective links to them, resulting in less traffic, in turn resulting in even less advert revenue for them (a loss exceeding any revenue they might hope to get from link snippet licenses). This does not demonstrate the monopoly of Google (though I don’t deny its real dominance); it demonstrates that you can’t have your cake and eat it too (determining how others link to you and getting paid for it, while keeping all your traffic as is), and it doesn’t change that news as a format is toast.

Photo by Willy Verhulst, license CC BY ND

Donald Clark writes about the use of voice tech for learning. I find I struggle enormously with voice. While I recognise several aspects put forward in that posting as likely useful in learning settings (auto-transcription, text-to-speech, oral traditions), others remain barriers to adoption for me.

For taking in information as voice: podcasts are mentioned as a useful tool, but they don’t work for me at all. I get distracted after about 30 seconds. The voices drone on, and there’s often tons of fluff as the speaker is trying to get to the point (often a lack of preparation, I suppose). I don’t have the moments in my day that I know others use to listen to podcasts: walking the dog, sitting in traffic, going for a run. Reading a transcript is much faster, also because you get to skip the bits that don’t interest you, or reread sections that do. You can’t do that when listening, because you don’t know when an uninteresting segment will end, or when it might segue into something of interest. And then you’ve listened to the end and can’t get those lost minutes back. (Videos have the same issue, or rather I have the same issue with videos.)

For using voice to ask or control things: there are obvious privacy issues with voice assistants. Having active microphones around, for one. Even if they are supposed to fully activate only upon the wake-up word, they get triggered by false positives. And they don’t distinguish between me and other people they maybe shouldn’t respond to. A while ago I asked around in my network how people use their Google and Amazon microphones, and the consensus was that most settle on a small range of specific uses. For those, cloud processing of what these microphones record in your living room shouldn’t be needed; they could be handled locally, with only novel questions or instructions being processed in the cloud. (Of course, that’s not the business model of these listening devices.)

A very different factor in using voice to control things, or for instance to dictate, is self-consciousness. Switching on a microphone in a meeting usually has a silencing effect. For dictation, I won’t dictate text to software at a client’s office, or while in public (like on a train). Nor will I talk to my headset while walking down the street. I might do it at home, but only if I know I’m not distracting others around me. In the cases where I did use dictation software (which nowadays works remarkably well), I found it clashes with my thinking and formulating. Ultimately it’s easier for me to shape sentences on paper or on screen, where I see them take shape in front of me. When dictating, it easily descends into meaninglessness, and it’s impossible to structure. Stream-of-thought dictation is the only bit that works somewhat, but that needs a lot of cleaning up afterwards. Judging by all the podcasts I’ve sampled over the years, this happens to more people when confronted with a microphone (see the paragraph above). If it’s something more prepared, like a lecture or presentation, it might be different, but those types of speech have usually been prepared in writing, so there is likely a written source for them already. In any case, dictation never saved me any time. It is of course very different if you don’t have the use of your hands; then dictation is your door to the world.

It makes me wonder: how are voice services helping you? How do they save you time or effort? In which cases are they more novelty than effective?