At State of the Net 2018 in Trieste Hossein Derakhshan (h0d3r on Twitter) talked about journalism and its future. Some of his statements have stuck with me in the past weeks, so yesterday I took the time to watch the video of his presentation again.

In his talk he discussed the end of news. He says that discussions about the erosion of business models in the news business, the quality of news, trust in sources and ethics are all sideshows to a deeper shift, one that is both cultural and social. News is a two-century-old format, representative of the globalisation of communications that came with the birth of the telegraph. All of a sudden events from around the globe were within your horizon, and being informed made you “a man of the world”. News also served as a source of drama in our lives: “Did you hear…”. These days those aspects of globalisation, time and drama have shifted.
Local, even hyperlocal, has become more important again at the cost of global perspectives, which Hossein sees taking place in things like buying local, but also in using Facebook to keep up with the lives of those around you. Similarly identity politics reduces the interest in other events to those pertaining to your group. Drama shifted away from news to performances and other media (Trump’s tweets, memes, our representation on social media platforms). News and time got disentangled. Notifications and updates come at any time from any source, and deeper-digging content is no longer tied to the news cycle. Journalism like the Panama Papers takes a long time to produce, but can also be published at any time without that having an impact on its value or reception.

News and journalism have become decoupled. News has become a much less compelling format, and in the words of Derakhshan is dying if not dead already. With the demise of text and reason and the rise of imagery and emotions, and the mess that journalism is in, what formats can journalism take to be all it can be?

Derakhshan points to James Carey, who said democracy and journalism are the same thing, as both are defined as public conversation. Hossein sees two formats in which journalism can continue. One is literature, long-form non-fiction. This can survive away from newspapers and magazines, both online and in the form of e.g. books. The other is cinema. There’s a rise in documentaries as a way to bring more complex stories to audiences, which also allows for conveying drama. It’s the notion of journalism as literature that stuck with me most at State of the Net.

For a number of years I’ve said that I don’t want to pay for news, but do want to pay for (investigative) journalism, and often people would respond that news and journalism are the same thing. Maybe I now finally have the vocabulary to better explain the difference I perceive.

I agree that the notion of public conversation is of prime importance. Not the screaming at each other on forums, Twitter or Facebook, but the way that distributed conversations can create learning, development and action, as a democratic act. Distributed conversations, like the salons of old, as a source of momentum, of emergent collective action (2013). Similarly, I position Networked Agency as a path away from the despair of being powerless in the face of change, and therefore as an alternative to falling for populist oversimplification. Networked agency in that sense is very much a democratising thing.

To celebrate the launch of the GDPR last Friday, Jaap-Henk Hoepman released his ‘little blue book’ (PDF) on Privacy Design Strategies (with a CC-BY-NC license). Hoepman is an associate professor with the Digital Security group of the ICS department at Radboud University.

I heard him speak a few months ago at a Tech Solidarity meet-up, and enjoyed his insights and pragmatic approaches (PDF slides here).

Data protection by design (together with a ‘state of the art’ requirement) forms the forward-looking part of the GDPR, where the minimum requirements are always evolving. The GDPR is designed to have a rising floor that way.
The little blue book has an easy-to-understand outline, which divides privacy by design into 8 strategies, each accompanied by a number of tactics, all of which can be used in parallel.

Those 8 strategies (shown in the image above) are divided into 2 groups, data oriented strategies and process oriented strategies.

Data oriented strategies:
Minimise (tactics: Select, Exclude, Strip, Destroy)
Separate (tactics: Isolate, Distribute)
Abstract (tactics: Summarise, Group, Perturb)
Hide (tactics: Restrict, Obfuscate, Dissociate, Mix)

Process oriented strategies:
Inform (tactics: Supply, Explain, Notify)
Control (tactics: Consent, Choose, Update, Retract)
Enforce (tactics: Create, Maintain, Uphold)
Demonstrate (tactics: Record, Audit, Report)

All come with examples, and the final chapters provide suggestions on how to apply them in an organisation.
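As a toy illustration of a few of the data-oriented tactics (Strip, Group, Obfuscate), here is a minimal Python sketch. The field names and the salt are made up for the example; this is not taken from the book itself:

```python
import hashlib

def minimise(record, allowed_fields):
    """Minimise/Strip: keep only the fields needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in allowed_fields}

def abstract_age(age, bucket=10):
    """Abstract/Group: replace an exact age with a coarse bracket."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def hide_id(user_id, salt):
    """Hide/Obfuscate: swap a direct identifier for a salted hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

record = {"user_id": "alice42", "age": 34, "city": "Utrecht", "shoe_size": 41}
safe = minimise(record, {"user_id", "age", "city"})   # shoe size not needed
safe["age"] = abstract_age(safe["age"])               # "30-39" instead of 34
safe["user_id"] = hide_id(safe["user_id"], salt="per-dataset-secret")
```

The point of the book is that several such tactics can be layered on the same dataset, each reducing exposure a bit further.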

The Washington Post now has a premium ‘EU’ option, suggesting you pay more for them to comply with the GDPR.

Reading what the offer entails of course shows something different.
The basic offer is the price you pay to read their site, but you must give consent for them to track you and to serve targeted ads.
The premium offer is the price you pay for a completely ad-free, and thus tracking-free, version of the WP, akin to what various other outlets and e.g. many mobile apps do too.

This of course has little to do with GDPR compliance. For the free and basic subscriptions they still need to be compliant with the GDPR, but you enter into a contract that includes your consent as part of that compliance. They still need to explain to you what they collect and what they do with it, for instance. And they do, e.g. listing all the partners they exchange visitor data with.

The premium version gives you an ad-free WP so the issue of GDPR compliance doesn’t even come up (except of course for things like commenting which is easy to handle). Which is an admission of two things:

  1. They don’t see any justification for how their ads work other than getting consent from a reader. And they see no hassle-free way to provide informed consent options, or granular controls to readers, that doesn’t impact the way ad-tech works, without running afoul of the rule that consent cannot be tied to core services (like visiting their website).
  2. They value tracking you at $30 per year.

Of course their free service is still forced consent, and thus runs afoul of the GDPR, as you cannot see their website at all without it.

Yet, just to peruse an occasional article, e.g. when following a link, that forced consent is nothing your browser can’t handle with a blocker or two, and a VPN if you want. After all, your browser is your castle.

Today I was at a session at the Ministry for Interior Affairs in The Hague on the GDPR, organised by the center of expertise on open government.
It made me realise how I actually approach the GDPR, and how I see all the overblown reactions to it, like sending all of us a heap of mail to re-request consent where none is needed, or even taking your website or personal blog offline. I find I approach the GDPR like I approach a quality assurance (QA) system.

One key change with the GDPR is that organisations can now be audited concerning their preventive data protection measures, which of course already mimics QA. (Next to that the GDPR is mostly an incremental change to the previous law, except for the people described by your data having articulated rights that apply globally, and having a new set of teeth in the form of substantial penalties.)

GDPR (in Dutch: AVG) mindmap
My colleague Paul facilitated the session and showed this mindmap of GDPR aspects. I think it misses the more future-oriented parts.

The session today had three brief presentations.

In one, a student showed some results from his thesis research on the implementation of the GDPR, for which he had spoken with a lot of data protection officers, or DPOs. This is a mandatory role for all public sector bodies, and also for some specific types of data processing companies. One of the surprising outcomes is that some of these DPOs saw themselves, and were seen as, ‘outposts’ of the data protection authority, in other words as enforcers or even potentially as moles. This is not conducive to a DPO fulfilling that part of their role which is about raising awareness of and sensitivity to data protection issues. It strongly reminded me of when, 20 years ago, I was involved in creating a QA system from scratch for my then employer. Some of my colleagues saw the role of the quality assurance manager as policing their work. It took effort to show that we were not building a straitjacket around them that kept them within strict boundaries, but providing a solid skeleton to grow on and move faster with, and that audits are not hunts for breaches of compliance but a way to make emergent changes in the way people work visible, and to incorporate professionally justified ones into that skeleton.

In another presentation a civil servant of the Ministry described creating a register of all person-related data being processed. What stood out most for me was the (rightly) pragmatic approach they took in describing current practices and data collections inside the organisation. This is a key element of QA as well: you work from descriptions of what actually happens, not of what ‘should’ or ‘ideally’ happens. QA is a practice rooted in pragmatism, where once a practice is described and agreed it will be audited.
Of course in the case of the Ministry it helps that they only have tasks mandated by law, so the grounds for processing are clear by default, and if they aren’t, the data should not be collected. This reduces the range of potential grey areas. Similarly for security measures: they already need to adhere to national security guidelines (called the national baseline information security), which likewise helps with avoiding new measures, proves compliance for them, and provides an auditable security requirement to go with it. This no doubt helped them take that pragmatic approach, taking its cues from what is really happening in the organisation, from what the professionals are really doing.

A third one dealt with open standards for both processes and technologies, by the national Forum for Standardisation. Since 2008 a growing list of currently some 40 standards is mandatory for Dutch public sector bodies. In this list you find a range of elements that are ready-made to help with GDPR compliance, in terms of support for the rights of those described by the data (such as the right to data export and portability), preventive technological security measures, and ‘by design’ data protection measures. Some of these are ISO norms themselves, or, like the aforementioned national baseline information security, a compliant derivative of such ISO norms.

These elements, the ‘police’ vs ‘counsel’ perspective on the role of a DPO, the pragmatism that needs to underpin actions, and the building blocks readily found elsewhere in your own practice already based on QA principles, made me realise and better articulate how I’ve been viewing the GDPR all along: as a quality assurance system for data protection.

With a quality assurance system you can still famously produce concrete swimming vests, but it will at least be done consistently. Likewise with the GDPR you will still be able to do all kinds of things with data. Big Data and developing machine learning systems are hard but hopefully worthwhile to do. With the GDPR it will just be hard in a slightly different way, but it will also be helped by establishing some baselines and testing core assumptions, while making your purposes and ways of working available for scrutiny. Introducing QA does not change the way an organisation works, unless it really doesn’t have its house in order. Likewise the GDPR won’t change your organisation much if you have your house in order either.

From the QA perspective on GDPR, it is perfectly clear why it has a moving baseline (through its ‘by design’ and ‘state of the art’ requirements). From the QA perspective on GDPR it is perfectly clear what the connection is to how Europe is positioning itself geopolitically in the race concerning AI. The policing perspective after all only leads to a luddite stance concerning AI, which is not what the EU is doing, far from it. From that it is clear how the legislator intends the thrust of GDPR. As QA really.

At least I think it is…. Personal blogs don’t need to comply with the new European personal data protection regulations (already in force but enforceable from next week May 25th), says Article 2.2.c. However my blog does have a link with my professional activities, as I blog here about professional interests. One of those interests is data protection (the more you’re active in transparency and open data, the more you also start caring about data protection).

In the past few weeks Frank Meeuwsen has been writing about how to get his blog GDPR compliant (GDPR and the IndieWeb 1, 2 and 3, all in Dutch), and Peter Rukavina has been following suit. Like yours, my e-mail inbox is overflowing with GDPR related messages and requests from all the various web services and mailing lists I’m using. I had been thinking about adding a GDPR statement to this blog, but clearly needed a final nudge.

That nudge came this morning as I updated the Jetpack plugin of my WordPress blog. WordPress is the software I use to create this website, and Jetpack is a module for it, made by the same company that makes WordPress itself, Automattic. After the update, I got a pop-up stating that in my settings a new option now exists called “Privacy Policy”, which comes with a guide and suggested texts to be GDPR compliant. I was pleasantly surprised by this step by Automattic.

So I used that to write a data protection policy for this site. It is rather trivial in the sense that this website doesn’t do much, yet it is also surprisingly complicated as there are many different potential rabbit holes to go down: not just comments or webmentions, but also the server logs my web host makes, statistics tools (some of which I don’t use but cannot switch off either), third-party plugins for WordPress, embedded material from data-hungry platforms like YouTube, etc. I have a relatively bare-bones blog (over the years I made it ever more minimalistic, most recently stripping out things like sharing buttons), and still, as I’m asking myself questions that normally only legal departments would ask, there are many aspects to consider. That is of course the whole point: that we ask these types of questions more often, not just of ourselves, but of every service provider we engage with.

The resulting Data Protection Policy is now available from the menu above.

Some links I think worth reading today.

Many tech companies are rushing to arrange compliance with the GDPR, Europe’s new data protection regulation. What I have seen landing in my inbox thus far is not encouraging. Like Facebook, other platforms clearly struggle, or hope to get away, with partially or completely ignoring the concepts of informed consent, unforced consent and proving consent. One would suspect the latter, as Facebook’s removal of 1.5 billion users from EU jurisdiction is a clear step to reduce potential exposure.

Where consent by the data subject is the basis for data collection: informed consent means consent needs to be explicitly given for each specific use of person-related data, based on an explanation, clear to laypeople, of the reason for collecting the data and how precisely it will be used.
Unforced means consent cannot be tied to the core services of the controlling/processing company when that data isn’t necessary to perform those services. In other words, “if you don’t like it, delete your account” is forced consent. Otherwise, the right to revoke one or several of the consents given becomes impossible to exercise.
Additionally, a company needs to be able to show that consent has been given, where consent is claimed as the basis for data collection.
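To make those three requirements concrete, here is a minimal sketch of a consent register: explicit per-purpose consent, revocable at any time, with a timestamped trail so that consent can later be demonstrated. All names and structure here are my own invention for illustration, not a reference implementation of any legal requirement:

```python
from datetime import datetime, timezone

class ConsentRegister:
    """Toy register: one explicit consent per purpose, revocable,
    with an append-only trail so consent can be demonstrated later."""

    def __init__(self):
        self.events = []  # append-only audit trail

    def give(self, subject, purpose, explanation):
        # Record exactly what the layperson was told, per purpose.
        self.events.append({
            "subject": subject, "purpose": purpose,
            "explanation_shown": explanation, "action": "given",
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def revoke(self, subject, purpose):
        self.events.append({
            "subject": subject, "purpose": purpose, "action": "revoked",
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def has_consent(self, subject, purpose):
        # The most recent event for this subject/purpose decides.
        for ev in reversed(self.events):
            if ev["subject"] == subject and ev["purpose"] == purpose:
                return ev["action"] == "given"
        return False  # no record: consent may never be assumed

reg = ConsentRegister()
reg.give("alice", "newsletter", "We will email you our weekly newsletter.")
reg.revoke("alice", "newsletter")
```

The key design choice is the append-only trail: deleting or overwriting consent events would make it impossible to demonstrate afterwards what was consented to and when.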

Instead I got this email from Twitter earlier today:

“We encourage you to read both documents in full, and to contact us as described in our Privacy Policy if you have questions.”

and then

followed by

You can also choose to deactivate your Twitter account.

The first two bits mean consent is not informed, and that it’s not even explicit consent, but merely assumed consent. The last bit means it is forced. On top of that, Twitter will not be able to show consent was given (as it is merely assumed from using their service). That’s not how this is meant to work. Non-compliant, in other words. (IANAL though.)

Just received an email from Sonos (the speaker system for streaming) about the changes they are making to their privacy statement. Like with FB in my previous posting this is triggered by the GDPR starting to be enforced from the end of May.

The mail reads in part

We’ve made these changes to comply with the high demands made by the GDPR, a law adopted in the European Union. Because we think that all owners of Sonos equipment deserve these protections, we are implementing these changes globally.

This is precisely the hoped-for effect, I think. Setting high standards in a key market will lift those standards globally. It is usually more efficient to work internally according to one standard than to maintain two or more in parallel. Good to see it happening, as it is a starting point for the positioning of Europe as a distinct player in global data politics, with ethics by design as the distinctive proposition. The GDPR isn’t written as a source of red tape and compliance costs, but to level the playing field and enable companies to compete by building on data protection compliance (by demanding ‘data protection by design’ and following ‘state of the art’, which are both rising thresholds). Non-compliance in turn is becoming the more costly option (if the GDPR really gets enforced, that is).

Data, especially lots of it, is the feedstock of machine learning and algorithms. And there’s a race on for who will lead in these fields. This gives it a geopolitical dimension, and makes data a key strategic resource of nations. In between the vast data lakes in corporate silos in the US and the national data spaces geared towards data driven authoritarianism like in China, what is the European answer, what is the proposition Europe can make the world? Ethics based AI. “Enlightenment Inside”.

French President Macron announced spending 1.5 billion in the coming years on AI last month. Wired published an interview with Macron. Below is an extended quote of I think key statements.

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences……It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution…..Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale….The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values.

Macron’s actions are largely based on the report by French MP and Fields Medal-winning mathematician Cédric Villani, For a Meaningful Artificial Intelligence (PDF).

My current thinking about what to bring to my open data and data governance work, as well as to technology development, especially in the context of networked agency, can be summarised under the moniker ‘ethics by design’. In a practical sense this means setting non-functional requirements at the start of a design or development process, or when tweaking or altering existing systems and processes. Non-functional requirements that reflect the values you want to safeguard or ensure, or potential negative consequences you want to mitigate. Privacy, power asymmetries, individual autonomy, equality, and democratic control are examples of this.

Today I attended the ‘Big Data Festival’ in The Hague, organised by the Dutch Ministry of Infrastructure and Water Management. Here several government organisations presented themselves and the work they do using data as an intensive resource. Stuff that speaks to the technologist in me. In parallel there were various presentations and workshops, and there I was most interested in what was said about ethical issues around data.

Author and interviewer Bas Heijne set the scene at the start by pointing to the contrast between the technology optimism concerning digitisation of years back and the more dystopian discussion now (triggered by things like the Cambridge Analytica scandal and cyberwars), and sought the balance in the middle. I think that contrast is largely due to the difference in assumptions underneath the utopian and dystopian views. The techno-optimist perspective, at least in the web scene I frequented in the late ’90s and early ’00s, assumed the tools would be in the hands of individuals, who would independently weave the world wide web, smart at the edges and dumb at the center. The dystopian views, including those of early critics like Jaron Lanier, assumed, and were proven at least partly right, a centralisation into walled gardens where individuals are mere passive users or an object, and no longer a subject with autonomy. This leads to wildly different development paths concerning power distribution, equality and agency.

In the afternoon a session with professor Jeroen van den Hoven of Delft University focused on making the ethical challenges more tangible, as well as pointing to the beginnings of practical ways to address them. It was the second time I heard him present in a month. A few weeks ago I attended an Ethics and Internet of Things workshop at the University of Twente, organised by the UNESCO World Commission on the Ethics of Science and Technology (COMEST). There he gave a very worthwhile presentation as well.

Van den Hoven “if we don’t design for our values…”

What I call ethics by design, a term I first heard from prof Valerie Frissen, Van den Hoven calls value sensitive design. That term sounds more pragmatic but I feel conveys the point less strongly. This time he also incorporated the geopolitical aspects of data governance, which echoed what Rob van Kranenburg (IoT Council, Next Generation Internet) presented at that workshop last month (and which I really should write down separately). It was good to hear it reinforced for today’s audience of mainly civil servants, as currently there is a certain level of naivety involved in how (mainly local governments) collaborate with commercial partners around data collection and e.g. sensors in the public space.

(Malfunctioning) billboard at Utrecht Central Station a few days ago, with a poorly thought-through camera in a public space (to measure engagement with adverts). Civic resistance has taped over the camera.

Value sensitive design, said Van den Hoven, should seek to combine the power of technology with the ethical values, into services and products. Instead of treating it as a dilemma with an either/or choice, which is the usual way it is framed: Social networking OR privacy, security OR privacy, surveillance capitalism OR personal autonomy, smart cities OR human messiness and serendipity. In value sensitive design it is about ensuring the individual is still a subject in the philosophical sense, and not merely the object on which data based services feed. By addressing both values and technological benefits as the same design challenge (security AND privacy, etc.), one creates a path for responsible innovation.

The audience saw responsibilities for both individual citizens and governments in building that path, and none thought that turning one’s back on technology in favour of fictitious simpler times would work, although some doubted whether there was still room to stem the tide.

In a case of synchronicity I’ve read Cory Doctorow’s novel Walkaway when I was ill recently, just as Bryan Alexander scheduled it for his near future science fiction reading group. I loved reading the book, and in contrast to some other works of Doctorow the storyline kept working for me until the end.

Bryan amazingly has managed to get Doctorow to participate in a webcast as part of the Future Trends in learning series Bryan hosts. The session is planned for May 16th, and I marked my calendar for it.

In the comments Vanessa Vaile shares two worthwhile links. One is an interesting recording from May last year at the New York Public Library, in which Doctorow and Edward Snowden discuss some of the elements and underlying topics and dynamics of the Walkaway novel.

The other is a review that resonates a lot with me. The reviewer writes how, in contrast with lots of other science fiction that takes one large idea or large change and extrapolates on it, Doctorow takes a number of smaller ideas and smaller changes, and then works out how those might interplay and weave new complexities, where the impact on “manufacturing, politics, the economy, wealth disparity, diversity, privilege, partying, music, sex, beer, drugs, information security, tech bubbles, law, and law enforcement” is all presented in one go.

It seems futuristic, until you realize that all of these things exist today.
….. most of it could start right now, if it’s the world we choose to create.

By not having any one idea jump too far from reality, Walkaway demonstrates how close we are, right now, to enormous promise and imminent peril.

That is precisely the effect reading Walkaway had on me, leading me to think how I could contribute to bringing some of the described effects about. And how some of those things I was/am already trying to create as part of my own work flow and information processes.

Last month the 27-year-old Slovak journalist Jan Kuciak was murdered, together with his fiancée Martina Kušnírová. As an investigative journalist collaborating with the OCCRP, he regularly submitted freedom of information (FOI) requests. His recent work concerned organized crime and corruption, specifically Italian organised crime infiltrating Slovak society. His colleagues now suspect that his name and details of what he was researching were leaked, by way of his FOI requests, to those he was researching, and that that made him a target. The murder of Kuciak has led to protests in Slovakia; the Interior Minister resigned last week because of it, and [update] this afternoon the Slovakian Prime Minister resigned as well. (The PM in late 2016 referred to journalists as ‘dirty anti-Slovak prostitutes‘ in the context of anti-corruption journalism and activism.)

There is no EU, or wider European, standard approach to FOI. The EU regulations for re-use of government information (open data), for instance, merely say they build on the local FOI regime. In some countries stating your name and stating your interest (the reason you’re asking) are mandatory; in others one or both aren’t. In the Netherlands it isn’t necessary to state an interest, nor mandatory to disclose who you are (although for obvious reasons you do need to provide contact details to receive an answer). In practice it can be helpful, in order to get a positive decision more quickly, to state your own name and explain why you’re after certain information. That also seems to be what Jan Kuciak did, which may have allowed his investigative targets to find out about him. In various instances, especially where a FOI request concerns someone else, those others may be contacted for consent to publication. Dutch FOI law contains such a provision, as does e.g. Serbian law concerning the anti-corruption agency. Norway has a tit-for-tat mechanism built into its public income and tax database: you can find out the income and tax of any Norwegian, but only by allowing your interest to be disclosed to the person whose tax filings you’re looking at.

I agree with Helen Darbishire, who heads Access Info Europe, when she says the EU should set a standard that prevents requesters from being forced to disclose their identity, as that potentially undermines a fundamental right, and that requesters’ identities should be safeguarded by the governments processing those requests. Access Info has called upon the European Parliament to act, in an open letter signed by many other organisations.

This week, as part of the Serbian open data week, I participated in a panel discussion, talking about international developments and experiences. A first round of comments was about general open data developments, the second round was focused on how all of that plays out on the level of local governments. This is one part of a multi-posting overview of my speaking notes.

Citizen generated data and sensors in public space

As local governments are responsible for our immediate living environment, they are also the ones most confronted with the rise in citizen generated data, and the increase in the use of sensors in our surroundings.

Where citizens generate data this can be both a clash as well as an addition to professional work with data.
A clash in the sense that citizen measurements may provide a counter-argument to government positions. That the handful of sensors a local government might employ show that noise levels are within regulations does not necessarily mean that people don’t experience it quite differently, subjectively or objectively, and bring their own data to support their arguments.
An addition in the sense that sometimes authorities cannot measure something within accepted professional standards. The Dutch institute for the environment and the Dutch meteorological office don’t measure temperatures in cities, because there is no way to calibrate such measurements (too many factors, like the heat radiance of buildings, are in play). When citizens measure those temperatures and there is a large enough number of sensors, the trends and patterns in those measurements are however of interest to those government institutions. The exact individual measurements are still of uncertain quality, but the relative shifts are a new layer of insight. With the decreasing prices of sensors and of the hardware needed to collect data, there will be more topics for which citizen-generated data will come into existence. The Measure Your City project in my home town, for which I have an Arduino-based sensor kit in my garden, is an example.
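Why uncalibrated sensors still yield useful relative shifts can be shown with a small sketch (the numbers are made up for illustration): subtracting each sensor's own mean cancels out a constant calibration bias, leaving the shared trend visible.

```python
from statistics import mean

def anomalies(readings):
    """Express each sensor's readings as deviations from its own mean,
    cancelling out a constant per-sensor calibration bias."""
    m = mean(readings)
    return [round(r - m, 2) for r in readings]

# Two uncalibrated sensors: one reads ~2 degrees high, one ~1 low,
# but both observe the same warming trend over five days.
sensor_a = [17.0, 17.5, 18.0, 18.5, 19.0]   # bias +2
sensor_b = [14.0, 14.5, 15.0, 15.5, 16.0]   # bias -1

print(anomalies(sensor_a))  # [-1.0, -0.5, 0.0, 0.5, 1.0]
print(anomalies(sensor_b))  # identical: the relative shift agrees
```

The absolute values disagree by 3 degrees, yet the anomaly series are identical, which is the kind of pattern-level insight those institutions can use.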

There’s a lot of potential for valuable use of sensor data in our immediate living environment, whether generated by citizens, corporations or local government. It does mean, though, that local governments need to become much more aware than they currently are of the (unintended) consequences these projects may have. Local government needs to be extremely clear on its own different roles in this context: rule-setter, guardian of our public spaces, instigator or user, and any or all of those at the same time. It needs an acute awareness of how to translate that into the way local government enters into contracts, sets limits, collaborates, and provides transparency about what exactly is happening in our shared public spaces. A recent article in the Guardian on the ‘living laboratories’ using sensor data in Dutch cities such as Utrecht, Eindhoven, Enschede and Assen shines a clear light on the type of ethical, legal and technical awareness needed. My company has recently created a design and thinking tool (in Dutch) for local governments to balance these various roles and responsibilities. This ties back to my earlier point about local governments not being data professionals, a lack of expertise that needs to be addressed.

Abraham Lincoln famously said in the 1860s, “Don’t believe everything you read on the internet”, and he’s right of course. George Washington already warned us a century earlier that “the greatest thing about Facebook is that you can quote something and totally make up the source.” Add to that the filter bubbles that algorithms create around you on Facebook, fake news, and the influencing that third parties attempt, and you can be certain that the trustworthiness of the internet is now even worse than it was in the 19th or 18th century.

Sidewalk Stencil: Abraham Lincoln
“Don’t believe everything you read on the internet.” Abraham Lincoln already hit the nail on the head in 1864.

Dealing with crap on the internet, however, often seems to be treated as something only for professionals: Facebook should filter better, or be more transparent; online forensic research like Bellingcat’s is the only way to disprove online deception. The problem is that this absolves you and me far too easily of our own responsibility in detecting crap. If something seems too funny, too coincidental, or too conveniently fitting into your own belief framework, it should trigger you to take a step back: to take the time to determine for yourself whether Lincoln really said that, whether a picture was really taken where and when it is claimed, and whether a source really exists or can be determined to be trustworthy.

To be able to detect crap on the internet, you need crap detection tools. My Brainstorms friend Howard Rheingold and others have put together a useful list of crap detection tools (of which I very often use the reverse image search tools like Tineye, to verify the actual origin of a photo). The list is well maintained and growing. The listed tools help you quickly check up on things before you share something, and so avoid reinforcing the vicious cycle that is making more and more social media platforms toxic.

Not spreading dubious material is a civic duty, just like cleaning up after yourself in a public space. This makes crap detection a critical digital information skill. Download or bookmark the list of crap detection tools, add some of the mentioned tools as plugins to your browser, and use them to your advantage.


Last week I received an e-mail from Mailchimp saying

Starting October 31, single opt-in will become the default setting for all MailChimp hosted, embedded, and pop-up signup forms. This change will impact all MailChimp users

When I read it, I thought it odd, as in the EU double opt-in is required, especially with the new General Data Protection Regulation (GDPR) coming into force next year.

Today I received another e-mail from Mailchimp that they were rolling their plans back for EU customers.

…because your primary contact address is in the EU, your existing forms will remain double opt-in. We made this decision after receiving a lot of feedback from EU customers who told us that single opt-in does not align with their business needs in light of the upcoming GDPR and other local requirements. We heard you, and we’re sorry that we caused confusion.

Now I am curious to see whether they will send out another e-mail in the coming weeks reinstating double opt-in for everyone else as well. Because as they already say in their own e-mail:

Double opt-in provides additional proof of consent, and we suggest you continue using double opt-in if your business will be subject to the GDPR.

That includes any non-EU business that has clients or indeed mailing list subscribers in the EU, as the rules follow the personal data of EU citizens. All those companies are subject to the GDPR as well.
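Why double opt-in provides that additional proof of consent becomes clear when you sketch the flow. The following is a minimal, hypothetical illustration (the function names and in-memory storage are my own assumptions, not Mailchimp’s actual implementation): a form submission only creates a pending record, and the subscription is recorded, with a timestamp, only after the address owner clicks a one-time confirmation link.

```python
import secrets
import time

pending = {}     # token -> (email, requested_at): unconfirmed signups
subscribed = {}  # email -> confirmed_at: this timestamp is the proof of consent

def request_signup(email):
    # Step 1: the signup form only creates a pending record and a one-time
    # token, which would be sent in a confirmation e-mail to the address.
    token = secrets.token_urlsafe(16)
    pending[token] = (email, time.time())
    return token

def confirm(token):
    # Step 2: only clicking the e-mailed link completes the subscription,
    # demonstrating that the owner of the address actually consented.
    if token not in pending:
        return False
    email, _ = pending.pop(token)
    subscribed[email] = time.time()
    return True

token = request_signup("reader@example.com")
# Under single opt-in the address would already be on the list here;
# under double opt-in it is not subscribed until confirmed.
assert "reader@example.com" not in subscribed
confirm(token)
assert "reader@example.com" in subscribed
```

The confirmation timestamp and token are exactly the kind of record a business subject to the GDPR can point to when asked to demonstrate consent, which is why dropping the second step sits uneasily with the regulation.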