…that of all the categories, the three where non-state organisations are targeted with dis-/malinformation (i.e. SN, NN, and PN) are the most effective in enabling the agents to reach their malicious goals. The best example is still how US and UK state organisations duped independent and professional media outlets such as the New York Times into selling the Iraq war to the public.
The model thus encourages concentrating funds and efforts on non-state organisations, to help them resist information warfare.
He goes on to say that public protection against public agents is too costly, or too complicated:
the public is easy to target but very hard (and expensive) to protect – mainly because of their vast numbers, their affective tendencies, and the uncertainty about the kind and degree of the impact of bad information on their minds
I feel this is where our individual civic duty comes into play as inoculation: doing crap detection, calling it out when possible, or at least not spreading it.
I came across an interesting article, and by extension the techzine it was published in: Logic.
The article was about the problematic biases and assumptions in the model of urban development used in the popular game SimCity (one of those time sinks where my 10.000 hours brought me nothing 😉 ). And how that, unintentionally (the SimCity creator just wanted a fun game), may have influenced how people look at the evolution of cityscapes in real life, in ways the original 1960s work the game is based on never did. The article is a fine example of cyber history / archeology.
The magazine it was published in, Logic (twitter), started in the spring of 2017 and is now reaching issue 7. Each issue has a specific theme, around which contributions are centered. Intelligence, Tech against Trump, Sex, Justice, Scale, Failure, Play, and soon China, have been the topics until now.
I’ve ordered the back issues, and subscribed (though technically it is cheaper to keep ordering back-issues). They pay their contributors, which is good.
Cover for the upcoming edition on tech in China. Design (like all design for Logic) by Xiaowei R. Wang.
It obviously makes no sense to block the mail system if you disagree with some of the letters sent. The deceptive method of blocking used here, targeting the back-end servers so that mail traffic simply gets ignored, while Russian Protonmail users still seemingly can access the service, is another sign that they’d rather not let you know blocking goes on at all. This is an action against end-to-end encryption.
The obvious answer is to use more end-to-end encryption, and so increase the cost of surveillance and repression. Use my protonmail address as listed on the right, or use PGP using my public key on the right to contact me. Other means of reaching me with end-to-end encryption are the messaging apps Signal and Threema, as well as Keybase (listed on the right as well).
Russia has told internet providers to enforce a block against encrypted email provider ProtonMail, the company’s chief has confirmed. The block was ordered by the state Federal Security Service, formerly the KGB, according to a Russian-language blog, which obtained and published the order aft…
Bookmarked for suggesting a good use of a QR code: put it on an artefact for people to get a printable version.
I wholeheartedly agree with Jeremy: print stylesheets are one more feature of universal website design, and they go together rather well with QR codes, no matter how ugly, abused and underused those are.
There luckily is an alternative to Google’s Chart API (whose shutdown had been announced for …
Interesting, yet it basically boils down to actively exercising your ‘free will’. It assumes a blank slate for the hacking, where I haven’t deliberately sought out information and contacts on certain topics. And then it suggests doing precisely that as a remedy. The key quote for me here is “Humans are hacked through pre-existing fears, hatreds, biases and cravings. Hackers cannot create fear or hatred out of nothing. But when they discover what people already fear and hate it is easy to push the relevant emotional buttons and provoke even greater fury. If people cannot get to know themselves by their own efforts, perhaps the same technology the hackers use can be turned around and serve to protect us. Just as your computer has an antivirus program that screens for malware, maybe we need an antivirus for the brain. Your AI sidekick will learn by experience that you have a particular weakness – whether for funny cat videos or for infuriating Trump stories – and would block them on your behalf.”: Yuval Noah Harari on the myth of freedom
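Harari’s “antivirus for the brain” can be made concrete with a toy sketch. This is my own illustration, not anything from the article: a trivial keyword filter standing in for the “AI sidekick” that has learned a user’s weak spots (all names and keywords below are made up).

```python
# Toy sketch of an "antivirus for the brain": screen a feed against
# a profile of known weaknesses. A real sidekick would learn this
# profile from behaviour; here it is hard-coded for illustration.

WEAK_SPOTS = {
    "funny cat videos": ["cat video"],
    "infuriating Trump stories": ["trump outrage"],
}

def screen_feed(items, weak_spots):
    """Return only the items that do not match a known weakness."""
    kept = []
    for item in items:
        text = item.lower()
        if any(kw in text for profile in weak_spots.values() for kw in profile):
            continue  # block: this item pushes a known emotional button
        kept.append(item)
    return kept

feed = [
    "Hilarious cat video compilation",
    "Long read on urban planning",
    "Trump outrage of the day",
]
print(screen_feed(feed, WEAK_SPOTS))  # only the urban planning item survives
```

The interesting (and unsettling) design question the quote raises is who controls the profile: the same mechanism protects you or manipulates you depending on who writes the keyword list.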
This is an important issue, always. I recognise it from my work for the World Bank and UN agencies. Is what you’re doing actually helping, or is it shoring up authorities that don’t match with your values? And are you able to recognise it and withdraw when you cross the line from the former to the latter? I’ve known entrepreneurs who kept a client blacklist of sectors, governments and companies, but often it isn’t that clear cut. I’ve avoided engagements in various countries over the years, but every client engagement can be rationalised: How McKinsey Has Helped Raise the Stature of Authoritarian Governments, and when the consequences come back to bite, Malaysia files charges against Goldman-Sachs
Some things I thought worth reading in the past days
A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and the code, to arrive at peace of mind through pretend objectivity. By claiming that it’s ‘the algorithm deciding’, you make ML a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
These new Google patents show how problematic the current smart home efforts are, including the precursor that are the Alexa and Echo microphones in your house. They are stripping you of agency, not providing it. These particular ones also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else’s tools (here not just Google but your health insurer, your landlord etc) to your home doesn’t make it smart but a self-censorship promoting escape room. A fractal of the panopticon. We need to start designing more technology that is based on distributed use, not on a centralised controller: Google’s New Patents Aim to Make Your Home a Data Mine
An excellent article by the NYT about Facebook’s slide to the dark side. When the student dorm room excuse “we didn’t realise, we messed up, but we’ll fix it for the future” defence fails, and you weaponise your own data driven machine against its critics. Thus proving your critics right. Weaponising your own platform isn’t surprising but very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis
Some of these takeaways from the article just mentioned we should keep top of mind when interacting with or talking about Facebook: FB knew very early on about being used to influence the US 2016 election and chose not to act. FB feared backlash from specific user groups and opted to unevenly enforce their terms of service/community guidelines. Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue. FB weaponised their own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times’s Investigation
[update] Apparently all the commotion is causing Zuckerberg to think FB is ‘at war’, with everyone it seems, which is problematic for a company whose mission is to open up and connect the world, and which is based on a perception of trust. A bunker mentality probably doesn’t bode well for FB’s corporate culture, and hence its future, either: Facebook At War.
Some links I thought worth reading the past few days
Peter Rukavina pointed me to this excellent posting on voting, in the context of violence as a state monopoly and how that vote contributes to violence. It’s this type of long-form blogging that I often find so valuable, as it shows you the detailed reasoning of the author. Where on FB or Twitter would you find such argumentation, and how would it ever surface in an algorithmic timeline? Added Edward Hasbrouck to my feedreader: The Practical Nomad blog: To vote, or not to vote?
This quote is very interesting. Earlier in the conversation Stephen Downes mentions “networks are grown, not constructed” (true for communities too). Tanya Dorey adds how from the perspective of indigenous or other marginalised groups ‘facts’ may be different, and that arriving at truth therefore is a process: “For me, “truth growing” needs to involve systems, opportunities, communities, networks, etc. that cause critical engagement with ideas, beliefs and ways of thinking that are foreign, perhaps even contrary to our own. And not just on the content level, but embedded within the fabric of the system et al itself.”: A conversation during EL30.mooc.ca on truth, data, networks and graphs.
This article has a ‘but’ title, but actually is a ‘yes, and’. Saying ethics isn’t enough because we also need “A society-wide debate on values and on how we want to live in the digital age” is saying the same thing. The real money quote though is “political parties should be able to review technology through the lens of their specific world-views and formulate political positions accordingly. A party that has no position on how their values relate to digital technology or the environment cannot be expected to develop any useful agenda for the challenges we are facing in the 21st century.” : Gartner calls Digital Ethics a strategic trend for 2019 – but ethics are not enough
One of the essential elements of the EU GDPR is that it applies to anyone holding data about EU citizens. As such it can set a de facto standard globally. As with environmental standards, market players will tend to use one standard, not multiple, for their products, and so the most stringent one tops the list. It’s an element in how data is of geopolitical importance these days. This link is an example of how GDPR is being adopted in South Africa: Four essential pillars of GDPR compliance
A statement like “Therefore, our primary focus is to get millions of Q members registered” makes this initiative sound very spammy and pyramid-like, banking as they do on FOMO. Having everyone wait for whatever they plan until they have millions of users is an odd way of getting those users. Why not have something of value now, so that it brings users in? Anyway, I have an account, and you are invited. More info on: Initiative Q
Some links I thought worth reading the past few days
On how blockchain attempts to create fake scarcity in the digital realm. And why banks etc therefore are all over it: On scarcity and the blockchain by Jaap-Henk Hoepman
Doc Searls has consistently good blogposts about the adtech business, and how it is detrimental to publishers and citizens alike. In this blogpost he sees hope for publishing. His lists on adverts and ad tech I think should be on all our minds: Is this a turning point for publishing?
Offline figures prominently in my information routines, but usually not in my tools. It turns out there is a movement to put offline front and center as a design principle: Designing Offline-First Web Apps
Hoodie is a backendless tool for building webapps, with an offline-first starting point: hood.ie intro
Haven’t tested it yet, but this type of glue we need much more of, to reduce the cost of leaving silos, and to allow people to walk several walled gardens at the same time as a precursor to that: Granary
Some links I thought worth reading the past few days
Initial circumstances mostly trump intrinsic capabilities; basically, it’s about the evolutionary space available. Delayed gratification is based on affluence at the outset, not indicative of doing better in future: Why Rich Kids Are So Good at the Marshmallow Test
It’s not a problem but a challenge to stick to enlightenment ideals in developing AI. Privacy and using big data aren’t opposites. Let’s not confuse purposes and outcomes, and let’s explore hidden assumptions. EU-style AI efforts are merely hard in a different way than the surveillance-capitalism variety in the US and the data-driven authoritarianism variety in China: AI Has a Big Privacy Problem And Europe’s New Data Protection Law Is About to Expose It
The first example I’ve come across that looks at using blockchain for a local exchange and trading system (LETS), a local currency. Not sure why fiat-currency fears like ‘managing supply and demand’ of coins are mentioned, when you tie creation to a transaction as they describe: Hullcoin: can blockchain unlock the hidden value in Hull’s economy?
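The point about supply can be made concrete with a toy mutual-credit ledger. This is my own sketch of the general LETS idea, not Hullcoin’s actual design: because credit is created by the transaction itself (buyer goes into debit, seller into credit), balances always net out to zero and there is no coin supply to ‘manage’.

```python
# Minimal sketch of a LETS-style mutual-credit ledger: money is
# created and destroyed by transactions, so the total is always zero.
from collections import defaultdict

class MutualCreditLedger:
    def __init__(self):
        self.balances = defaultdict(int)

    def trade(self, buyer, seller, amount):
        """Record a trade: the buyer goes into debit, the seller into credit."""
        self.balances[buyer] -= amount
        self.balances[seller] += amount

ledger = MutualCreditLedger()
ledger.trade("alice", "bob", 10)   # 10 units of credit created here
ledger.trade("bob", "carol", 4)
print(dict(ledger.balances))            # individual balances may be negative
print(sum(ledger.balances.values()))    # → 0, always
```

In such a system ‘supply’ is simply the sum of outstanding debits, which grows and shrinks with trading activity; the design questions are instead about trust and debit limits, not monetary policy.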