I’m currently reading a collection of invited essays on AI, all based on the writings of Norbert Wiener (1894-1964), specifically the first (1950) edition of The Human Use of Human Beings: Cybernetics and Society. (Later editions apparently omit a critical chapter, which seems to have been unwelcome during the Cold War.) I’m familiar with Wiener because his work is important in electronic engineering, my original (and unfinished) field of study, specifically his work on feedback and its role in control systems. He was firmly from the analogue era, but his formalisation of feedback and control into cybernetics is a building block of AI thinking. Reading about him before going to sleep, I dreamt of feedback systems, sensors and actuators (not my first technology dream). The next morning I felt the urge to sketch what I dreamt, so below is a series of images I created from it.
The most basic form of feedback is when a sensor (S) sends a signal to an actuator (A): S senses and informs A, which does something. My finger feels the burn of a flame, my nerves send the signal, my muscles make me withdraw. The thermostat senses the temperature and tells the heating system to start or stop. S watches the environment, and A influences it.
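The thermostat version of that S→A pair can be sketched in a few lines of code. All names here (`sensor`, `actuator`, `thermostat_step`) are illustrative, not from any real library:

```python
# Minimal sketch of the simplest feedback pair: a sensor (S)
# driving an actuator (A) through a shared environment.

def sensor(environment):
    """S: read the current temperature from the environment."""
    return environment["temperature"]

def actuator(environment, heating_on):
    """A: switch the heating, which in turn changes the environment."""
    environment["heating_on"] = heating_on
    environment["temperature"] += 0.5 if heating_on else -0.5

def thermostat_step(environment, setpoint=20.0):
    """One S -> A cycle: sense, then act on the reading."""
    reading = sensor(environment)
    actuator(environment, heating_on=reading < setpoint)

env = {"temperature": 18.0, "heating_on": False}
for _ in range(10):
    thermostat_step(env)
# the temperature climbs to the setpoint, then oscillates around it
```

The loop is closed through the environment: the actuator changes the very quantity the sensor reads on the next cycle, which is exactly what makes it feedback rather than a one-way signal.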
A slightly more complicated version of the same is one with an intermediate step in which the sensor’s signal is processed (P), resulting in a new signal that informs the actuator to act.
Such processing might collect input from not just one sensor but a whole range of them. Likewise, it may control not just one actuator but several.
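That many-sensors-in, many-actuators-out shape of the S→P→A chain might look like this minimal sketch (function names and thresholds are my own illustrative assumptions):

```python
# Sketch of the S -> P -> A chain, where P fuses several sensor
# readings into one signal that drives several actuators.

def process(readings):
    """P: combine multiple sensor readings into one control signal."""
    return sum(readings) / len(readings)  # e.g. the average temperature

def control_step(readings, setpoint=20.0):
    """Run the readings through P, then fan out to multiple actuators."""
    signal = process(readings)
    return {
        "heating": signal < setpoint,        # actuator 1
        "vent": signal > setpoint + 2.0,     # actuator 2
    }

commands = control_step([18.0, 19.0, 23.0])
# the three readings average out to 20.0, so neither actuator fires
```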
The environment is likely not empty but filled with other sensor/actuator pairs, where an actuator’s action registers not just on its own sensor but also on some other sensor, whose actuator’s actions in turn influence your own sensor’s readings. Now there’s a feedback loop that contains an active agent, or more than one, or many. This is the premise I used years ago in my thinking about information strategies.
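Two such coupled pairs can be sketched as agents acting on a shared variable: each one’s actuator changes what the other one’s sensor reads on the next cycle. The setup below (a shared value, two agents nudging it toward different setpoints) is purely illustrative:

```python
# Sketch of two coupled sensor/actuator agents closing a feedback
# loop through each other: each agent's action changes the shared
# environment that the other agent senses.

def agent_step(env, setpoint):
    """One agent: sense the shared value (S), nudge it toward its own setpoint (A)."""
    reading = env["value"]                       # S: sense the environment
    env["value"] += 0.1 * (setpoint - reading)   # A: act on the environment

env = {"value": 0.0}
for _ in range(50):
    agent_step(env, setpoint=10.0)  # agent A pushes the value toward 10
    agent_step(env, setpoint=0.0)   # agent B pushes the value toward 0
# neither agent gets its way: the shared value settles between the setpoints
```

The interesting part is that each agent, from its own point of view, just runs a plain feedback loop; the interdependence only exists because both loops pass through the same environment.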
An environment filled with active agents of various kinds, whose sensors and actuators influence each other, creates all kinds of interdependencies and levels of complexity that allow emergence. It also creates a need for new sensors that can capture that emergence by spotting patterns. This is the point where it is easy to see why Wiener’s thinking about feedback and control isn’t just an engineering aid, but also useful for looking at societal factors, and at AI.
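A "pattern sensor" of that kind differs from the earlier sensors in that it watches a stream of other readings rather than the environment directly. A minimal sketch, assuming a very simple pattern (a sustained upward trend):

```python
# Sketch of a higher-order sensor: it doesn't read the environment
# itself, but watches a stream of readings from other sensors and
# flags an emergent pattern -- here, a simple sustained upward trend.

def detects_trend(readings, window=5):
    """Flag when the last `window` readings are strictly increasing."""
    recent = readings[-window:]
    return len(recent) == window and all(
        a < b for a, b in zip(recent, recent[1:])
    )

stream = [3.0, 3.1, 2.9, 3.0, 3.2, 3.4, 3.5, 3.7]
# the last five readings (3.0, 3.2, 3.4, 3.5, 3.7) keep rising
```

Real pattern detection in a multi-agent environment would of course be far richer than a five-sample trend check; the point is only that the sensor's input is itself the output of other feedback loops.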
There was a Norbert Wiener association in my electronic engineering department, representing those interested in control technology. I also had a fellow student called Norbert, whom we nicknamed Nurbs for some reason. This got mashed up in my dream, where the unit of feedback turned out to be the Nurbs, as in microNurbs (μN) and TeraNurbs (TN). 😀