The System Modelling Perspective on Conspiracy Theories

Humans like to make sense out of the observed world


Peter Wurmsdobler


Humans like to make sense out of the observed world. Our sensors, e.g. our ears and eyes, feed our brain, which appears to have the innate ability to create a mental model of the perceived world, i.e. an intellectual surrogate of the world, with abstract concepts for objects and their relationships in time and space.

The Brain with David Eagleman gives an accessible explanation of the mechanisms and brain regions involved.

The personal models of our world, however, seem to be geared up mostly for causal relationships as we tend to think in terms of cause and effect; for instance, we are often focused on finding the root cause of any phenomenon. And yet, some phenomena are the result of complicated interactions in very complex systems.

Since they do not exhibit obvious causal relationships, our mind is tempted to latch on to causal substitutes, such as conspiracy theories, in order to satisfy our desire for simple causal models.

In the following, I would like to ask you to bear with me as I take you on a short journey through systems modelling, from very simple to complex systems, with the goal of showing that seeking causal models for complex systems in the form of conspiracy theories is futile and prone to deception, however appealing these theories might be.

Dynamics of Simple Deterministic Systems

World models in our brain are mostly implicit; they operate unconsciously without the need to understand intricate details and technicalities, or to offer mathematical equations. Nevertheless, these models help us navigate the world, e.g. the implicit motion model for cars held by every traffic participant, as one expects a car not to leap but to move in a predictable manner.

The objective of science and engineering is either to extract explicit models using formal descriptions and mathematical equations, quite commonly from first principles, or to use machine learning to obtain implicit models, e.g. neural networks. The following focuses on the former, as I have been involved in that sort of modelling task in my professional career.

Simplest Single Input Single Output System

The simplest system I can think of with one input and one output is a heating system: the input is the heating power through a dial, the output is the room temperature. All other things being equal, turning the dial up will increase the temperature with a certain lag due to the heater and the thermal inertia of the building. Cause and effect, easy to understand.

If we employed a simple controller that turns the dial in proportion to the difference between ideal and measured temperature according to a certain gain, the result is a closed loop system.

The closed loop system behaviour depends on the gain of the proportional controller. If the gain is too low, the temperature will follow the demand too slowly; if the gain is too high, the system will tend to oscillate or even become unstable, where a small disturbance can destabilise the whole system.
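The effect of the gain can be sketched with a tiny simulation. The following is a minimal illustration in Python, not a model of any real heating system: the heater and room are reduced to a single first-order lag, and all numbers (time constant, setpoint, gains) are invented for the purpose:

```python
def simulate(gain, steps=60, dt=1.0, tau=5.0, setpoint=6.0):
    """Proportional control of a first-order heating plant.

    temp is the room temperature above ambient; the plant is a
    first-order lag, d(temp)/dt = (power - temp) / tau, simulated
    with a forward-Euler step of size dt.
    """
    temp = 0.0
    trace = []
    for _ in range(steps):
        power = gain * (setpoint - temp)   # proportional controller
        temp += dt * (power - temp) / tau  # plant response
        trace.append(temp)
    return trace

slow = simulate(gain=0.5)    # low gain: sluggish, settles below the setpoint
wild = simulate(gain=12.0)   # high gain: the closed loop oscillates and diverges
```

With the low gain the temperature creeps towards, but never quite reaches, the setpoint, the well-known steady-state offset of purely proportional control; with the high gain the very same plant and controller, once connected in a loop, oscillate with growing amplitude.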

Bottom line: when connecting parts of a system in a loop, the closed system exhibits behaviour that is not visible in its constituents, in this case neither the controller nor the heater. There is no need to resort to obscure theories; the established body of system dynamics covering feedback loops explains such behaviour.

Dynamics of Cross-Coupled Feedback Systems

Adding a little bit of complexity, let’s look at a system that has two inputs and two outputs: the water supply system by the Water Works in the city of Vienna my institute was involved in some time ago.

The water supply system used to have two large pumping stations that each measured and controlled the water pressure on their own. However, the entire system experienced stability issues which could initially not be explained or mitigated.

Every evening after the news, when most people used their wash-rooms, the water pressure dropped in people's homes. Therefore, each plant independently ramped up its pumping power using some control gain, resulting in a locally higher water pressure.

With the two pumps working independently, the system overshot, with the result that both plants ramped their pumps down again, again independently; the system became unstable.
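The mechanism can be sketched with a toy simulation. The code below is purely illustrative and not the actual Vienna plant model: two proportional controllers, each seeing only its local pressure, are connected through a hypothetical hydraulic coupling term with made-up numbers:

```python
def simulate(coupling, gain=3.0, steps=80, dt=1.0, tau=3.0, setpoint=4.0):
    """Two pumping stations under independent proportional control.

    p[i] is the pressure deviation at station i; each pump also
    raises the pressure at the other station via the coupling term,
    which neither controller knows about.
    """
    p = [0.0, 0.0]
    for _ in range(steps):
        # each station reacts only to its own local measurement
        u = [gain * (setpoint - pi) for pi in p]
        p = [p[0] + dt * (u[0] + coupling * u[1] - p[0]) / tau,
             p[1] + dt * (u[1] + coupling * u[0] - p[1]) / tau]
    return p

alone = simulate(coupling=0.0)    # decoupled: both stations settle nicely
coupled = simulate(coupling=0.8)  # cross-coupled: the same gain goes unstable
```

With the coupling switched off, each controller behaves exactly as designed; with the coupling switched on, the very same gain, perfectly adequate for a single station, drives the joint system into growing oscillations, which is exactly what made the instability so hard to explain locally.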

Bottom line: what appeared inexplicable to each individual controller was only an effect of a cross-coupled system which had not been taken into account in the control system design. System behaviour is sometimes an artefact of coupled, complex systems where the coupling is not understood, or where the gain is too high, leading to oscillations or even instability.

Dynamics of Complex Multi-Agent Systems

Connecting many sub-system models yields a very complex system model. These kinds of models are used in many domains, such as engineering, economics, traffic simulation or biology. The scope and complexity of a model is usually defined by its purpose, or by what behaviour it is intended to explain, quite often for control purposes.

One special case of complex models arises when there are multiple instances of the same type of sub-system, called agents, each governed by very simple rules which can be expressed as mathematical equations.

An example of such a modelling approach is the multi-agent model for the explanation of the murmuration of birds, e.g. in the paper Self-organized aerial displays of thousands of starlings: a model:

The question of this paper, therefore, is whether such complex patterns can emerge by self-organization. In our computer model, called StarDisplay,

we combine the usual rules of co-ordination based on separation, attraction, and alignment with specifics of starling behavior: 1) simplified aerodynamics of flight, especially rolling during turning, 2) movement above a “roosting area” (sleeping site), and 3) the low fixed number of interaction neighbors (i.e., the topological range).

The authors model the starling behaviour using “social forces” due to the assumed desire of birds for separation, cohesion, alignment, roost attraction and active steering but also due to aerodynamics and some random forces. Their model generates behaviour which is qualitatively and quantitatively similar to observed flocks and shows that:

… the flocking maneuvers of starlings may result from local interactions only … neither perception of the complete flock nor any leadership or complex cognition is needed.
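A much-reduced flavour of such a model can be given in a few lines of code. The sketch below follows the generic boids-style rules of separation, cohesion and alignment only, not the StarDisplay model itself; all weights, radii and counts are invented for illustration:

```python
import math
import random

def step(birds, r_near=20.0, r_sep=1.0, w_sep=0.05, w_coh=0.01, w_ali=0.1, dt=1.0):
    """One update of a minimal boids-style flock.

    Each bird is a tuple (x, y, vx, vy) and reacts only to the
    neighbours within r_near; there is no leader and no global
    controller anywhere in the model.
    """
    new = []
    for i, (x, y, vx, vy) in enumerate(birds):
        nbrs = [b for j, b in enumerate(birds)
                if j != i and (b[0] - x) ** 2 + (b[1] - y) ** 2 < r_near ** 2]
        ax = ay = 0.0
        if nbrs:
            n = len(nbrs)
            cx = sum(b[0] for b in nbrs) / n       # neighbours' centre
            cy = sum(b[1] for b in nbrs) / n
            ax += w_coh * (cx - x)                 # cohesion: move towards it
            ay += w_coh * (cy - y)
            avx = sum(b[2] for b in nbrs) / n      # neighbours' mean velocity
            avy = sum(b[3] for b in nbrs) / n
            ax += w_ali * (avx - vx)               # alignment: match it
            ay += w_ali * (avy - vy)
            for bx, by, _, _ in nbrs:              # separation: gentle push-off
                d = math.sqrt((bx - x) ** 2 + (by - y) ** 2)
                if 0.0 < d < r_sep:
                    ax -= w_sep * (bx - x) / d * (r_sep - d)
                    ay -= w_sep * (by - y) / d * (r_sep - d)
        vx += ax * dt
        vy += ay * dt
        new.append((x + vx * dt, y + vy * dt, vx, vy))
    return new

random.seed(1)
flock = [(random.uniform(0.0, 4.0), random.uniform(0.0, 4.0),
          random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
         for _ in range(30)]
for _ in range(200):
    flock = step(flock)
```

Starting from random positions and headings, the birds end up flying in roughly the same direction while staying together, although no bird, and no outside entity, ever sees or steers the flock as a whole.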


In summary, as far as the modelling of complex systems is concerned, complex behaviour is quite often an artefact of the interaction between agents, each of which follows simple rules. It may appear that there is an entity controlling the system behaviour, but there simply is not, however convenient such an explanation would be.

With regard to social and economic systems, there are many agents on a global scale. Publications such as Revealed — the capitalist network that runs the world recognise that there are “too many to sustain collusion” and that the “network is unlikely to be result of a conspiracy”; a conspiracy would be a deceptively attractive explanation, but a most unlikely one given all the interactions in a complex system.

Complex, global systems may well lack robustness, as they tend to produce all sorts of behaviour such as economic cycles with their booms and busts, possibly due to gains being too high, with gain in a control engineering as well as a financial sense.

As an engineer I would say there is a design fault somewhere and perhaps lowering gains or adding some damping could help.

Originally published on Medium


Created by

Peter Wurmsdobler

Contributes to the technological foundations for the self-driving revolution at Five, UK. Interested in sustainable economies and renewable energy.






