Category Archives: Systems Theory

Reality as a System

(Note: my roadmap originally planned a post on Gödel’s Incompleteness Theorems, but that’s not going to happen. It’s a fascinating topic with some interesting applications, but it’s even more mathematically dense than a lot of my other stuff, and it isn’t strictly necessary, so I’m skipping it for now. Maybe I’ll come back to it later. Read the Wikipedia page if you’re interested.)

This post is the final cherry on top of this whole series on systems theory, the part where we finally get to make practical philosophical use of the abstract structure we’ve been building up. I’ve telegraphed the whole thing in the roadmap, and the thesis is in the title, so let’s just dive right in: reality is a system. It’s already laid out, more or less, in axioms #3 and #5.

We can also tie this in with our definitions of truth and knowledge. If the absolute underlying reality of what is (forming absolute truth) is a system, then the relative truth that we regularly refer to as “truth” is just a set of abstractions layered on top of the underlying reality.

Dogs and cats and chairs and tables are just abstractions on top of molecules. Molecules are just an abstraction on top of atoms. Atoms, on top of protons, electrons, and neutrons. Protons and neutrons, on top of quarks and other fundamental particles I don’t understand. The absolute underlying system is, in this view, not possible to know. Since we as persons are inside the system (we can in fact be seen as subsystems of it), we literally cannot model the entire thing with complete fidelity. It is fundamentally impossible. The best we can do is to model an abstraction within the bounds of the entropy of the system. This is, in some distant sense, a restatement of the circular trap.


Recursive Abstractions and Approximate Models

Given my previous definition of system simulation (aka modelling), it seems intuitive that a finite system cannot model itself except insofar as it is itself. Even more obviously, no proper subsystem of a system could simulate its “parent”: a proper subsystem by definition has a smaller size than the enclosing system, but would need to be at least as big in order to model it.

(An infinite subsystem of an infinite system is not a case I care to think too hard about, though in theory it could violate this rule? Although some infinities are bigger than others, so… ask a set theorist.)

However, an abstraction of a system can be substantially smaller (i.e. require fewer bits of information) than the underlying system. This means that a system can have subsystems which recursively model abstractions of their parents. Going back to our Game of Life glider example, this means that you could have a section of a Game of Life which computationally modelled the behaviour of gliders in that very same system. The model cannot be perfect (that would require the subsystem to be at least as large as its “parent”), so the abstraction must of necessity be incomplete, but as we saw in that example, being incomplete doesn’t make it useless.
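
To make the size gap concrete, here’s a rough back-of-the-envelope sketch in Python. The grid dimensions, glider count, and the particular encoding of a glider (position, phase, heading) are illustrative assumptions of mine, not canonical figures:

```python
import math

def grid_bits(n: int) -> int:
    """An n x n grid of on/off cells needs one bit per cell."""
    return n * n

def glider_bits(n: int, num_gliders: int) -> float:
    """Assume each glider is summarized by its position (two coordinates
    in [0, n)), one of 4 phases, and one of 4 diagonal headings."""
    per_glider = 2 * math.log2(n) + math.log2(4) + math.log2(4)
    return num_gliders * per_glider

n = 1024
print(grid_bits(n))        # 1048576 bits to represent the exact cell grid
print(glider_bits(n, 24))  # ~576 bits for 24 gliders: vastly smaller, but
                           # it can say nothing about non-glider cells
```

The abstraction is thousands of times smaller than the grid it describes, and that slack is exactly what lets a subsystem carry a model of (an abstraction of) its parent.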

Modelling Systems

Now that we have the link between systems theory and information theory explicitly on the table, there are a couple of other interesting topics we can introduce. For example, the famous Turing machine can both:

  1. Be viewed as a system.
  2. Model (aka simulate) any other possible system.

And it is on the combination of these points that I want to focus. First, I shall define the size of a system as the total number of bits that are needed to represent the totality of its information. This can of course change as the entropy of the system changes, so the size is always specific to a particular state of a system.

With this definition in hand (and considering as an example the Turing machine above), we can say that a system can be perfectly simulated by any other system whose maximum size is at least as large as the maximum size of the system being simulated. The Turing machine, given its unlimited memory, has an infinite maximum size and can therefore simulate any system. This leads nicely to the concept of being Turing complete.
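
To make the “Turing machine as a system” view concrete, here’s a minimal sketch in Python. The particular machine (it just appends a 1 to a string of 1s) is a toy of my own choosing; the point is only that the tape and head form the elements, while the transition table forms the rules:

```python
def run_turing_machine(transitions, tape, state="start", blank="_"):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Walk right past the 1s, write a 1 on the first blank, then halt.
transitions = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_turing_machine(transitions, "111"))  # -> "1111"
```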

(Note that unlimited memory is not in itself sufficient for Turing completeness. The system’s rules must also be sufficiently complex, or else the entropy of the system over time reduces to a constant value.)

Information Theory, Compression, and Representing Systems

In several of my last few posts I have touched on or made tangential reference to the topic known as information theory. It’s kind of a big and important field, so I’ll give you a few minutes to at least skim the Wikipedia entry before we continue.

Alright, ready? Let’s dive in. First, note that in my original definition of a system I defined an element as a mapping from each property in the system to a distinct piece of information. This was not an accident. Systems, fundamentally, are nothing more than sets of information bound together by rules for processing that information (rules which are themselves information, in the relevant sense). The set of properties is nothing more than a collection of useful labels for distinguishing pieces of information; labels are, of course, also a form of information.

As such, we have all the rather immense mathematical power of information theory available to us when we talk about systems. In hindsight, this should probably have been the very next post I wrote after the introduction to systems theory; all of the other parts I’ve written between then and now (specifically the ones on patterns, entropy and abstraction) make far more sense given this idea as context.

In this view, patterns and abstractions go hand in hand as ways of using the low entropy of a system to produce representations of that system using fewer bits. They are, in fact, a form of compression (and what I called an incomplete abstraction simply means that the compression is lossy).
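
As a small illustration of the compression framing (my own toy example, not a formal treatment), consider run-length encoding, which exploits exactly the kind of low-entropy regularity we’ve been calling a pattern. This particular scheme is lossless; an incomplete abstraction is the lossy analogue, discarding detail (like the exact cell configuration of a glider) that it cannot later recover:

```python
from itertools import groupby

def run_length_encode(row: str) -> list[tuple[str, int]]:
    """Collapse each run of identical symbols into (symbol, run length)."""
    return [(symbol, len(list(run))) for symbol, run in groupby(row)]

patterned = "0" * 20 + "1" * 20           # long runs: very low entropy
noisy     = "0110100111010001101011001"   # few usable regularities

print(run_length_encode(patterned))   # [('0', 20), ('1', 20)] -- 2 runs
print(len(run_length_encode(noisy)))  # many short runs: RLE barely helps
```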

The Emergence of Patterns

We kind of have a grasp of patterns and abstractions now; the last piece of that particular puzzle is the way such things emerge. Patterns and abstractions are not guaranteed to arise in any particular system (in particular, any apparent emergence in a purely stochastic system is likely to be nothing more than Poisson clumping) but as we have seen with gliders in Conway’s Game of Life, emergence does happen.

There are a few different ways emergence has been described, though for my purposes I will take my own stab at it. I shall say that:

Emergence is when the operation of the rules of a system produces a set of patterns in the system which form an abstraction whose inaccuracies (e.g. the case of colliding gliders from Monday’s example) are sufficiently contained that the abstraction can still be used as a reasonable model to predict the future state of the underlying system.

That’s rather long-winded, I know. To elaborate slightly on what I mean by “sufficiently contained inaccuracies”, consider the glider case. As long as the gliders don’t collide (and no other cells are active), our abstract system of gliders perfectly models the underlying system of Life: starting in the same state and following the appropriate rules will produce the same subsequent state. (If Life had probabilistic rules, we would need the additional caveat that both systems make the same random choices.) However, in the corner cases of colliding gliders (or when the initial state has non-glider cells active), the glider system diverges slightly from the underlying Life system. This is still an emergent model, though, both because the divergence between the abstraction and the underlying system is relatively small in most cases, and because it is easy to catch; even if we don’t have rules for handling a collision, we can easily notice when two gliders collide and consequently know that the abstraction is no longer necessarily correct.
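
Here’s a sketch of what “easy to catch” might look like in practice: run the glider abstraction as usual, but flag the model as suspect whenever two gliders get close enough to interact. The proximity threshold is a rough assumption of mine; a more careful check would compare the bounding boxes of the actual cell patterns:

```python
def abstraction_still_valid(gliders, threshold=4):
    """gliders: list of (x, y) positions. Returns False once any pair is
    near enough that the glider-level rules may no longer track the
    underlying cell-level system."""
    for i, (x1, y1) in enumerate(gliders):
        for x2, y2 in gliders[i + 1:]:
            if max(abs(x1 - x2), abs(y1 - y2)) < threshold:
                return False  # possible collision: fall back to cell rules
    return True

print(abstraction_still_valid([(0, 0), (10, 10)]))  # True: safe to abstract
print(abstraction_still_valid([(0, 0), (2, 3)]))    # False: gliders may collide
```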

Systems and Abstractions

My previous post was a bit of a mess, and I’m starting to think it’s because I put that topic too early in the sequence. This post (and possibly the next couple, we’ll see) should have come first. Anyways.

Abstractions are a key part of systems theory. Recalling our base terminology for systems, an abstraction is a way of talking about collections of elements as single entities. The glider from our trusty example of Conway’s Life is a perfect example of this. From the pure systems-level view of Life, there exists nothing but the grid of cells. However, the glider abstraction lets us talk about a particular set of cells in a particular pattern (there’s that word again) that exhibits a particular behaviour.

[Image: the movement of a “glider” in Conway’s Life (from Wikipedia).]

It’s interesting (and important) to note that gliders exhibit their own higher-level behaviour that can be expressed in rules without apparent reference to the underlying system’s rules: they cycle through a set of four states, and move diagonally by one cell each time they complete a cycle. The underlying system has no concept of movement; cells simply turn on and off – but we say that the gliders move, nonetheless.

Now consider a Life setup that consists of a couple dozen gliders scattered about, and nothing else (all other cells are “off”). What does this resemble? Another system! Except instead of talking about cells as elements, with the properties of location and being on or off, we talk about gliders as elements with the properties of location, state, and direction of movement. But it’s the same system; we say that the latter, formed by grouping the elements of the former, is an abstraction on top of it.
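
Here’s a minimal sketch of that abstract glider system in Python. The representation (a position, a four-state phase, and a diagonal heading) is my own illustrative encoding; notice that the higher-level rule never mentions cells at all:

```python
from dataclasses import dataclass

@dataclass
class Glider:
    x: int
    y: int
    phase: int = 0  # which of the four glider states it is in (0..3)
    dx: int = 1     # diagonal heading: (+1, +1) = down-right, etc.
    dy: int = 1

    def step(self) -> None:
        """One tick of the abstract system."""
        self.phase = (self.phase + 1) % 4
        if self.phase == 0:  # completed a full four-state cycle
            self.x += self.dx
            self.y += self.dy

g = Glider(x=0, y=0)
for _ in range(8):  # eight ticks = two full cycles
    g.step()
print(g.x, g.y)     # 2 2: the glider has moved two cells diagonally
```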

However, there’s a catch: what happens in our shiny new abstract system if two gliders collide? The simple rules above say nothing about collisions, so the gliders just sort of cross over each other and keep going – but of course reality is different. The underlying system doesn’t know about gliders, and so, following the “real” rules of cell life and death, the gliders may disappear entirely or form some other pattern (depending on the precise nature of the collision). It is here that the abstraction breaks down; it lets us hide the messy details of cell manipulation and deal with higher-level gliders, but there is a situation in which that simplicity doesn’t track reality and the abstraction is wrong.

(If you’re familiar with information theory then there’s all sorts of neat stuff we can say at this point about the informational content and rule complexity of abstractions versus the underlying system etc, but it’s not strictly necessary. Fun math though.)

In this case we call the glider-abstraction an incomplete system; it is a system that matches the underlying reality to a point, but not perfectly. The base system of cells is, naturally, called a complete system.

Patterns and Entropy

Our next foray into systems theory involves the definition of patterns and the study of entropy (in the information-theoretical sense). Don’t worry too much about the math; I’m going to be working with a simple, intuitive version for the most part, although if you have a background in computers or mathematics there are plenty of neat nooks and crannies to explore.

For a starting point, I will selectively quote Wikipedia’s opening paragraph on patterns (at time of writing):

A pattern, …is a discernible regularity… As such, the elements of a pattern repeat in a predictable manner.

I’ve snipped out the irrelevant bits, so the above definition is relatively meaty and covers the important points. First, a pattern is a discernible regularity. What does that mean? Well, unfortunately not a whole lot really, unless you’re hot on the concept of automata theory and recognizability. But it really doesn’t matter, since your intuitive concept of a pattern neatly covers all of the relevant facts for our purposes.

But what does this have to do with systems theory? Well, consider our reliable example, Conway’s Game of Life. A pattern in Life is a fairly obvious thing: a big long line of living cells, for example, is a pattern. This brings us to the second part of the above quote: the elements of a pattern repeat. This should be obvious from the example. Of course you can have other patterns in Life; a checkerboard grid is another obvious one, and the relatively famous glider is also a pattern.

It seems, on review, that I am doing a poor job of explaining patterns; however, I will leave the above for lack of any better ideas at the moment. Just rest assured that your intuitive knowledge of what a pattern is should be sufficient.

For the more mathematically inclined, a pattern can be more usefully defined in terms of its information-theoretical entropy (also known as Shannon entropy after its inventor Claude Shannon). Technically anything that is at all non-random (aka predictable) is a pattern, though usually we are interested in patterns of particularly low entropy.
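
For the curious, here’s a quick sketch of computing that entropy over the cell states of a grid, treating the observed values as an empirical distribution. (Note that this zeroth-order estimate only sees symbol frequencies and ignores correlations between cells, so it would miss the regularity of, say, a checkerboard.)

```python
from collections import Counter
from math import log2

def entropy_per_cell(cells: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over the observed
    frequency of each cell state."""
    counts = Counter(cells)
    total = len(cells)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(entropy_per_cell("0000000000000001"))  # ~0.34 bits/cell: strongly biased
print(entropy_per_cell("0110100111010010"))  # 1.0 bits/cell: no frequency bias
```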

Apologies, this has ended up rather incoherent. Hopefully next post will be better. Reading the links may help, if you’re into that sort of thing.