Modelling Systems

Now that we have the link between systems theory and information theory explicitly on the table, there are a couple of other interesting topics we can introduce. For example, the famous Turing machine can both:

  1. Be viewed as a system.
  2. Model (aka simulate) any other possible system.

And it is on the combination of these points that I want to focus. First, I shall define the size of a system as the total number of bits that are needed to represent the totality of its information. This can of course change as the entropy of the system changes, so the size is always specific to a particular state of a system.
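As a rough illustration of what I mean by size (this sketch is mine, and the details are just one way of doing it): take a Game of Life grid as the example system and approximate the size of a state by the length, in bits, of a losslessly compressed encoding. The compressed length is only a crude stand-in for the true information content, but it shows how the same grid needs far fewer bits in an orderly state than in a disordered one.

    import random
    import zlib

    def grid_size_bits(grid):
        # Approximate the "size" of a grid state: the length in bits of a
        # losslessly compressed encoding (a crude stand-in for information content).
        packed = bytes(cell for row in grid for cell in row)  # one byte per cell
        return len(zlib.compress(packed)) * 8

    rows, cols = 64, 64

    # A highly ordered, low-entropy state: every cell dead.
    empty = [[0] * cols for _ in range(rows)]

    # A disordered, high-entropy state: each cell alive with probability 0.5.
    noisy = [[random.randint(0, 1) for _ in range(cols)] for _ in range(rows)]

    print("ordered state:   ", grid_size_bits(empty), "bits")  # small
    print("disordered state:", grid_size_bits(noisy), "bits")  # far larger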

With this definition in hand (and taking the Turing machine above as an example), we can say that a system can be perfectly simulated by any other system whose maximum size is at least as large as the maximum size of the system being simulated. The Turing machine, given its unlimited memory, has an infinite maximum size and can therefore simulate any system. This leads nicely to the concept of being Turing complete.

(Note that an unlimited memory is not in itself sufficient for Turing completeness. The system’s rules must also be sufficiently complex, or else the system’s entropy eventually settles to a constant value over time.)


Information Theory, Compression, and Representing Systems

In several of my last few posts I have touched on or made tangential reference to the topic known as information theory. It’s kind of a big and important field, so I’ll give you a few minutes to at least skim the Wikipedia entry before we continue.

Alright, ready? Let’s dive in. First, note that in my original definition of a system I defined an element as a mapping from each property in the system to a distinct piece of information. This was not an accident. Systems, fundamentally, are nothing more than sets of information bound together by rules for processing that information (rules which are themselves information, in the relevant sense). The set of properties is nothing more than a collection of useful labels for distinguishing pieces of information; labels are, of course, also a form of information.
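To make that definition concrete, here is a tiny sketch of my own (the property names and the "decay" rule are invented purely for illustration): an element is just a mapping from property labels to values, and a rule for processing elements is itself expressible as information, in this case as code.

    # An element maps each property of the system to a piece of information.
    cell = {"x": 3, "y": 5, "alive": True}

    # The rules for processing that information are themselves just more
    # information; here a toy "decay" rule, under which every cell dies.
    def decay_rule(element):
        return {**element, "alive": False}

    next_cell = decay_rule(cell)  # {"x": 3, "y": 5, "alive": False}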

As such, we have all the rather immense mathematical power of information theory available to us when we talk about systems. In hindsight, this should probably have been the very next post I wrote after the introduction to systems theory; all of the other parts I’ve written between then and now (specifically the ones on patterns, entropy and abstraction) make far more sense given this idea as context.

In this view, patterns and abstractions go hand in hand as ways of using the low entropy of a system to produce representations of that system using fewer bits. They are, in fact, a form of compression (and what I called an incomplete abstraction simply means that the compression is lossy).
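To make the compression analogy concrete, here is a small sketch of my own (the row of cells is invented for illustration). A run-length encoding exploits the regularity of a patterned row of Life cells and can reconstruct it exactly, while a cruder abstraction that keeps only the live-cell count is smaller still but lossy: many different rows share the same summary, so the original can no longer be recovered.

    from itertools import groupby

    row = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]  # a patterned row of cells

    # Lossless: run-length encoding exploits the repetition (a complete abstraction).
    rle = [(cell, len(list(run))) for cell, run in groupby(row)]
    decoded = [cell for cell, count in rle for _ in range(count)]
    assert decoded == row  # the pattern lets us reconstruct the row exactly

    # Lossy: keep only the number of live cells (an incomplete abstraction).
    summary = sum(row)  # 8 -- many different rows would give the same summary

    print("original:          ", row)
    print("run-length encoded:", rle)
    print("lossy summary:     ", summary)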

Patterns and Entropy

Our next foray into systems theory involves the definition of patterns and the study of entropy (in the information-theoretical sense). Don’t worry too much about the math; I’m going to be working with a simple, intuitive version for the most part, although if you have a background in computing or mathematics there are plenty of neat nooks and crannies to explore.

For a starting point, I will selectively quote Wikipedia’s opening paragraph on patterns (at time of writing):

A pattern, …is a discernible regularity… As such, the elements of a pattern repeat in a predictable manner.

I’ve snipped out the irrelevant bits, so the above definition is relatively meaty and covers the important points. First, a pattern is a discernible regularity. What does that mean? Well, unfortunately, not a whole lot, unless you’re hot on automata theory and the concept of recognizability. But it really doesn’t matter, since your intuitive concept of a pattern neatly covers all of the relevant facts for our purposes.

But what does this have to do with systems theory? Well, consider our reliable example, Conway’s Game of Life. A pattern in Life is a fairly obvious thing: a long line of living cells, for example, is a pattern. This brings us to the second part of the above quote: the elements of a pattern repeat, which should be obvious from the example. Of course you can have other patterns in Life; a checkerboard grid is another obvious one, and the relatively famous glider is a pattern too.
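If you want to see the glider’s repetition for yourself, here is a minimal Life implementation (a sketch of my own, not anything from the original post). It stores the live cells as a set of coordinates, and after four generations the glider comes back to the same shape, shifted one cell diagonally.

    from itertools import product

    def step(live):
        # One Game of Life generation over a set of live (x, y) cells.
        neighbours = {}
        for (x, y) in live:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbours[cell] = neighbours.get(cell, 0) + 1
        return {cell for cell, count in neighbours.items()
                if count == 3 or (count == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

    state = glider
    for _ in range(4):
        state = step(state)

    # After four steps the glider repeats its shape, translated by one cell
    # diagonally -- the repetition that makes it a pattern.
    assert state == {(x + 1, y + 1) for (x, y) in glider}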

It seems, on review, that I am doing a poor job of explaining patterns; however, I will leave the above for lack of any better ideas at the moment. Just rest assured that your intuitive knowledge of what a pattern is should be sufficient.

For the more mathematically inclined, a pattern can be more usefully defined in terms of its information-theoretical entropy (also known as Shannon entropy, after Claude Shannon, who introduced the measure). Technically, anything that is at all non-random (that is, at least partly predictable) is a pattern, though usually we are interested in patterns of particularly low entropy.
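For the curious, here is a quick sketch of my own applying the usual formula, H = −Σ p(x) · log₂ p(x), to the symbol frequencies of a string. Note that this simple estimate only looks at how often each symbol occurs, not at the order in which the symbols occur, so it is really an upper bound on what a smarter model of the string would report.

    from collections import Counter
    from math import log2

    def shannon_entropy(symbols):
        # Shannon entropy in bits per symbol, estimated from symbol frequencies.
        counts = Counter(symbols)
        total = len(symbols)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    print(shannon_entropy("aaaaaaaaaaaaaaab"))  # ~0.34 bits/symbol: highly predictable
    print(shannon_entropy("ababababababab"))    # 1.0 bits/symbol: two symbols, evenly mixed
    print(shannon_entropy("qwzjxkvbpmdfgh"))    # ~3.81 bits/symbol: no repetition at all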

Apologies, this has ended up rather incoherent. Hopefully next post will be better. Reading the links may help, if you’re into that sort of thing.