Monthly Archives: January 2014

Other Opinions #13: Stopping Killer Robots

http://bos.sagepub.com/content/70/1/32.full

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.


Patterns and Entropy

Our next foray into systems theory involves the definition of patterns and the study of entropy (in the information-theoretical sense). Don’t worry too much about the math; I’m going to work with a simple, intuitive version for the most part, though if you have a background in computing or mathematics there are plenty of neat nooks and crannies to explore.

For a starting point, I will selectively quote Wikipedia’s opening paragraph on patterns (at time of writing):

A pattern, …is a discernible regularity… As such, the elements of a pattern repeat in a predictable manner.

I’ve snipped out the irrelevant bits, so the above definition is relatively meaty and covers the important points. First, a pattern is a discernible regularity. What does that mean? Well, unfortunately not a whole lot, unless you’re well versed in automata theory and recognizability. But it really doesn’t matter, since your intuitive concept of a pattern covers all of the relevant ground for our purposes.

But what does this have to do with systems theory? Well, consider our reliable example, Conway’s Game of Life. A pattern in Life is a fairly obvious thing: a long line of living cells, for example. This brings us to the second part of the quote above: the elements of a pattern repeat. This should be obvious from the example. Of course you can have other patterns in Life; a checkerboard grid is another obvious pattern, and so is the relatively famous glider.
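As a rough illustration (using my own ad hoc encoding of a Life board as a set of live-cell coordinates, nothing standard), the first two of those patterns can be written down in a few lines of Python:

```python
# Two simple Life patterns, encoding a board as a set of live-cell
# coordinates (an encoding chosen for convenience, not anything official):
line = {(x, 0) for x in range(10)}  # a long line of living cells
checkerboard = {(x, y) for x in range(10) for y in range(10)
                if (x + y) % 2 == 0}
# In both cases membership follows a simple rule, so knowing part of the
# pattern lets you predict the rest -- the "discernible regularity".
```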

It seems, on review, that I am doing a poor job of explaining patterns; however, I will leave the above for lack of any better ideas at the moment. Just rest assured that your intuitive knowledge of what a pattern is should be sufficient.

For the more mathematically inclined, a pattern can be more usefully defined in terms of its information-theoretical entropy (also known as Shannon entropy, after its inventor Claude Shannon). Technically anything that is at all non-random (i.e. predictable) is a pattern, though usually we are interested in patterns of particularly low entropy.
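If you want to play with this, here is a minimal sketch in Python of the standard formula H = Σ p·log₂(1/p), applied to a few made-up strings. Note that this per-symbol frequency version ignores the order of symbols, which is a simplification of what “predictable” really means:

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Shannon entropy in bits per symbol: H = sum(p * log2(1/p))
    over the frequency p of each distinct symbol in s."""
    counts = Counter(s)
    n = len(s)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))  # 0.0 -- one symbol, perfectly predictable
print(shannon_entropy("abababab"))  # 1.0 -- two symbols, equally frequent
print(shannon_entropy("hq3x!kz9"))  # 3.0 -- eight distinct symbols, no repeats
```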

Apologies, this has ended up rather incoherent. Hopefully the next post will be better. Reading the links may help, if you’re into that sort of thing.

Subsystems and Closure

Subsystems

With the basic definition of a system in hand, we can also define a subsystem. To anyone who has worked with mathematical sets before, the definition of a subsystem should be quite intuitive.

Given systems S and S’, we say S’ is a subsystem of S if:

  • They have the same property set.
  • They have the same rule set.
  • The element set of S’ is a subset of the element set of S.

We can likewise define a proper subsystem: S’ is a proper subsystem of S when its element set is a proper subset of the element set of S.
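For the concretely minded, here is a minimal sketch of these definitions in Python. The System structure, and the idea of identifying rules and properties by name, are my own inventions for illustration, not standard notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class System:
    elements: frozenset    # the element set
    properties: frozenset  # the property set
    rules: frozenset       # the rule set (identified by name here)

def is_subsystem(sub, sup):
    """True when sub is a subsystem of sup, per the definition above."""
    return (sub.properties == sup.properties
            and sub.rules == sup.rules
            and sub.elements <= sup.elements)  # subset of elements

def is_proper_subsystem(sub, sup):
    """As above, but requiring a *proper* subset of elements."""
    return is_subsystem(sub, sup) and sub.elements < sup.elements

life = System(frozenset({(0, 0), (1, 0), (2, 0)}),
              frozenset({"alive"}), frozenset({"life-rules"}))
life_part = System(frozenset({(0, 0), (1, 0)}),
                   frozenset({"alive"}), frozenset({"life-rules"}))
print(is_subsystem(life_part, life))         # True
print(is_proper_subsystem(life_part, life))  # True
```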

Closure

The concept of closure should also be familiar to those with a mathematical background, though its application here probably won’t be quite as intuitive. We consider a system (or, equivalently, a subsystem) closed if the application of its rules is determined only by information within the system or by true randomness.

This probably isn’t as obvious, but hopefully the example of Conway’s Game of Life will make it clearer. An instance of Life is a system, as we have already seen, and is in fact a closed system, since there is no information outside the set of elements that determines its behaviour at each step. We can, however, take a subset of the cells of an instance of Life, for example a 10×10 square of them. Together with the property set and rule set from normal Life, this forms a subsystem which is not closed.

To see why our subsystem is not closed, consider the cells around the edges of that 10×10 square. Their behaviour is determined by rules referring to their eight neighbours, but not all of those neighbours actually lie within the square. Since the edge cells’ behaviours are determined in part by cells (elements) not in the 10×10 square (the element set of our subsystem), the subsystem is not closed.
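A few lines of Python make the edge-cell problem explicit. This sketch (reusing the set-of-coordinates encoding from earlier) simply lists which neighbours of one edge cell fall outside the subsystem’s element set:

```python
def neighbours(x, y):
    """The eight Moore neighbours of a cell in Life."""
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

# Element set of the subsystem: a 10x10 square of cells.
square = {(x, y) for x in range(10) for y in range(10)}

# An edge cell's rule consults neighbours outside the element set,
# so the subsystem is not closed.
outside = [n for n in neighbours(0, 5) if n not in square]
print(outside)  # [(-1, 4), (-1, 5), (-1, 6)] -- three neighbours lie outside
```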

Systems: Determinism and Randomness

In my basic definition of a system, I spoke rather vaguely of rules that define how the elements of a system change over time. The example I gave was Conway’s Game of Life, which contains a set of four rather simple rules. The rules in Life are all deterministic; that is to say, they define exactly what is to happen in a given scenario. Not all rules in all systems are like that.
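For concreteness, here is a minimal sketch of Life’s deterministic update in Python, using the common compressed form of the four rules (a cell is alive next step iff it has exactly three live neighbours, or is currently alive with exactly two), and the same set-of-coordinates board encoding as above:

```python
from collections import Counter

def life_step(live):
    """One deterministic step of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of living cells."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next step iff it has 3 live neighbours,
    # or it is alive now and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Given the same starting state, the outcome is always the same:
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(blinker) == life_step(blinker)  # determinism
print(life_step(blinker))  # {(1, -1), (1, 0), (1, 1)} -- the blinker flips
```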

Some systems may contain random, or probabilistic, rules. A trivial example of this might be the rule (for some moving thing) “Go either left or right at random, with equal probability”. In this scenario, the rule does not define exactly what will happen; the thing may go left or right. Importantly, two instances of this system that start in the same state may turn out entirely differently (since one could go left while the other goes right), which is not possible in a deterministic system.
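Here is that trivial example as a sketch in Python; random_step is a made-up name for illustration:

```python
import random

def random_step(position):
    """A probabilistic rule: move left or right with equal probability."""
    return position + random.choice([-1, +1])

# Two instances starting in the same state can diverge:
a = random_step(0)
b = random_step(0)
print(a, b)  # may print e.g. "-1 1" -- identical starts, different outcomes
```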

As such, a system is deterministic if and only if all of its rules are deterministic. A single probabilistic rule is enough to make the entire system behave probabilistically, even if all of the other rules are deterministic.

Interesting tangent: There is some debate in philosophy about whether the universe is truly deterministic or random, and while quantum physics currently leans in the direction of random, that is not (yet) relevant to these definitions.