Some of the responses to my post about the need for something other than a set in large systems theory led me to think about how unmathematical the theory tends to be. It might seem logical to say that if math struggles so much to explain large systems theory then it’s not really a mathematical concept. But I think most mathematicians would disagree with that conclusion.
Either the theory is poorly defined (always a strong possibility with any newly proposed theory) or the theory demands a new set of mathematical tools. Many theories have led to the creation of such tools.
We use math to express our understanding of how to manage things in a logical fashion. Take a simple triangular shape. You could define that triangle as a collection of simple equations, a series of numbers, or as a coordinate system. The triangle might be a point in a multi-dimensional space or it might be a surface area.
A triangular shape is not merely a singular thing. It is potentially many things. The potential is realized as a specific thing depending on what you’re doing mathematically.
The same principle holds true for numbers. Take the symbol c as an example. In simple algebra, c is merely a variable. In physics, c is recognized as “the speed of light”. But in reality, c is not the speed of light. It’s the maximum speed of anything in the universe, if we define “speed” to mean the rate of change in a point’s position within a framework or coordinate system representing space-time.
For convenience we say that c = (approximately) 300,000 kilometers per second. In other words, to an observer, anything that traverses about 300,000 kilometers in 1 second of the observer’s measurement of time is moving at the speed of light. But that isn’t necessarily true.
We know that light can be slowed down. You can write a lengthy explanation of how to alter the speed of light or you can just say that if we obstruct particles of light (photons) they’ll move much more slowly. But this is a semantic trick. The limit of possible speed in the universe hasn’t changed just because you can send light particles through a thick soup and slow their progress. So the speed of light is not really “the speed of light”.
And that sounds exactly like the kind of contradiction that large systems theory poses for people. It sounds like a semantic trick.
Large Systems Theory Is Not about Semantics
To be a viable theory, any proposition must use reliable, comprehensible, persistent definitions. You can’t say, “well, the speed of light is X over here and it’s C over there.” Now, physicists have wondered for generations if this isn’t literally possible. They keep trying to break Einstein’s theory of relativity by looking for different speeds of light. Although they’ve found conditions where light particles’ velocity can be altered, they know that these local phenomena don’t change the conceptual thing that we mean by the speed of light. So far, that has remained constant.
But there are ways to get around these simple limits in physics. Large Systems Theory says, for example, that you cannot measure the large system. In this sense, it just means you cannot quantify every aspect of the large system. But you can redefine the system to be something else that is easily quantifiable.
Exactly how large is the universe?
That’s a trick question to the large systems theorist because it implies there is a way to quantify the universe. In what terms are you quantifying it?
There is only 1 known universe. That’s a simple number, easy to understand, and it helps us to visualize the universe in a very human-acceptable way. It’s also a rather useless quantification unless we can theorize there are other universes. Even imaginary universes matter in this context because we can count imaginary universes as well as real (verified) universes.
In other words, for every large system there is a reciprocal system that redefines the large system. We use this principle in computer science all the time. And that’s not based on some complex mathematical formula, either. The COBOL programming language was one of the first high-level languages to implement formal redefinition.
You might define a variable (a section of physical memory) in COBOL to consist of 4 bytes of data (it holds 4 characters from a standard character encoding like EBCDIC or ASCII) and to be of a certain type, like an integer. In COBOL you can only perform integer operations on integer variables. You can’t rearrange the positions of the data in the 4 bytes.
But you can REDEFINE your integer variable to be a string of 4 characters. Now you can move your byte-sized data around inside that 4-byte structure. And you can use your original definition of the variable as an integer to perform (only) integer operations on the re-arranged data.
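COBOL syntax aside, the effect of REDEFINES can be sketched in Python with the standard struct module. This is an analogy of my own, not actual COBOL: the same 4 bytes of memory are read once through an integer definition and once through a character definition, rearranged, and then read through the integer definition again.

```python
import struct

# A 4-byte region holding a 32-bit integer (big-endian, as on classic mainframes).
number = 1094861636                       # its bytes happen to spell "ABCD"
raw = struct.pack(">i", number)           # the 4 bytes read as an integer
chars = list(raw.decode("ascii"))         # the same 4 bytes read as characters

# Rearrange the byte-sized pieces inside the 4-byte structure...
chars.reverse()

# ...then read the region through the original integer definition again.
renumber = struct.unpack(">i", "".join(chars).encode("ascii"))[0]
```

The underlying memory never changed type; only the definition we read it through did.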
If you think this kind of logic must lead to unpredictable results, it does. The REDEFINES command was controversial when introduced and some of my programming instructors forbade us from using it. So I used it anyway, but that’s beside the point.
Redefining Data Requires Special Rules
There are two ways to deal with data retyping, as it’s called, in programming languages. You either ask the programmers not to do it (and they will eventually ignore your request) or you force them to use a programming language that doesn’t allow it.
The latter solution is known as strict typing. In a strictly typed programming language, when you create a variable you must assign it a specific type (or it defaults to one) and you can only perform certain operations on that variable, never redefining it in any way for any reason.
Naturally, programmers find ways to get around strict typing. You might write an algorithm that deduces which ASCII characters the integer in a 16-byte variable represents, insert those characters into a text or string variable, move the characters around, deduce what integer value the new combination represents, and then assign that deduced value back to the original integer variable.
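That round trip can be sketched in a few lines of Python. The variable names and the particular rearrangement (sorting the characters) are my own illustration; the point is that the integer variable is never redefined, yet its value ends up reinterpreted anyway.

```python
# integer -> characters -> rearranged characters -> integer,
# all without ever "redefining" the variable's type
n = int.from_bytes(b"WORD", "big")                # an integer whose bytes are ASCII
text = n.to_bytes(4, "big").decode("ascii")       # deduce the characters it represents
shuffled = "".join(sorted(text))                  # move the characters around
n = int.from_bytes(shuffled.encode("ascii"), "big")  # assign the deduced value back
```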
Yes, people really do write this kind of logic. They always have a justification and I’m not chasing that rabbit.
What this means is that you can always find another way to redefine your data, even if you have to go outside the limits of the rules of logic that you’re working with. Which leads us back to the theorem of reciprocity in large systems as I stated it above.
Large Systems Theory Demands that Everything Be Redefined
Redefinition is the basis of Large Systems Theory. This all began with a simple formula I deduced from analyzing search engine results.
1 = Ny + Ty + Oy
This formula or equation describes a probability distribution for Naturality which shows that in every system the properties of an initial natural state decline over time. As Ny decreases, either Oy or Ty (or both) increases. If N is everything natural in a system, then O is everything that appears to be natural but is not (it’s a measurement of opacity) and T is everything in the system that is clearly (transparently) not natural. In other words, a system consists of things that are natural and things that are not.
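As a toy illustration of the equation’s mechanics, here is a short Python simulation. The 10% decay rate and the even split of the loss between O and T are my assumptions, chosen only to make the arithmetic visible; the theory itself doesn’t specify them.

```python
# Toy simulation of the Naturality Equation: 1 = N + O + T.
# Assumption (mine, not the theory's): naturality decays 10% per step,
# and the loss splits evenly between opaque (O) and transparent (T) content.
N, O, T = 1.0, 0.0, 0.0
for step in range(5):
    loss = 0.10 * N
    N -= loss
    O += loss / 2
    T += loss / 2
    # the distribution always sums to 1, however the shares shift
    assert abs((N + O + T) - 1.0) < 1e-9
```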
In search engine optimization the results for a query are natural if they were selected by the search engine without intervention by the Websites from which the results were taken. An SEO specialist intervenes by changing a page’s content, a Website’s navigation, or the backlink profile of the page.
Natural search results are what the search engine would show you if no one were trying to get a better ranking.
Opaque search results are what the search engine shows you if someone is trying to get a better ranking for the query, either in a sneaky way (trying to look natural) or by unintended consequence. You may try to rank a page for “top 10 books on number theory” and accidentally improve your ranking for “how to compute complex numbers”.
Transparent search results are clearly and obviously contrived to rank for the given query. The search engines don’t object to transparency as long as you play by their rules.
The Naturality Equation applies to every kind of system. I have yet to find a system to which it doesn’t apply.
You can alter the equation by differentiating between open systems and closed systems.
An open system changes its composition, structure, or order on the basis of outside input. This could be the result of someone redefining a variable in a programming language or of God injecting new matter or laws of physics into the universe.
A closed system changes its composition, structure, or order solely on the basis of its own natural definition. That doesn’t mean that if you prove the universe is a closed system you have proven there is no God because God could simply have said, let there be a closed system universe. Things can still exist (figuratively speaking) outside of closed systems.
Any system (something that can be measured) can be classified as open or closed.
You cannot classify a large system (because you cannot measure it). You can guess that the large system may be open or closed and you can treat the large system as if it is open or closed. But you cannot prove that the system is open or closed until you measure it.
The Theorem of Naturality Says Every System Changes
The theorem behind the Naturality Equation says that a system changes over time. Why should this happen? Perhaps because of something physicists call entropy.
You can talk about a static system that never changes but you’re really only talking about a single state (at time T) of that system. If you express the unchanging system as a vector of states across a timeline then 1 aspect of that system changes: the points on the timeline. You cannot not change a system no matter how hard you try.
And that means Naturality declines as a system changes.
But this is an inconvenient rule because it means that your attempts to measure systems become more complicated as time goes on. And yet our attempts to measure the universe have become more sophisticated as time has passed. We add knowledge to knowledge and derive new knowledge from our combined knowledge – inferring facts or probable facts from facts we’ve observed and verified.
Put another way, the theorem says that a system’s naturality degrades from time T0 to time Tn until there is no more naturality. The value of n could be finite or infinite but it’s too much trouble to look for meaning in n so instead we just use arbitrary time slices.
In other words, you assign an arbitrary value to n and then start over when you reach that point in the chronological progression, redefining naturality at time Tn+1. This new Tn+1 point becomes a new T0.
Here is a real-world example of such a transition: The years in the western calendar from 1901 to 2000 comprise the 20th century. The 21st century began with the year 2001.
If you want a physical example, the world’s total dry land surface area has declined by about 40% since sometime around 8000 BCE (about 10,000 years ago). What we think of as the total dry surface area of the Earth today is just a subset of the total dry surface area of the Earth 10,000 years ago.
If current theory accurately describes how the universe has evolved from a single point in space time about 13.8 billion years ago, then the history of the universe illustrates how naturality declines over time. There was a time when there were no elements in the universe. Then there was a time when all the elements were hydrogen and helium. Then there was a time when those elements began pooling together to create gravity wells that became stars, black holes, and other things. And the progression has continued to this day and will continue for as many years as theory suggests the universe will continue to exist.
How Do You Mathematically Describe a System?
Mathematics has a huge semantics problem. It reuses many words and phrases because it’s so hard to come up with good new words and phrases. We add descriptive qualifiers to things. We don’t just have sets in Set Theory – we have finite sets and infinite sets and singleton sets and power sets and so on.
So that is why I proposed the ugly, horrible name pluristate set for Large Systems theory. You must be careful not to reuse names to describe things, even if the things themselves can be redescribed in other terms. You can take a set and combine it with some special operations and call that an algebra. But a programming language would call it an object class or something like that.
A toy maker would call it the rules of the game.
For every definitive definition there seems to be an endless array of alternative definitions.
To define a system in mathematical terms – such that you could define formulas, functions, equations, or whatever – you must define the components of the system in mathematical terms.
You can’t call those components arrays or matrices because those are mathematical concepts that have been borrowed to death. But you can sort of define things in a system as if they were arrays or matrices. Yes, I know some people hate it when I say “sort of” but if anything should be obvious (or transparent) by now, it’s that Large Systems Theory is a theory about ambiguities in data properties.
In Large Systems Theory sort of is a perfectly natural expression. It’s sort of like limits in math, where you can define functions that produce a series of results that don’t ever reach a specific value. Sort of is a more intuitively accurate phrase than limit of is. Besides which, when you start talking about morphality in Large Systems Theory people begin to think of morphisms (functions).
There are things called morphing sets, which is a concept I did not need to devise for Large Systems Theory. I wrote about morphing sets on Science 2.0 in 2012. Large Systems Theory demands a subset or type of morphing sets that I call Chronocity Sets. I define chronocity as “the measurement of the distance in Time between where we are (Now) and where we were at some point in the past (or where we will be at some time in the future) with respect to a specific object, or document.”
Building on that, “a Chronocity Set consists of a collection of objects together with a vector of discrete states for each object as measured from Time 0 (the initial state of the set) to Time S (the final state of the set).”
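A minimal sketch of that definition as a Python data structure. The class name, method names, and example states are my own illustration, not part of the theory’s formal vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class ChronocitySet:
    # each object carries a vector of discrete states from Time 0 to Time S
    states: dict[str, list[object]] = field(default_factory=dict)

    def record(self, obj: str, state: object) -> None:
        self.states.setdefault(obj, []).append(state)

    def chronocity(self, obj: str) -> int:
        # distance in Time between Now (the latest state) and Time 0
        return len(self.states[obj]) - 1

cs = ChronocitySet()
cs.record("doc", "draft")
cs.record("doc", "edited")
cs.record("doc", "published")
# chronocity("doc") is now 2: two discrete steps between Time 0 and Now
```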
Although morphing/chronocity sets are useful contexts for analysis, they don’t accurately describe a system. That’s not what I intended them for.
A system is built from atomic or pseudo-atomic components. A pseudo-atomic component is something that has coherency and definition or individuality within the system, but which is itself composed of atomic or pseudo-atomic components.
Everything within the system reduces to atomic components at some point but everything within the system may have its own local timeline. If that sounds a lot like Einstein’s theory of relativity, that’s because it is Einstein’s theory of relativity – redefined in terms of Large Systems Theory. You’re allowed to redefine everything in Large Systems Theory.
It’s sort of like telling everyone to speak their own language as long as they are talking about the same things – in their own defined contexts, of course.
Maybe a better metaphor is that Large Systems Theory explains why three blind men perceive an elephant differently – the elephant is a large system and none of the blind men can fully measure it. And if you cannot fully measure a system then you cannot accurately describe it.
But as one of the blind men you must still deal with the elephant in the room, so you do the best you can with what you know. And that is why sort of works so well in Large Systems Theory. If we could be more precise we’d be talking about things in a limited context.
So one must ask, are there limits in large systems? (I think the answer is always “yes” but proving that is not so easy.)
A System Is A Set of Components, Sort of
If you don’t treat the components as numbers or mathematical variables then you can sort of use set theory to describe systems. But if you’re using set theory then it follows that you can define functions (operations) on your set. And you cannot call them pseudo-whatever. If you define the system as a set then it must have real functions and operations.
That’s okay because we don’t have to define a system as a set, but it’s convenient to think of a system in a set-like way. The thing about a system is that it changes. Hence, if you define a system as a set you must include the morphality of the set in everything you do.
Set morphality consists of or describes all the changes of the set. Now, I could drag this definition down into the depths of the differences between permutations and combinations but let’s not go there. We’re not dealing with number theory here. We’re only concerned with a set of states pertaining to a system when it is defined as a set.
In other words, to define a system as a set you need to define at least two sets. And so far we have a better understanding of the morphality of the set that describes the system than we do of that set itself. We’ll call this thing the Morphality Set of the System Set.
You also need a set that describes all the possible components of (the set that describes the system). So let’s call that the Component Set of the System Set.
So, to describe a system in terms of set theory you need three sets:
- The System Set
- The Morphality Set
- The Component Set
The Component Set is in one way a subset of the System Set, but it’s an aspect or property of the System Set, not merely a subset. The Component Set doesn’t serve any purpose by itself. It doesn’t describe a smaller system (because it would need its own Morphality and Component sets to do that). So these three special sets are atomic set descriptors of a system. You cannot have one without the others. They have a symbiotic relationship with each other.
And yet so far all we have determined is that the System Set contains a number of things defined in the Component Set and that the System Set experiences a number of changes (or morphs) defined by the Morphality Set.
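Here is one way to sketch those three sets in Python. It assumes the components and morphs can be fully enumerated, which, by the distinction that follows, makes this a (small) system; a large system is exactly what a sketch like this cannot capture. All names are my own illustration.

```python
# The three atomic set descriptors of a (small) system
component_set = {"a", "b", "c"}                   # all possible components
system_set = {"a", "b"}                           # what the system contains now
morphality_set = {("add", "c"), ("remove", "a")}  # changes the system can undergo

def morph(system: set, change: tuple) -> set:
    """Apply one change from the Morphality Set, returning the new state."""
    op, item = change
    return system | {item} if op == "add" else system - {item}

# The System Set draws only on things defined in the Component Set...
assert system_set <= component_set
# ...and experiences only the changes defined in the Morphality Set.
state = morph(system_set, ("add", "c"))
```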
In the simplest terms, if you can count all the things in the System Set then it’s a (small) system. Otherwise it’s a large system.
Large systems are not necessarily infinite. They are unmeasurable.
What seems to confuse some people is the assumption that if you cannot measure a set of definable quantities, its members must be infinite in number. But that isn’t how Large Systems Theory works. The Large System is probably finite in at least one perspective, which means it’s not infinite.
We say the universe is infinite, but since we can’t measure it accurately we don’t know that. The universe only needs to be one unit of measurement larger than our ability to measure it in order to be a large system. Since we’re blind to that last unit of measurement’s worth of the universe’s composition, we cannot say anything definitive about its real size.
Is There Order within a System Set?
It’s easy enough to say there is order to the universe. But what does that mean mathematically? We can also say there is order to the Internet. But, again, what does that mean mathematically?
Set theory fails us because it’s too dependent upon Order Theory, which defines order as a progression of ordinal valuations. In other words, we use Order Theory to decide that A is less than B.
Is Pluto less than Jupiter? Is the Milky Way less than Andromeda?
You can quantify these things in terms of mass, number of molecules, surface temperatures, radii in meters, etc. but none of that has anything to do with answering the question, “is there order within the system set that describes the universe?”
If there is anything like order to the universe it’s not a progressive or ordinal order. We might be able to identify derivative order structures based on progressions (in fact, we have, when you think about atoms grouping together as molecules, molecules grouping together as materials, etc.).
The question of ordinality is important to Large Systems Theory because if you can describe the order of a System Set you can predict what that set should look like. The prediction will be precise for a (small) system and approximate for a large system.
A statement of ordinality for a large system tells you what it sort of looks like. It’s ambiguous and there is a limit to how precise it can or will be until you reach the point where you’re no longer measuring a large system but instead measuring a (small) system.
That means limits are ambiguous until resolved in this type of theory. Any mathematician will tell you that is nonsense according to the standard use of limits in math. But limits in Large Systems Theory are only sort of like limits in math.
It’s not that people haven’t been trying to describe large systems mathematically. We have developed whole systems of mathematical theory for doing just that including calculus and ring theory and things.
The problem is that math tends to squeeze everything down into precise things. One of the beautiful things about math is that if you cannot quantify something you can usually make up a way to quantify it.
It seems it should be possible to represent large systems mathematically. But I haven’t come across any mathematical theories (and I’ve only explored a few, understanding even fewer of them adequately) that appear (to me) to do this adequately.
That is because sort of isn’t a very comfortable concept for mathematicians. They do have theories that deal with ambiguities but they look for boundaries and limits. And the problem with applying that kind of thinking to Large Systems Theory is that it completely misses the point about what a large system is. We don’t know what its boundaries and limits are or should be, so how can we define boundaries and limits for a large system?
We can only sort of do that.
Related Articles about Large Systems Theory
- How to Almost Measure a Large System on this blog
- Large Systems Theory for Web Marketers and Analysts on SEO Theory
- Why You Will Never Be Able to ‘See’ A Large System on Interwebometry at Science 2.0
- Can We Prove That A Large System Is Self-organizing? on Interwebometry at Science 2.0