A Basic Primer on the Subject of Complexity

Introduction

The study of complexity can be a very enlightening endeavor.  Throughout the literature, the idea of complexity is often framed as something of a paradox.  Along these lines, Bonini’s Paradox, named after Stanford business professor Charles Bonini, describes the difficulty of constructing models or simulations that fully capture the workings of complex systems.

Many great thinkers have grappled with the idea of complexity.  In support of this notion, Albert Einstein is often quoted as saying: “The definition of genius is taking the complex and making it simple.”  Douglas Horton underscored this point when he said: “The art of simplicity is a puzzle of complexity.”

Digging a little deeper into the subject, one quickly learns that the concept of complexity is multidimensional in nature.  According to Wikipedia, “definitions of complexity often depend on the concept of a ‘system’— a set of parts or elements that have relationships among them differentiated from relationships with other elements outside the relational regime.”  To further this line of reasoning, we will take a look at complexity from a purely qualitative point of view and then consider the quantitative perspective.

The Qualitative Perspective

To launch our discussion, let’s first view complexity from a purely humanist perspective.  For example, when we look at the underlying structure of a tree, we tend to immediately notice the trunk and its progressively smaller branches.  Owing to this visual structure, many would say that such an image is not complicated, perhaps using the word “simple” to describe its character.

However, in some of the things we view in our world, like the overlapping forms that define a piece of modern art, there is no recognizable underlying structure or pattern (at either the macro or micro level).  As a result, our mind interprets the visual image as a seemingly never-ending set of randomly overlapping shapes, colors, and textures.  Thus, many would say that such an image is “complex,” like the example shown below.

The Quantitative Perspective

From the dictionary, we find that the term “complex” is described as anything that is “composed of many interconnected parts.”  In this case, a “part” is largely synonymous with the terms “element” and “node.”

Let’s now consider the idea of a network.  Again turning to the dictionary, we find that a network is simply “a system of interrelated elements.”  Owing to this definition, and only for the purposes of this essay, we will treat the terms “network” and “system” as synonyms.  Thus, we may now generalize and say that complexity is: “the state of a system or network that is defined or otherwise characterized by the intricate arrangements of connections between its many nodes.”

As a matter of abstraction, we can reduce the latter definition to two factors: nodes and connectors.  Just as the name would imply, a node is a terminal point, much like the hub of a wagon wheel, where the spokes are the connectors.  Just as the hub has a system role, so do the spokes.
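To make this abstraction concrete, a network can be sketched as nothing more than a collection of nodes and a collection of node-to-node connections.  The short Python sketch below uses the wagon-wheel (hub-and-spoke) picture; the names are illustrative only, not part of any standard library.

    # A network reduced to its two factors: nodes and connectors.
    nodes = {"hub", "rim_point_1", "rim_point_2", "rim_point_3"}

    # Each connector (spoke) is simply an unordered pair of nodes.
    connections = {
        frozenset({"hub", "rim_point_1"}),
        frozenset({"hub", "rim_point_2"}),
        frozenset({"hub", "rim_point_3"}),
    }

    print(len(nodes), len(connections))   # 4 nodes, 3 connectors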

For the sake of argument, suppose we have a network that consists of only two nodes (N = 2) and one connector (C = 1).  In this case, the complexity factor would be M = 3, since there are N = 2 nodes and C = 1 connector, or simply M = N + C = 2 + 1 = 3.  To better visualize this particular case, the reader is directed to the upper left-hand side of the graphic provided below.  From this graphic, it should be apparent that as the number of nodes increases, the number of possible connections also increases, but at a disproportionate rate.

The idea of forming small networks (sub-systems) into larger networks (systems) is not new.  This design strategy has been demonstrated to yield many benefits.  To extend our discussion, we must now give due consideration to Metcalfe’s Law.  Essentially, the counting behind this law gives the number of unique connections (C) in a network based on the total number of nodes (N) in the network.  The relationship can be expressed as C = N(N − 1)/2.
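As a rough illustration of both relationships, the Python sketch below computes C = N(N − 1)/2 and M = N + C for a handful of node counts; the function names are illustrative, not standard.  The disproportionate growth of connections relative to nodes is immediately visible.

    def unique_connections(n: int) -> int:
        """Number of unique pairwise connections among n nodes: C = n(n - 1) / 2."""
        return n * (n - 1) // 2

    def complexity_factor(n: int) -> int:
        """Complexity factor M = N + C."""
        return n + unique_connections(n)

    for n in (2, 3, 5, 10, 20):
        print(f"N = {n:2d}  C = {unique_connections(n):3d}  M = {complexity_factor(n):3d}")
    # N =  2  C =   1  M =   3
    # N =  3  C =   3  M =   6
    # N =  5  C =  10  M =  15
    # N = 10  C =  45  M =  55
    # N = 20  C = 190  M = 210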

Applying the Concepts

To better illustrate the ideas discussed thus far, consider the instance of N = 5 nodes.  Given this circumstance, we might logically ask: “If a network has 5 nodes, how many possible connections could exist?”  To answer this question, we can directly compute the number of connections (C) by way of the relation: C = N(N − 1)/2 = 5(5 − 1)/2 = 10.

However, when considering the total network, the number of nodes (N) and connections (C) must be added together.  Thus, for the case of N = 5 nodes, the complexity factor (M) would be given as: M = N + C = 5 + 10 = 15.  Understanding this point naturally raises the question: “Why is this idea so important to the field of Lean Six Sigma?”

The answer to this question is really quite simple.  The complexity factor (M) of a system or network should not automatically be equated with the total number of defect opportunities (O) associated with that system or network; the two would coincide only if each node and connector represented a single defect opportunity.  It should also be remembered that a large number of defect opportunities does not necessarily imply a complex state.  On the flip side, a high level of complexity would necessarily imply a very large number of defect opportunities (given that every node and connector has a system purpose).

As a sidebar note, consider a system where each node and connection sustains a performance capability ratio of Cp = .83.  This would be equivalent to Z = 2.5, or about 99.4% confidence.  Given this level of confidence for each node and connection, the graphic provided below displays the probability of zero system failures (ordinate) for a select range of M (abscissa).

An examination of this graphic reveals that, for the case of Cp = .83, the confidence of system success is reasonably robust to the various combinatorial possibilities of N and C, at least to the point where M = 20.  When M > 20, there is a rather sharp drop in the system confidence.
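As a rough sketch of the computation behind such a curve (assuming, as above, that each of the M nodes and connections independently succeeds with the roughly 99.4% confidence implied by Cp = .83, i.e., Z = 2.5), the probability of zero system failures is simply the per-element confidence raised to the power M.  Under these simplified assumptions the decline is smooth and gradual; the exact shape of the curve in the graphic depends on modeling details not reproduced here.

    import math

    def normal_cdf(z: float) -> float:
        """Standard normal CDF, computed with the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    cp = 2.5 / 3.0              # capability ratio, roughly 0.83
    z = 3.0 * cp                # sigma level, Z = 3 * Cp = 2.5
    p_element = normal_cdf(z)   # per-node / per-connection confidence, ~0.9938

    # Probability of zero system failures for a range of complexity factors (M),
    # assuming independent, identically capable nodes and connections.
    for m in (5, 10, 20, 50, 100):
        print(f"M = {m:3d}  P(zero failures) = {p_element ** m:.3f}")
    # M =   5  P(zero failures) = 0.969
    # M =  10  P(zero failures) = 0.940
    # M =  20  P(zero failures) = 0.883
    # M =  50  P(zero failures) = 0.732
    # M = 100  P(zero failures) = 0.536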

Again, the reader is reminded that this essay is a basic primer.  Owing to this, the various assumptions underlying the computations and other such analytical details are not introduced into the mix so as to keep the discussion light, simple, and focused.  As previously stated, the aim is to create basic awareness, not grapple with technical details.

Expanding the Example

Considering our example model (M = N + C = 5 + 10 = 15), let’s say the likelihood of each element (node) successfully performing its network role is 99%.  We will also say that each of the 10 connections maintains a performance confidence of 95%.  Thus, the likelihood of total mission success upon each engagement of the system would be (.99^5) × (.95^10) = .5694, or about 57%.  On the flip side, it can be said that, for each system engagement, there is a 100(1 − .5694) = 43.06% likelihood of mission failure.
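As a quick check of these figures, the mission-success probability is simply the product of all node and connection reliabilities.  The Python sketch below assumes all elements succeed or fail independently; the function name is illustrative.

    def mission_success(p_node: float, n_nodes: int,
                        p_conn: float, n_conns: int) -> float:
        """Probability that every node and every connection performs its role,
        assuming all elements succeed or fail independently."""
        return (p_node ** n_nodes) * (p_conn ** n_conns)

    p = mission_success(p_node=0.99, n_nodes=5, p_conn=0.95, n_conns=10)
    print(round(p, 4))   # 0.5694 -> about 57% success, 43% failure per engagement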

Let’s now further suppose the network must be successfully engaged four (4) times over a one-hour period to achieve a larger goal.  Given this aim, the likelihood of success would be estimated as .5694^4 = .1051, or about 10.5%.  This is to say the likelihood of achieving mission success and realizing the post-mission goal is about 10.5%.  Obviously, such odds would not be viewed in a favorable light in most circles.  So how can the odds of success be improved?
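Under the same hypothetical assumptions, the repeated-engagement case simply compounds the single-engagement probability:

    p_single = 0.5694           # per-engagement success from the example above
    engagements = 4             # four successful engagements required in the hour
    p_goal = p_single ** engagements
    print(round(p_goal, 4))     # 0.1051 -> roughly a 10.5% chance of reaching the larger goal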

Improving the System

Where a system or network is concerned, improving the odds of total mission success can be realized in a number of ways.  Owing to this, we might ask: “Is it possible to design (configure) the nodes, connections, and capabilities in such a way that the net effect will minimize the risk of mission failure while concurrently reducing the total operating costs?”

Another question might be: “Is it possible to make the likelihood of mission success robust to the impact of complexity?”  Of course, postulating such questions could continue on and on.  However, at day’s end, there are four primary options to reduce the consequential influences of complexity.

1) Design the system with fewer nodes and connections.

2) Configure the nodes and connections in such a way as to achieve a robust design.

3) Improve the capability and/or reliability of the system’s nodes and/or connections.

4) Some combination of options 1 through 3.
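To get a feel for how these options play out numerically, the sketch below reuses the mission-success idea from the earlier example and compares the baseline design (N = 5, C = 10) with two hypothetical alternatives: a leaner design with fewer elements (option 1) and a design built from more capable elements (option 3).  The alternative figures are illustrative assumptions only, not recommendations.

    def mission_success(p_node, n_nodes, p_conn, n_conns):
        """Probability of total mission success, assuming independent elements."""
        return (p_node ** n_nodes) * (p_conn ** n_conns)

    # Baseline from the running example: 5 nodes at 99%, 10 connections at 95%.
    baseline = mission_success(0.99, 5, 0.95, 10)

    # Option 1 (hypothetical): fewer nodes and connections, same element quality.
    fewer_elements = mission_success(0.99, 4, 0.95, 6)

    # Option 3 (hypothetical): same layout, more reliable nodes and connections.
    better_elements = mission_success(0.999, 5, 0.99, 10)

    for label, p in [("baseline", baseline),
                     ("option 1 (fewer elements)", fewer_elements),
                     ("option 3 (better elements)", better_elements)]:
        print(f"{label}: {p:.3f}")
    # baseline: 0.569
    # option 1 (fewer elements): 0.706
    # option 3 (better elements): 0.900

Both moves improve the single-engagement odds; which mix of options is appropriate is, of course, a design decision beyond the scope of this primer.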

Closing Remarks

As one might easily surmise, books could be written on how to realize the intents and aims of these four options.  For example, consider option 3.  By way of DMAIC, it is often possible to find optimal parameter settings for the nodes and connections of each sub-system, thereby decreasing the system-level probability of failure.

Per the caveat offered at the beginning of this short essay, our discussion of the topic is not intended to provide the reader with a “user’s manual” for how best to cope with complexity.  Rather, its purpose is to introduce some of the most elementary ideas on the topic.  In this way, readers can begin to wrap their minds around the many issues that could come into play when attempting to leverage and manage complexity during the course of product design.
