Grandma's Deck of Cards - An Alternate Reality

My grandmother was a woman of simple ideas and frugal habits.  She was a woman with much to do and a grandson to amuse, and one of the amusements was ‘snap’.  Snap is played with a well-shuffled deck of standard playing cards.  Each player holds half of the deck face down in their hand, and the players take turns to lay their top card, face up, on the discard pile in front of them.  When a reveal results in two cards of the same face value being displayed, the first player to call ‘snap’ claims both discard piles and adds them to their hand.  Play continues until one player runs out of cards.  The suit of a card is irrelevant.

I said at the outset that my grandmother was a frugal woman, and playing cards were “expensive”.  For this reason we played with a deck consisting of the remnants of several ancient decks.  Who knows how many cards there were in total, or indeed, how many cards were present of any given face value.  This was an irrelevance to her and, because I didn’t know any better, an irrelevance to me also.  We will return to the alternate reality of Grandma’s deck of cards in due course.

Games of all sorts are models of reality.  The ancient precursor to Snakes and Ladders had the very serious purpose of teaching the player that massive advances could easily be followed by spectacular reverses, and that the outcome was never a matter of certainty.  Games, from draughts to video games, are all about modelling, or simulating, an arena in which experience can be gained that may then be applied to real life.  The player at draughts pits their wits against another intellect in a controlled environment.  The video game player does likewise.  The reason games lose their attraction as we mature is twofold: we now have greater stores of genuine experience to draw on, and this experience increasingly alerts us to the fact that the game itself is a flawed simulation of reality.

What games do accomplish is an early realisation of causality.  By playing games a child is encouraged to accept the notion of an uncertain outcome, and to plan for victory.  These life lessons remain with us, because we have an enduring urge to explain causality.  We ask ourselves: ‘Which of the events of the past were instrumental in deciding the present?’; ‘Can dwellers in the present really influence their destiny?’; ‘Are the events of the past a guide to the future?’  These questions have exercised humans ever since there have been humans to ask them, and we continue to attempt to explain the reasons for existence and circumstance on both an individual and a global level.

From the beginning it was obvious that some events were predictable and reliable; in time they became known as laws of nature and were eventually measured, delineated and codified.  Laws of motion, heredity, thermodynamics, gravity and relativity occupy the same place in our consciousness as did the notion of a regular calendar and the inevitability of the seasons in the minds of the ancients.

For the sake of this discussion such universal constants are labelled Necessities.  In the consideration of causality they are termed such because certain events will, of necessity, produce a predictable outcome.  As far as we are aware, necessities are universal constants.  We have no reason to believe that the rules that hold true on our planet are in any way different from those to be found elsewhere in the universe.  It is true that reductions in scale may result in the discovery of unexpected necessities, such as the bizarre properties of the quantum realm, but it seems to be a given that even bizarre necessities are regular necessities, in that their application is universal.

As far as causality is concerned, the role of necessity is likely to be the primary factor in explaining phenomena.  Some schools of philosophy even hold that necessity alone can explain causality, without recourse to any other explanation.  This belief is called Determinism; it holds that the universe has been set on its path and that the configuration of particles and forces in which we find ourselves today is the one inevitable outcome.  Over time this world view has been augmented by a new perspective: after all, the sun may rise every day, but whether it will be seen, or be obscured by the elements, is a question that cannot be answered with absolute certainty.

Some may argue that the inability to predict the weather is due to a failure of measurement; indeed, meteorological departments the world over still seek to increase the accuracy of their predictions by increasing the volume of data gathered.  Many are now resigned to the limits of such an approach, however, and forecasts are now commonly presented in terms of a ‘percentage chance of rain’ as opposed to a bald prediction.

A new modifier has therefore been established in the struggle to define causality, Chance.  While necessities are as reliable as ever, it is often impossible to say with 100% certainty that a certain outcome will follow a given cause.  For example, the heat of the sun on the ocean will result in greater atmospheric humidity and in differences of atmospheric pressure - that is to say, wind will blow.  But where will it blow, and when will it drop its rain?  These are questions for the probabilist.

We are aware that events of very little probability occur all the time.  For example, the odds against winning a lottery are so great that a person who buys a ticket in the expectation of winning is regarded as foolish.  However, we also accept that a winning ticket exists.  Therefore, while the chance of a given individual winning may be remote, the chance that there will be a winner is held to be a certainty - a probability of 1.
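The arithmetic behind this distinction is worth making explicit.  A minimal sketch in Python, assuming a hypothetical 6-from-49 draw and 20 million independently chosen tickets (illustrative numbers only):

```python
from math import comb

# Odds of one specific ticket winning a hypothetical 6-from-49 draw.
combinations = comb(49, 6)            # 13,983,816 possible draws
p_single = 1 / combinations

# Probability that at least ONE of n independently chosen tickets wins.
n_tickets = 20_000_000
p_someone = 1 - (1 - p_single) ** n_tickets

print(f"one ticket: 1 in {combinations:,}")
print(f"chance somebody wins with {n_tickets:,} tickets: {p_someone:.0%}")
```

Under these assumptions a single ticket faces odds of about 1 in 14 million, yet the chance that somebody, somewhere, holds a winner is roughly three in four - remote for the individual, commonplace for the crowd.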

Whenever we face a situation in which the possibility of a given outcome is less than certain, we are obliged to calculate the chance of the outcome occurring.  To accomplish this it is necessary to construct a model or a Simulation.  Modern weather forecasting consists entirely of constructing and refining rolling simulations.  The role of the weather forecaster is to audit the output of the simulations and decide, by reference to his/her personal experience, how much weight to attach to the predictions that are made.  The need for a human forecaster illustrates that, even when based on current data, the result of a simulation cannot be regarded as a prediction of the future; it is, at best, a guide to the chance of a certain outcome.
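The ‘percentage chance of rain’ is exactly the kind of figure such a rolling simulation produces: run the model many times and count the runs in which it rains.  A toy Monte Carlo sketch, with invented thresholds standing in for real meteorology:

```python
import random

random.seed(1)

def one_run():
    """One toy simulation run: 'rain' occurs when high humidity and a
    pressure disturbance coincide.  The ranges and thresholds are
    invented for illustration, not drawn from any real model."""
    humidity = random.uniform(0.4, 1.0)
    pressure_drop = random.uniform(0.0, 1.0)
    return humidity > 0.7 and pressure_drop > 0.5

runs = 100_000
rainy = sum(one_run() for _ in range(runs))
print(f"chance of rain: {rainy / runs:.0%}")
```

With these made-up inputs roughly one run in four produces rain, so the forecast would read ‘a 25% chance of rain’ - a statement about the behaviour of the model, not a mirror of the sky.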

Is the combination of Necessity and Chance sufficient to explain causality?  Most would add one more modifier, Agency.  In the example of the lottery mentioned above, the one unlikely winner may still not claim his prize if the pot is embezzled by an executive of the lottery company.  Dyed-in-the-wool determinists may claim that the impulse to steal the pot was hard-wired from the moment of the big bang, but society does not think that way.  Indeed it is difficult to imagine the determinist who, having been the victim of violent crime, would not desire that the perpetrator be punished.

Society has a very sophisticated intuition of agency, in that no investigation is made into the motivations of a killer bear, lion or shark; it is merely destroyed.  On the other hand, consideration of the life history of a murderer may well modify the severity of justice that society will find satisfactory.  Indeed, justice itself may be defined as the awarding of appropriate consequence in response to the improper exercise of agency.

Determination of the weight to assign to necessity, chance and agency in explaining the causality of a particular event is often subjective.  Throughout history agency, in the person of a creator/first cause, has been taken as a given.  Any argument has been centred on the question of whether chance exists, and whether the exercise of human agency is anything more than an illusion.  The modern vogue, however, is to re-order the list of causality with agency, that of humans, as the last and least influential modifier - from the perspective of a lottery ticket buyer this is perfectly understandable.

Having established the actors in our drama, it is time to turn to the question of assessing the suitability of a simulation/model.  Why is this of concern?  It is of concern because we base the decisions we make - that is, the expression of our own agency - on the output of simulations created by the agency of others.  Calculations of chance/probability increasingly occupy the primary role in decision making and opinion forming.  Why is this the case?

The relegation of agency to the position of most insignificant modifier has discounted reliance on divine providence, and the restriction of agency to humans that have no power to influence necessity has left only one modifier that offers any chance of control - chance.  Guided by an accurate estimation of chance/probability humanity hopes to exercise its agency to choose the fork in the road that is most favourable.  For this approach to have any chance of success, the accuracy of the model/simulation is of paramount importance.

The other difficulty presented by this situation is over-reliance on the probabilities suggested by statistical models.  To be reliable, a model must meet the following criteria:

1. Whereas Phenomena may be caused by any combination of Agency, Chance and Necessity,

2. Agency may be substituted for Chance in a Simulation, practical or intellectual, and the statistical chance of Phenomena being the result of Chance may be inferred from a Simulation where,

3. the inputs of the Simulation acknowledge the known values of Necessity.

4. Should the chosen values of Necessity deviate from known values, the statistical validity of the Simulation shall be compromised, as shall any subsequent calculation that uses the output of the Simulation as an input.  In the case of values of Necessity that contradict known values, the probabilistic output of the Simulation shall be regarded as zero.

Let’s explain the four points above, before returning to the true account of Grandma’s Deck of Cards.

1. Point 1 holds out a real-world scenario where the phenomenon to be explained is the result of complex circumstances.  Necessity will always have a role, in that the physical paradigm is bounded by physical laws.  Rainfall, for example, is the inevitable result of the water cycle, and the fact that rain is falling at some place on the globe may be taken as a given.  Suppose we want to explain the fall of rain that is taking place right now outside our window.  Now we must incorporate chance into our model, as rain does not fall outside our window 24/7 - even though it may feel that way.  What if we want to explain the causality of the cold/flu we will develop in 48 hours because we were drenched by the rain?  Now agency plays a part, since we decided to go out and did not take adequate precautions.

2.  Point 2 deals with the construction of a model/simulation; perhaps a government department wishes to subsidise the umbrella industry in the hope that fewer days of production will be lost to the common cold.  The construction of the simulation itself is an exercise of agency, as is the selection of the rules and inputs that will govern the behaviour of the simulation.  If the results of the simulation influence future events, then it is wrong to regard those events as purely the result of necessity and chance, since the creation of a perfect simulation is impossible and the agency of the simulation’s creator is now acting in the chain of causation.

To state the case more fully, we are predisposed to accept the validity of a model/simulation if it has been constructed by qualified people and its results are, broadly speaking, in line with our expectations.  We are prone to forget that the simulation is not a mirror of the world, but is at best an approximation of reality.  In forming this approximation the creator of the simulation had to make decisions regarding what factors/data to include, and what to exclude.  These decisions are an expression of agency even where there is no creator bias.

Why do we say that agency is substituting for chance?  This is especially the case where a simulation has been created to predict rare events, or to provide a proof of possibility.  Proof of possibility calculations are vitally important where the consequences of failure are unthinkable.  For example, the decision to site a nuclear reactor must be supported by confidence in the impossibility of the site being compromised by earthquakes on land, or by the effects of ocean-floor earthquakes: tsunami.  Earthquakes are notoriously difficult to predict, and therefore the model used to provide proof of possibility relies heavily on the agency of the forecaster.  Because it is not possible to simply wait and see, this agency has replaced the action of chance in the model.  All models seek to replace simply waiting and seeing - that is their purpose; therefore the role of the agency creating and populating the simulation cannot be discounted.

3. Point 3 applies a necessary caveat to the validity of any particular simulation.  Since necessity provides the context for any examination of causality, the simulation that we use to release us from the need to ‘wait and see’ must be constructed according to the accepted laws of nature.  If any of these laws are modified in the simulation, this modification must be justified by appeal to factors known to modify these laws in the ‘real world’.

4. Point 4 merely points out that the reliability of any statistical prediction based on the behaviour of a simulation is inversely proportional to the simulation’s degree of deviation from known necessity.  In other words, a meteorological model that does not correctly account for the behaviour of water vapour will fail to forecast the weather correctly.  Inaccuracy of this type is to be expected, hence the role of the weather forecaster as mentioned above.  Indeed most predictions are made with the understanding that there is a ‘margin of error’ that must be taken into account.  The margin of error alerts the user of the prediction to the degree of deviation that can be expected on account of the simulation’s failure to perfectly model necessities.

The unavoidable inaccuracies mentioned above are quite different from a case where values of Necessity that contradict known values are used.  While inaccuracies are to be expected, and their presence does not completely invalidate the indications of a simulation, the invention of necessities not found in nature, or the discounting of known necessities, renders the result of the simulation meaningless.  To illustrate this point let’s return to Grandma’s Deck of Cards.

As explained above, Grandma’s Deck of Cards was truly chaotic - in the scientific sense of the word.  True, there were still four suits and thirteen possible cards in each suit, but that was as far as the orthodoxy went.  If I remember correctly, there were only three aces of any colour, but six sixes, along with a multitude of other irregularities.  The size of the deck would also grow and shrink depending on how many cards were currently under the furniture.  I call this deck of cards ‘an alternate reality’ above, because no deck like it existed anywhere else.  Having been inducted into ‘snap’ using this unique deck, I was naturally left with several fundamental misconceptions about games of cards.

I was not aware of how warped my sense of reality was, at least with regard to cards, until I started to learn a series of card games in my late teens, culminating in Bridge.  I clearly remember my incomprehension when told to note the cards that had been played, since they would not be seen again until the next hand.  Deep in my subconscious I still believed that literally any card could appear at any time, just as it did when playing with Grandma’s deck.

To use the language above, Grandma’s Deck had properties that contradicted known values of N.  Not knowing any better - I was only 5 - I assumed that Grandma’s Deck was typical of all decks, and my estimation of what was possible within the playing card paradigm was therefore completely unreliable.  The situation was hardly one of life and death, although it certainly caused some hilarity.

Behind this example is a serious lesson.  Bridge is a game in which all 52 cards are dealt out at the beginning of the hand.  If you have a good memory it really is possible to know what will happen next, especially toward the end of the hand.  Take for example a situation where spades are trumps, and every spade except the 2 has already been played.  If a player holds the 2 of spades, he will win the trick on which the card is played, irrespective of the cards held by the other players.  His chance of success is 100%, the chance of failure nil.

If I had been asked the chance of losing the trick however, before having shaken off the influence of Grandma’s Deck of Cards, I would not have answered ‘nil’.  My experience, according to my alternative reality, was that you never could be 100% certain what card would show up next.  I would have said that the chances of losing the trick were small, but not zero.  If pressed on the subject I probably would have thought long and hard about factors such as how many fragmented packs were normally present in Grandma’s deck, what the historical distribution of cards had been, how likely it would be that a trick could be won with a 2 of spades.  If I had a statistical aptitude I might even start to put figures on the likelihood of losing the trick, when losing the trick was in fact impossible.

This phenomenon is a False Proof of Possibility.  Unwittingly, the model I was using contained an assumption that contradicted a necessity.  In this case the necessity was that each card appears only once in a hand of bridge.  By accepting a false proof of possibility, albeit with just a very small probability attached, my decision making process had been contaminated.  Consider that Bridge is routinely played for large sums of money, and that the situation outlined above occurs at a make-or-break stage of the game.  By believing that failure is possible, the player places themselves in a position of doubt when they should be in a state of certainty.  They have been influenced by an agency, Grandma, into allowing for the possibility of chance, where necessity has already answered the question.
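The contamination can be demonstrated with a small simulation.  The sketch below estimates the chance that the 2 of spades is seen again after one copy of it has already been played, under a standard deck and under a Grandma-style deck (the irregular composition is my own assumption, not a record of the actual deck):

```python
import random

random.seed(7)

RANKS = "23456789TJQKA"
SUITS = "SHDC"

def standard_deck():
    """A regular pack: every card exactly once."""
    return [r + s for r in RANKS for s in SUITS]

def grandma_deck():
    """Assumed reconstruction of Grandma's deck: remnants of two old
    packs merged, minus a few cards lost under the furniture."""
    deck = standard_deck() * 2
    for lost in ("AS", "AH", "AD", "AC", "7S"):
        deck.remove(lost)
    return deck

def p_seen_again(deck, card, next_cards=13, trials=20_000):
    """Estimate the chance that `card` turns up among the next
    `next_cards` plays AFTER one copy of it has hit the table."""
    hits = 0
    for _ in range(trials):
        rest = deck.copy()
        rest.remove(card)            # one copy already played
        random.shuffle(rest)
        hits += card in rest[:next_cards]
    return hits / trials

print("standard deck:", p_seen_again(standard_deck(), "2S"))  # exactly 0.0
print("grandma deck: ", p_seen_again(grandma_deck(), "2S"))   # well above 0
```

The standard-deck estimate is exactly zero on every run - that is necessity, not chance - while the flawed model cheerfully reports a chance of roughly 13% for an event the real deck makes impossible.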

The above is an example of a false proof of possibility caused by denial/contradiction of a necessity.  Consider another example, caused by ignorance of a necessity.  When Christopher Columbus sailed west across the Atlantic in 1492, it was with the intention of discovering a new route to the Orient.  There is no doubt that Columbus and his backers regarded this adventure as a possibility, and they would certainly have performed calculations in support of this belief.  In spite of their optimism, supported by a model/simulation, their aim was actually an impossibility and the chance of success was zero: Columbus had badly underestimated the circumference of the globe, and an unknown continent lay across his path.  This is not to say that the voyage was a waste; the Americas were discovered in place of the route to the Orient, and the history of the world was changed.

How can a false proof of possibility be identified?  It is necessary to audit the necessities and ensure that they are properly integrated into the model/simulation.  If necessities are not fully known or understood then the margin of error should be adjusted accordingly.  If necessities are disregarded or contradicted then the model/simulation is fatally flawed and its output should not be considered as proof of anything.

There is a fashion for accepting the results of flawed models and arguing about whether the exceptionally long odds produced are in some way a proof of impossibility.  Philosophically this approach is flawed.  Some commentators assert that improbability greater than a certain arbitrary number of orders of magnitude is synonymous with impossibility; they are wrong.  To illustrate why this is so, consider the Bitcoin keyspace.  A Bitcoin private key, in its common encoded form, is a string of about 52 characters, and the address it controls can be derived from the key.  The number of possible private keys is just under 2^256, roughly 1.2x10^77 (the number of atoms in the observable universe is estimated at between 1x10^78 and 1x10^82).  Assume that 100 million addresses have been used and currently contain funds.  This would make the chance of generating a key at random and finding that it controls an address containing funds about 1 in 1x10^69 - a vanishingly small chance, but not impossible.
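The orders of magnitude here can be checked directly.  A quick sketch, assuming a 256-bit keyspace and the 100 million funded addresses stipulated above:

```python
import math

keyspace = 2 ** 256        # possible private keys, roughly 1.2x10^77
funded = 100_000_000       # assumed number of addresses holding funds

p_hit = funded / keyspace  # chance a random key controls funds
print(f"keyspace ~ 10^{math.log10(keyspace):.0f}")
print(f"chance of hitting funds ~ 10^{math.log10(p_hit):.0f}")
```

The quotient is about 1 in 10^69: astronomically unlikely, yet still a finite, nonzero probability - which is precisely why long odds alone are not a proof of impossibility.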

A closer examination of models/simulations producing vanishingly long odds generally finds that the simulation has disregarded or contradicted a necessity, and should therefore be regarded as a proof of impossibility rather than possibility.

Someone may object that the failure of a model/simulation should not be regarded as final proof of impossibility, as a better simulation may take its place.  This expectation is not realistic.  To be really useful a simulation/model should produce proof of likelihood, not just a vanishingly small probability.  If it has only been possible to produce proof of possibility by invoking an alternative reality, is it realistic to expect a better model?

It has become popular to 'suspend our disbelief' in the pursuit of entertainment and, sometimes, in the pursuit of a more comfortable universe.  Witness the fact that there are now any number of people who believe time travel is possible, who deny the Holocaust and who assert that human activity has no effect upon climate.  In many ways they are playing with Grandma's deck of cards, and their actions and decisions are affected in profound ways.

I advocate an audit of the Necessities - the laws of motion, heredity, thermodynamics, gravity and relativity are just as valid as they ever were, and the chain of causality is still inescapable.  We all run simulations in our minds as part of our decision making process, yet many among us plug values for N into those simulations that diverge wildly from reality.  Like the gambling addict who believes he has discovered a 'system' when there is none, false values for N are causing millions to justify opinions and actions that will prove disastrous for themselves and others.

Grandma was never playing with a full deck - are you?
