On the Nature of Asymmetry

So I entered the FQXi essay contest this year. You can read my essay “On the Nature of Asymmetry” on the FQXi website, in its full PDF glory.

I only found out about the website, and the contest, about a month ago, so my entry feels a bit rushed and unfinished. But I think having an external deadline was a good motivator nonetheless. I can still develop the ideas further some other time.

If you like, you can rate the contest entries at the FQXi website, by registering an email address. The rating period ends in about a month, after which the best rated entries advance to an anonymous expert panel.

The Simulation Narrative

Most of the millions of people lining up to see the latest blockbuster film know that the mind-boggling effects they are about to see on the big screen are made “with computers”. Big movies can cost hundreds of millions to make, and typically less than half of the budget is spent on marketing the premiere, or paying the actors upfront. Plenty of money left over to buy a big computer and press the ‘make effects’ button, right? Except that these movies close with 5-10 minutes of rolling credits, and about half of those names belong to people working in visual effects, not to computers. (It seems a tentpole movie crew these days has more TDs (Technical Directors) than all other kinds of directors combined.) [If you think making cool computer effects sounds easy, just download the open source tool blender, and create whatever your mind can imagine …]

Computer simulations are no longer just for engineering and science; they can be used as extensions of our imagination. A simplified set of rules and initial conditions are input, then a few quick test renders are made at low resolution. You twiddle the knobs (how many particles, viscosity, damping, scattering), search for the right lighting and camera angles, and iterate until you are happy or (more likely) forget what you were trying to accomplish.
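To make that loop concrete, here is a toy sketch in plain Python (not any particular tool’s API; the parameters and numbers are purely illustrative): a simplified rule set, initial conditions, and quick low-resolution “test renders” while twiddling one knob.

```python
import random

# Toy rule set: particles falling under gravity, with damping.
def simulate(n_particles, damping, steps=50, dt=0.1):
    heights = [random.uniform(5.0, 10.0) for _ in range(n_particles)]  # initial conditions
    speeds = [0.0] * n_particles
    for _ in range(steps):
        for i in range(n_particles):
            speeds[i] = (speeds[i] - 9.8 * dt) * damping        # apply gravity, then damping
            heights[i] = max(0.0, heights[i] + speeds[i] * dt)  # crude floor at zero
    return heights

# Quick low-resolution previews with different knob settings:
for damping in (0.90, 0.99):
    print(damping, [round(h, 2) for h in simulate(n_particles=3, damping=damping)])
```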

Selene and Endymion (MAN Napoli, Inv. 9240)

Before computers, before visual effects and film, people had to use their own imagination to make entertaining simulations. The most lightweight technology for that was storytelling: guided narrative with characters and settings. The rules of the simulation were the rules of plausibility within the world of the story. The storyteller created the events, but the listeners enacted them in their imaginations. The storyteller received immediate feedback from his audience if the story became too implausible.

But once the audience “buys in” to the characters and their narratives, they become emotionally invested in them as if they were real people. Fictional characters, today protected by copyrights and corporate trademarks, can still suffer unexpected fates, and new audiences to a fictional world often demand to be protected from “spoilers” that would make it difficult for them to simulate the events fresh in their imaginations. Real people do not know what their future is, and to ‘play’ the role convincingly and without foreshadowing, it is best to live with incomplete information.

If I start to read a book of fiction, written many decades ago, when is the simulation of the characters happening?  Does every reader simulate the main characters each time they read the book, or did the author execute the simulation, and the readers are only verifying that the story is plausible? Certainly I feel like I am imagining the phenomenal ‘qualia’ that the characters in the book are experiencing, but at the same time I know that I am just reading a story that was finished a long time ago. Am I lending parts of my consciousness to these paper zombies?

In a well-known book of genre-defining fantasy, after hundreds of pages of detailed world-building, two characters are beyond weary, in the darkest trenches of seemingly unending war, when one of them starts to wonder if they shall

“[…] ever be put into songs or tales. We’re in one, of course; but I mean: put into words, you know, told by the fireside, or read out of a great big book with red and black letters, years and years afterwards.”

It’s not a bad way to put it, but even for myself at age 12, characters in a book discussing the possibility of being characters in a book was just too self-referential to be plausible, and pushed me ‘out’ of the story for a moment. (A bit like characters in a Tex Avery cartoon running so fast they run outside the film. We get the joke, but let’s keep the “fourth wall” where it is for now.)

Since the book was written long ago, and has not been edited since, it can be argued that none of its characters have free will. The reluctant hero makes friends, sacrifices comforts, has unexpected encounters and adventures, all while trying to get rid of the “MacGuffin” that has fallen into his hands. When at last he arrives at the portal of Chaos where the artifact was forged, does his determination to destroy it falter, or will something totally unpredictable happen? To have any enjoyment in the unfolding of the story, the readers must believe that the actions of the characters have significance, and play the roles in their minds as if the characters had free will.

There are also professional actors, people who take to the stage night after night, repeating familiar lines and reacting to the events of the play as if they were happening for the first time:

“For Hecuba! What’s Hecuba to him, or he to Hecuba, That he should weep for her?”

A good performance can evoke both the immediacy and intimacy of a real emotional reaction, but the audience still needs to participate in the act of imagining the events as actual, to understand at some emotional level “what it is like” for the characters in the play to have their prescribed experiences.

What really sells a scene, to me, is the interplay of the actors, not so much how photorealistic the visual effects happen to be. A painted canopy plays the part of a majestic sky, or a sterile promontory becomes earth for the gravedigger, if all the actors act as though it were so.

As convincing as our simulations can be, the point of fiction is that we enter it knowing that it is fiction, that we can always put the book down, or step outside of the theater. Fiction is not realtime, and it always requires audiences to imagine some parts of it (for example, what happens between scenes?). We choose not to pay attention to the man behind the curtain, or analyze the plot too much, when we want to immerse ourselves for a moment.

[Having said that, I don’t mean to imply that it is impossible to become lost inside made-up stories, and mistake them for reality in a quixotic manner, but that is usually not the intention of the storyteller (though it could be useful to the intentions of a shrewd marketer, politician or cult leader).]

Time inside the simulation is independent of time in the real world. In addition to pausing the simulation, monolithic or pre-computed simulations can be executed in a different sequential order from the assumed order inside the simulated world. This is used to great effect in some books, which describe the same event multiple times in different chapters, but from the point of view of different characters. Each perspective usually gives the reader some extra information, something that no character in the simulation can have. Viewing from outside the simulation, the audience gets an almost god-like view of the situation, sometimes even enhanced with indexes and bookmarks so they can page back and review the events of a previous chapter (but not forward, since that would “spoil” the freshness of the simulated experience).

Pre-written narrative simulations, movies and plays, edit out the parts that are thought to be uninteresting. This is a careful balancing act, because editing out too much leaves the characters and their actions too distant, and harder to relate to. Leaving in too many unnecessary details, on the other hand, can appear gratuitous and put off many viewers and readers, who will surely find better things to occupy their time.

Computer simulations today almost always consist of time-steps. A numerical approximation of some evolution equation uses the results of the previous steps to compute the next step of the simulation. The smaller the time interval used, the closer the approximation is to the real solution [in the mathematical sense, for example a piecewise linear path approximating a smooth curve], and the longer it takes to compute. If the simulation is pre-computed, the audience need not view every individual step to make use of the simulation. [Note: In terms of the blender software, the physics timestep is independent of the framerate of the animation, and changing either will affect the needed baking and/or rendering time.]
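As a minimal illustration of that trade-off (a generic sketch, not tied to blender or any particular engine), here is explicit Euler time-stepping for the toy equation dy/dt = -y, whose exact solution is exp(-t); shrinking the step size reduces the error, at the cost of more steps:

```python
import math

# Explicit Euler: each step computes the next state from the previous one.
def euler(dt, t_end=1.0):
    y = 1.0                          # initial condition y(0) = 1
    for _ in range(round(t_end / dt)):
        y += dt * (-y)               # evolution equation dy/dt = -y
    return y

for dt in (0.1, 0.01, 0.001):
    approx = euler(dt)
    print(f"dt={dt}: y(1) ≈ {approx:.6f}, error {abs(approx - math.exp(-1.0)):.6f}")
```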

When played at a sufficient number of frames per second, a sequence of still images fools our mediocre senses into interpreting it as moving pictures [interestingly, while higher framerates increase the realism, and are technically possible today, moviegoers still prefer the “cinematic” 24 fps for big-screen cinema, or even less for dream-like sequences]. But it would be naive to think that the timeline of the physical world also consists of individual static states, progressing in infinitesimally short step transitions across the whole Universe. Such ideas of motion were already being debated hundreds of years BC by the Eleatic school, most famously in the paradoxes of Zeno. [Note: Unfortunately, even logical and analytical philosophy usually implicitly assumes that there is always a “world at time t” snapshot, at any chosen time t, with defined entities and properties. But that is a topic for another post.]

Stories can be used to simulate, or theorize, about the minds of others. By vicariously identifying with characters, we can sometimes glimpse across the inter-subjective gap. Even a child can understand that a character in the story does not have all the information, that Red Riding Hood is asking questions because she doesn’t know who she is speaking with. But even voluntary participation in a simulative narrative can reveal hidden agendas in the audience, through transference: For example, did Shakespeare put baits in his Scottish play to “catch the conscience” of king James, or just harmless references to the new king’s known interests?

Some stories contain such powerful or virulent motivations for their characters that the audience starts to doubt their own volition as participants in the simulation. [Note: This could be related to hypnosis, which can also be induced using only words and signs, and makes subjects doubt their own volition to some degree.] Being part of something larger, even if it is just a simulation, is a recognizable desire in the human psyche. Experience, of real life as well as of other kinds of stories, can also recontextualize previous narratives, and help reframe them from a different viewpoint. [An example of this could be a new parent realizing: “maybe my parents were just as clueless as I am now?”; a kind of subject-object relationship reversal, in the psychoanalytical sense.]

In this transitional state between the simulator and the simulated, we might also strive to theorize on the motivations of a possible future ‘superintelligence’. Why would it spend so much effort to compute realistic ‘ancestor simulations’, extrapolating scenarios from its vast collections of historical data, as in the simulation argument by Nick Bostrom? Perhaps the motivations are the same as when we try to understand our ancestors from the knowledge that we have: If you don’t understand history, you are doomed to repeat it, over and over. Just as intelligence does not imply wisdom, superintelligence certainly does not imply ‘superwisdom’.

Collection and storage of ever more detailed data today gives another perspective on simulation as a way of stepping outside of time. If we store as much information about the state of the world today as we can, and build a simulated snapshot of it at a point in time, the transhumanist proposition is that with enough data (how much?) the snapshot would be indistinguishable from the real thing, at least for the simulated subjects inside it. The idea is identical to [and identically depressing as] the afterlife scenarios of many religions and cults. No release for Sisyphos, or for holodeck Moriarty from his “Ship in a Bottle”.

Using statistics or probability to determine the ontological status of your consciousness is problematic for many reasons, among them the transitory nature of conscious experience. For the total tally of fictional consciousnesses, do you count the number of different characters in all the scripts, all the actors, or all the audience members? Does it matter if all scripted characters are actually “played” to any audience with phenomenal consciousness? Does a simulation character need to have phenomenal consciousness all the time, or just during some key scenes, zoning out as sleepwalking zombies for most of the time? Since there can be no definite multiplicity for such an ill-defined entity as a consciousness separate from its substrate, counting probabilities from a statistical point of view is as meaningful as arguing about the number of angels that can dance on the point of a needle.

I don’t consider myself any technological “Luddite”; on the contrary, I believe that technological progress has the potential to create many more breakthroughs in the future, or even an unprecedented ‘flowering’ of the kind that rarely happens in the evolution of life on Earth (for example, the appearance of flowering, fruit-bearing plants during the Early Cretaceous). I do dislike the word “singularity” in this context though, especially since it originally marks the point where extrapolation breaks down beyond the limits of the current working model. (For example, the financial “singularity” of 2008, when loan derivatives cast off the surly bonds of fundamentals, and left the global markets in free fall.) All flowers have their roots in the earth, and in the past, which they must grow from. No ‘sky-hook’, or ‘future-hook’, can seed the present.

The Pilot of Consciousness

We do not know in detail how human consciousness arose, and we only have direct evidence of our own consciousness. But it is common sense to assume that most normally behaving people are conscious to some degree, and that consciousness is a result of the biological processes of the body and its organs, even though we cannot see it directly, the way we appear to see our own consciousness.

Assumptions about the level of consciousness of other people affect modern society at a deep level. A conscious mind is thought to have free will, to a greater degree than simpler animals or machines do. In criminal law, a person can only be judged on the actions they take consciously, but not, for example, while sleepwalking or hallucinating.

The old metaphor is that consciousness is to the body as a pilot is to his ship. The pilot of a ship needs information and feedback to do his job, but he does not have direct access to it. Instead, he gets his information secondhand, from the lookouts and from the various instruments: the log line for measuring speed in knots, the compass, and so on. The pilot also does not row the oars himself, or stoke the engines; he just sends instructions below and assumes they will be carried out. The pilot does nothing directly, but all vital information must flow timely through him. Neither is the steersman’s role the same as the captain’s; piloting work means reacting to the currents and the winds as they happen, not long-term goal-setting or strategic planning.

Ship procession fresco, part 4, Akrotiri, Greece

[Note: The ancient Greek word for pilot is kubernetes, which is the etymological root of both the ‘cyber-‘ and ‘govern-‘ words.]

Piloting a ship is not always hectic; at times the ship can be safely moored at harbour, or the sailing can be so smooth that the pilot can take a nap. But when the ship is in strange seas, risking the greatest danger from outside forces, pilot-consciousness kicks in fully, alerting all lookouts and powering the engines to full reserve, ready to react to whatever happens. When the outside forces show their full might, the pilot is more worried about the ship surviving the next wave than about getting to the destination on time.

The state or level of consciousness is often associated with some feelings of anticipation, alertness, even worry or anxiety; such feelings can even prevent dialing down the level of consciousness to restful sleep, and thereby cause more stress the next day. Pain can only be felt when conscious, hence the cliché of pinching yourself to check if you are dreaming or not. Pathos [the root word for ‘-pathy’ words, like empathy or psychopathy], in all its meanings, is a strong catalyst to rouse consciousness. Only humans are thought to be capable of becoming truly conscious of their own mortality, the conscious mind thus becoming aware of the limits of its own existence.

When the pilot takes over and commandeers a vehicle, the flexibility of consciousness allows him to extend his notion of self to include the vessel. For example, an experienced driver can experience his car sliding on a slick patch of road as a tactile sensation, as if a part of himself were touching the road, and not the tires. In the same way, human consciousness naturally tends to identify itself as the whole individual. Sigmund Freud named the normal conscious part of the mind the ‘ego’, which is ‘I’ in Latin. His key observation was that the mind is much more than the ego, and that true self-knowledge requires careful study, which he called psycho-analysis.

Introspection is an imperfect tool for studying one’s own mind, due to the many literal and metaphorical blind spots involved. The ego is very capable of fooling itself. This is why it is not considered safe to attempt psycho-analysis by yourself; you should have guidance from someone who has gone through the process. The same applies to some methods of controlling consciousness through meditation.

There are methods of self-discovery that are less dangerous, such as the various personality tests. To extend the metaphor, different pilots have their own favorite places on the ‘bridge’, their habitual ways of operating the ship, or specific feelings associated with its operations. Your ‘center’ may not be in the same place as someone else’s. For example, a procrastinator waits until the last possible moment to make a decision; it could be that only the imminence and finality of a deadline makes their choices feel ‘right’ or ‘real’ enough to commit to. Another example is risk-seeking/aversion: some people only feel alive when in some amount of danger; others do their utmost to pass risks and responsibilities to other people.

Most pilots become habituated to a specific level of stress when operating the self-ship, and cannot function well without it; the types and levels of preferred stress can vary much between individuals. Too much stress, however, can break the pilot and damage the ship. This, too, varies between individuals. Hans Eysenck theorized that an individual’s susceptibility to trauma is correlated with introversion, or even that extraversion could be redefined in terms of tough-mindedness; but there are other models as well, such as psychological ‘resilience‘, which supposedly can be trained as a ‘life skill’.

Habits are also something that can be consciously trained, and paying attention to our own habits is very healthy in the long run. Consciousness is tuned to a fairly limited range of timescales; changes that happen too fast or too slowly do not enter consciousness. Daily habits creep slowly, and without photographs it would be hard to believe how much we change over time. Almost all of the atoms and molecules in our bodies are swapped for new ones every few years, yet our sense of identity remains continuous.

Heraclitus says that “a man’s character is his destiny”, and to know thyself means knowing your weaknesses as well as your strengths. Multitasking is a typical weakness that the pilot often confuses for a strength. Consciousness appears to be the stage where all experience terminates, but the real multitasking happens at the edges; the decision of which of the competing stimuli enter consciousness is never a completely conscious decision. The same, unfortunately, applies to outgoing commands. Completeness of control can be an illusion, a form of magical thinking.

Many philosophers have also been fascinated with the true nature of the biggest ‘blind spot’ of consciousness: consciousness itself. There have been various efforts to formalize the ‘contents’ of consciousness, or to model consciousness in terms of ‘properties’ that some entity may or may not ‘have’. There are inherent limitations to these approaches; they should be taken in the original context of phaneroscopy, without drawing any metaphysical conclusions from them.

Not many deny that life, and consciousness, is a process, and that the human viewpoint is one of moving inexorably forward through Time. The ‘contents’ of consciousness form an unstoppable stream, moving in relation to our self-identity. It seems to us that our mind is anchored to something unmoving and unchanging, with the world changing around it. Yet we identify no specific ‘qualia’ for change or motion, or atomic perceptions of time passing. [There are some thresholds to when we begin recognizing a rhythm, though.]

The true nature of subjective experience may be a ‘hard problem’, but no harder than explaining the true nature of Time. The human condition is to flow from an unchangeable past, inexorably and continuously forward, towards an unknown future, and to only ever be able to act in the present. The pilot role is necessary exactly because the flow that powers all flows cannot be stopped; it can only be navigated.

A Likely Story

Is cosmology a science? Is scientific cosmology even possible, when it concerns events so unique and fundamental that no test in any laboratory can truly repeat them? Questions like these pop up often enough, and you can find many good answers to them on e.g. Quora, which I will not repeat here.

For the lay thinker, the difference between truth and lies is simple and clear, and it would be natural to expect the difference between science and non-science to be simple and clear as well. The human brain is a categorizing machine that wants to put everything in its proper place. Unfortunately, the demarcation between science and non-science is not so clear.


Karl Popper modeled his philosophy of science on the remarkable history of general relativity. In 1916, Albert Einstein published his long-awaited theory, and made sensational predictions, reported in newspapers around the world, that could not be verified until the next total eclipse of the Sun. It was almost like a step in classical aristeia, where the hero loudly and publicly claims what preposterous thing he will do, before going on to achieve exactly that. Popper’s ideas about falsification are based on this rare and dramatic triumph of armchair theory-making, not so much on everyday practical scientific work.

If we want a philosophy of science that really covers most of what gets published as science these days, what we really need is a philosophy of statistics and probability. Unfortunately, statistics does not have the same appeal as a good story, and more often gets blamed for being misleading than lauded as a necessary method towards more certain truths. There is a non-zero probability that some day popularizations of science could be as enthusiastic about P-values, null hypotheses and Bayesian inference as they are today about black holes, dark energy and exotic matter.

Under the broadest umbrella of scientific endeavors, there are roughly two kinds of approaches. One, like general relativity, looks for things that never change: universal rules that apply in all places and times. These include the ‘laws’ of physics, and the logical-mathematical framework necessary for expressing them (whether that framework should also include the axioms of statistics and probability is an open question).

The other approach is the application of such frameworks, to make observations about how some particular system evolves. For example: how mountains form and erode, how birds migrate, how plagues are transmitted, what the future of a solar system or galaxy is, how climate changes over time, what the relationships are between different phyla in the great tree of life, and so on. Many such fields study uniquely evolved things, such as a particular language or a form of life. In many cases it is not possible or practical to “repeat an experiment” starting from the initial state, which is why it is so important to record and share the raw data, so that it can be analyzed by others.

From the point of view of theoretical physicists, it is often considered serendipitous that the fundamental laws of physics are discoverable, and even understandable by humans. But it could also be that the laws we have discovered so far are just approximations that are “good enough” to be usable with the imperfect instruments available to us.

The “luck” of the theorist has been that so many physical systems are dominated by one kind of force, with the other forces weaker by many orders of magnitude. For example, the orbit of the Earth around the Sun is dominated by gravitational forces, while the electromagnetic interactions are insignificant. In another kind of system, for example the semiconducting circuits of a microprocessor, electromagnetism dominates and gravity is insignificant. The dominant physics model depends on the scale and granularity of the system under study (the physical world is not truly scale invariant).

As the experimental side of physics has developed, our measurements have become more precise. When we achieve more reliable decimals in physical measurements, we sometimes need to add new theories, to account for things like unexpected fine structure in spectral lines. The more precision we want from our theories, the more terms we need to add to our equations, making them less simple, further away from a Pythagorean ideal.

The nature of measurement makes statistical methods applicable regardless of whether measurement errors originate from a fundamental randomness, or from a determinism we don’t understand yet. The most eager theorists, keen to unify the different forces, have proposed entire new dimensions, hidden in the decimal dust. But for such theories to be practically useful, they must make predictions that differ, at least statistically, from the assumed distribution of measurement errors.
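A rough sketch of what “differ at least statistically” demands in practice: the standard error of n repeated measurements shrinks as sigma/sqrt(n), so the smaller the predicted shift relative to the instrument error, the more measurements are needed before it stands out. The numbers here are arbitrary placeholders, not real experimental values:

```python
import math

sigma = 1e-6                      # assumed error of a single measurement
for delta in (1e-6, 1e-7, 1e-8):  # hypothetical shifts predicted by a new theory
    # a two-sigma hint requires delta > 2 * sigma / sqrt(n)
    n = math.ceil((2 * sigma / delta) ** 2)
    print(f"shift {delta:.0e}: about {n:,} measurements needed")
```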

Many theorists and philosophers abhor the uncertainty associated with probability and statistics. (Part of this is probably due to the personality of each individual, some innate unwillingness to accept uncertainty or risk.) To some extent this can be a good thing, as it drives them to search for patterns behind what first seems random.

But even for philosophers, statistics could be more than just a convenient box labeled ‘miscellaneous’. Like in the Parmenides dialogue, even dirt can have ideal qualities.

Even though statistics is the study of variables and variability, its name comes from the same root as “static”. When statistics talks about what is changeable, it always makes implicit assumptions about what does not change, some ‘other’ that we compare the changes against.

It is often said that statistical correlation does not imply causation, but does cosmic causation even make sense where cosmic time does not exist? Can we really make any statistical assumptions about the distribution of matter and energy in the ‘initial’ state of all that exists, if that includes all of space and time?

One of the things that Einstein was trying to correct when working on general relativity was causality, which was considered broken in the 1905 version of relativity, since the time order of events could depend on the motion of the observer. General relativity fixed it so that physical events always obey the timeline of any physical observer, but only by introducing the possibility of macroscopic event horizons, and strange geometries of observable spacetime. But the nature of event horizons prevents us from observing any event that could be the primal cause of all existence, since it would be outside of the timeline from our point of view. We can make estimates of the ‘age’ of the Universe, but this is a statistical concept; no physical observer’s clock measures that age.

Before Einstein, cosmology did not exist as a science. At most, it was thought that the laws of physics would be enough to account for all the motion in the world, starting from some ‘first mover’ who once pushed everything in the cosmos to action. This kind of mechanistic view of the Universe as a process, entity or event, separate from but subservient to a universal time, is no longer compatible with modern physics. In the current models, continuity of time is broken not only at event horizons, but also at the Planck scales of time and distance. (Continuing the example in Powers of Two, Planck length would be reached in the sixth chessboard down, if gold were not atomic.)

Why is causality so important to us, that we would rather turn the universe into Swiss cheese than part with it? The way we experience time, as a flow, and how we maintain identity in that flow, has a lot to do with it. Stories, whether built from sequences of words, or just remembered as sequences of images, dreams and songs, are deeply embedded in the human psyche. Our very identities as individuals are stories, stories are what make us human, and plausible causes make plausible stories.

Knowledge, Fast and Slow

Ars longa, vita brevis

Due to the shortness of human life, it is impossible for one person to know everything. In modern science, there can be no “renaissance men” with a deep understanding of all the current fields of scientific knowledge. Where it was possible for Henri Poincaré to master all the mathematics of his time, a hundred years later no one in their right mind would attempt a similar mastery, due to the sheer amount of published research.

A large portion of the hubris of the so-called renaissance men, like Leonardo da Vinci, can be traced to a single source: the books on architecture written by Vitruvius more than a thousand years earlier, rediscovered in 1414 and widely circulated by a new innovation, the printing press. In these books, dedicated to Emperor Augustus, Vitruvius describes what kind of education is needed to become an architect: nothing less than enkuklios paideia, universal knowledge of all the arts and crafts.

Of course an architect should understand how a building is going to be used, and how light and sound interact with different building materials. But some of the things that Vitruvius writes are probably meant as indirect flattery to his audience and employer, the first emperor. Augustus would likely have fancied himself “the architect” of the whole Roman Empire, in both the literal and the figurative sense.

Paideia was a core Hellenic tradition; it was how knowledge and skills were kept alive and passed on to future generations. General studies were attended until the age of about 12, after which it was normal to choose your future profession, and start an apprenticeship in it. But it was also not uncommon for some aristo to send their offspring on an enkuklios paideia, a roving apprenticeship. They would spend months, maybe a year at a time, learning from the masters of one profession, then move to another place to learn something completely different for a time. A born ruler would not need any single profession as such, but some knowledge of all professions would help him rule (or alternatively, human nature being what it is, the burden of tolerating the privileged brats of the idle class must be shared by all (“it takes a village”)).

Chiron instructs young Achilles - Ancient Roman fresco

Over the centuries, enkuklios paideia transformed into the word encyclopedia, which today means a written collection of current knowledge in all disciplines. As human knowledge is being created and corrected at accelerating rates, printed versions are becoming outdated faster than they can be printed and read. Online encyclopedias, something only envisioned by people like Douglas Engelbart half a century ago, have now become a daily feature of life, and most written human knowledge is in principle available anywhere, anytime, as near as the nearest smartphone.

Does that mean that we are all now vitruvian architects, renaissance geniuses, with working knowledge of all professions? Well, no: human life is still too short to read, let alone understand, all of wikipedia, or to keep up with its constant changes. And not everything can be learned by reading or even watching a video; some things can only be learned by doing.

For the purposes of this essay, I am stating that there are roughly two types of knowledge that a human can learn. The first one, let’s call it epistemic knowledge, consists of answers to “what” questions. This is the kind of knowledge that can be looked up or written down fast; for example, the names of people and places, numeric quantities, articles of law. Once discovered, like the end result of a sports match, they can be easily distributed all around the world. But, if they are lost or forgotten, they are lost forever, like all the writings in languages we no longer understand.

The other type of knowledge I will call technical knowledge, consisting of answers to “how” questions. In a sense technical knowledge is any acquired skill that is learned through training, that eventually becomes second nature, something we know how to do without consciously thinking about it. Examples are the skills that all children must learn through trial and error, like walking or speaking. Even something as complex as driving a car can become so automatic that we do it as naturally as walking.

[Sidenote: the naming of the two types here as “epistemic” and “technical” is not arbitrary; they are based on the two ancient Greek words for knowledge, episteme and techne.]

The division into epistemic and technical knowledge is not a fundamental divide, and many contexts have both epistemic and technical aspects. Sometimes the two even depend on each other, as names depend on language, or writing depends on the alphabet.

Both kinds of knowledge are stored in the brain, and can be lost if the brain is damaged somehow. But whereas an amnesiac can simply be told what their name and birthday are, learning to ride a bicycle again cannot be done by just reading a wikipedia article on the subject. The hardest part of recovering from a brain injury can be having to relearn skills that an adult takes for granted, like walking, eating or speaking.

In contrast to epistemic knowledge, technical knowledge can sometimes be reconstructed after being lost. Even though no documents readable to us have survived from the stone age, we can still rediscover what it may have been like to work with stone tools, through experimental archaeology.

Technical knowledge also exists in many wild animals. Younger members of the pack follow the older ones around, observe what they do, and try to imitate them, in a kind of natural apprenticeship. Much has been said about the so-called mirror neurons that are thought to be behind this phenomenon, in both humans and animals.

New techniques are not just learned by repetitive training and imitation; entirely new techniques can be discovered in practice. Usually some competitive drive is present, as in sports. For example, high jump sets its goal in the simplest of terms: jump over this bar without knocking it off. But it took years before someone tried to use something other than the “scissors” technique. Once the superiority of a new jumping technique became evident, everyone started to learn it, and improve on it, thus raising the bar for everyone.

New techniques offer significant competitive advantages not only in sports, but also in the struggles between nations and corporations. Since we are so good at imitating and adapting, the strategic advantage of a new technique will eventually be lost, if the adversary is able to observe how it is performed. The high jump takes place in front of all, competitors and judges alike, and everything the athlete does is potentially analyzed by the opposing side. (This does not rule out subterfuge, and the preparatory training can also be kept secret.)

Around the time of the Industrial Revolution, it became apparent that tools and machines can embody useful technical knowledge in a way that is intrinsically hidden from view. Secret techniques that observers cannot imitate even in their imaginations are, to them, indistinguishable from magic. To encourage inventors to disclose new techniques, while still gaining a temporary competitive advantage in the marketplace, the patent system was established. Since a patent would only be granted if the technique was disclosed, everyone would benefit, and no inventor need take their discoveries to the grave with them, for fear of them being “stolen”. Today international patent agreements cover many countries, and corporations sometimes decide to share patent portfolios, but nations have also been known to classify some technologies as secret for strategic military purposes.

Even though technical knowledge is the slow type of knowledge, it is still much easier to learn an existing technique from someone than it was for that someone to invent, discover or develop it in the first place. This fact allows societies to progress, as the fruits of knowledge are shared, kept alive and even developed further. One area where this may not apply so well is the arena of pure thought, since it mostly happens hidden from view, inside the skull. This could be one reason why philosophy and mathematics have always been associated with steep learning curves. Socrates never believed that philosophy could be passed on by writing books; only dialogue and discussion could be truly instructive, the progress of thought made more explicit thereby. This is also why rhetoric and debate are often considered prerequisites for studying philosophy (though Socrates had not much love for the rhetors of his time either).

From all the tools that we have developed, digital computers seem the most promising candidates for managing knowledge outside of a living brain. Words, numbers and other data can be encoded as digital information, stored and transported reliably from one medium to another, at faster rates than with any other tool available to us. Most of it can be classified as the first type of knowledge, the kind that can be looked up in a database management system. Are there also analogues of the second type of knowledge in computers?

In traditional computer programming, a program is written, tested and debugged by human programmers, using their technical skills and knowledge and all the tools available to them. These kinds of computer programs are not written just for the compiler; the source code needs to be understood by humans as well, so that they know that, and how, it works, and can fix it or develop it further if needed. The “blueprint” (i.e. the software parts) of a machine can be finalized even after the hardware has been built and delivered to the customer, but it is still essentially a blueprint designed by a human.

Nowadays it is also possible for some pieces of software to be trained into performing a task, such as recognizing patterns in big data. The development of such software involves a lot of testing, of the trial and error kind, but not algorithmic programming in the traditional sense. Some kind of an adaptive system, for example an artificial neural network, is trained with a set of example input data, guided to imitate the choices that a human (or other entity with the knowledge) made on the same data. The resulting, fully trained state of the adaptive system is not understandable in the same way that a program written by a human is, but since it is all digital structures, it can be copied and distributed just as easily as human-written software.
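As a minimal sketch of the idea (no particular framework assumed), here is a single perceptron “trained into” reproducing the logical OR function by imitating labeled examples, instead of being programmed with the rule itself:

```python
# Labeled examples: the choices made by an entity that has the knowledge.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(25):                    # "learning mode" on
    for x, target in examples:
        err = target - predict(x)      # nudge weights toward the teacher's choice
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

# "Learning mode" off: the trained weights are opaque compared to
# hand-written source code, but they are just digital data, and can
# be copied and distributed like any other software.
print(w, b, [predict(x) for x, _ in examples])
```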

This kind of machine learning has obvious similarities to the slow type of knowledge in animals. The principles are the same as teaching a dog to do a trick, except in machine learning we can just turn the learning mode off when we are done training. And of course, machines are not actively improving their skills, or making new discoveries as competing individuals. (Not yet, at least.)

Life on Other Planets

Much of the public interest in space has always revolved around the idea of life on other planets. We know that there are other planets, even around other stars, some of them even “Earth-like”, in some optimistic definitions of the word. But the current lack of detailed information about such places makes fertile ground for imaginative speculation. I will try to refrain from speculation here, and stick to the factual.

We have many inborn instincts that deceive our minds about the matter. For one, we think that recognizing life and intelligence is easy. Just bring it before us and we will categorize it as one of:

  1. alive and intelligent (example: elephants, some whales, great apes)
  2. alive and non-intelligent (example: bacteria, lichen, viruses)
  3. non-living and intelligent (example: ?)
  4. non-living and non-intelligent (example: vacuum, atoms, radiation).

It is of course our generalist diet and dependence on social behavior that make recognizing life and intelligence so important and instinctive to us. It takes a lot of effort not to anthropomorphize everything that we encounter, not to react to what our brain thinks is a face, for example.

Our ancestors walked for thousands of kilometers, encountered new environments and ecosystems, and survived. We are all descendants of surviving colonists. Our species has traveled just about everywhere on the surface of this planet, and marked all the habitable places. That is, habitable for humans. As we have expanded the reach of our scientific equipment, we have found life in places where we never thought there would be life: at the bottom of the ocean at a thousand atmospheres of pressure, in scalding heat or acidity, living with no direct access to sunlight. We call these kinds of beings “extremophiles”, because they live in environments that are extreme compared to our human views of habitability.

What makes life, and indeed intelligence, so interesting is that it breaks molds, defies definition, and jumps from one host to another, forever in between destinations. Life, as we understand it, exists in the interstitials, in the sweet spots between states and phases, conversing in an endless dialogue between solid and fluid. As much as we humans like categorizing things, life itself cannot be contained in any box (or if it can, it will die inside it).


As a species, we have traveled thousands of kilometers horizontally, but move just ten kilometers straight up or down, and the Earth becomes hostile, with either too much or too little pressure for human habitation. It is not the planet as a whole that is habitable, just certain zones within a thin layer of it. Our senses evolved for this thin layer between the earth and the sky, and until modern science we were not even aware of the many invisible layers and processes above and below it, that our lives depend on: the ozone layer, the magnetosphere, ground water, deep ocean currents.

The circulatory systems of the Earth, the water cycles, the carbon cycles, and various mineral cycles, all must align at sweet spots for life to flourish. Such fertile locations include of course the alluvial plains, where human city-cultures arose some ten thousand years ago. But there are also two especially active locations on Earth: the Amazon, and the Great Barrier Reef.

Gravity presses tons of bioavailable minerals to the bottom of the oceans, where the lack of sunlight prevents plants from making use of them. Only deep ocean currents, or continental drift, can bring these nutrients back to the surface. By the rotation of the Earth, nutrient dust is regularly carried in the air from Africa to South America, where it meets water vapor coming off the Atlantic (both flows driven by the very same rotation of the planet). Where they meet, the Amazon.

The Great Barrier Reef, the largest structure created by life on Earth, was born when a large slice of coastal plain became flooded about ten thousand years ago. The edge of the continental shelf stays close enough to the surface to receive plenty of sunlight, creating large lagoon-like areas between it and the current coastline. Just as they do with sunken ships, corals began to colonize the trees and other matter from the moment they became submerged, slowly covering them in an accumulation of rock-like deposits. In addition to material flowing in from land, nearby ocean currents such as the Capricorn Eddy help sustain the ecosystem by bringing up nutrient-rich waters from the seabed.

When we now look to the Solar system, our colonist intuition tells us to look for solid surface, terra firma, somewhere to raise a flag and stake a claim. But to truly make humans a multiplanetary species, we need to build, grow, or transport an entire ecosystem capable of sustaining both itself and us humans at the top of the food chain. Quite a few species will have to become multiplanetary in order for that to happen.

As it happens, one of the most promising locations for humans discovered outside of Earth lies about 50 km above the surface of Venus, where the levels of temperature, pressure, gravity and radiation are all comparable to the thin layer of Earth we call home. There is of course nothing solid or liquid at that altitude, nowhere to plant anything, not even the intrepid explorer’s flag. A conceptual project (HAVOC) has been proposed to study the conditions at that altitude on Venus (and possibly to invent a flagpole adapted for clouds). But to make that layer into a permanent second home for humans requires designing the ecosystem from scratch. This idea is daunting, but also liberating. I for one am excited to imagine the steps needed to grow our own “Great Reef” floating in the sky of another planet. Most of the building materials should already be present; if you think about it, trees on Earth create solid wood almost entirely out of air, out of thin air. On Venus, CO2 and sunlight are in abundance.

In the same way as when life arose from the seas and moved onto land, moving life into space will have to be at least as much adaptation as conquest. Climbing the formidable gravity wells involved is easiest for travelers packed into the smallest possible mass. Ideally, just the instructions for how to grow life could be packed into small “seeds” that could then adapt to the local conditions when they arrive (or, it is thinkable that this has already happened long ago, and we are the result of panspermia).

All currently known life forms, even extremophiles, have evolved and adapted into the wonderful thin layer of our planet, this “region of interaction” first named biosphere by Eduard Suess. What are the necessary characteristics of such a layer of interaction, and how do they contribute to life as we know it?

To a first approximation, the surface of the planet is where the different phases of matter separate: the solid earth, the liquid water, and the gaseous air, like the concentric spheres of classical cosmology. But the solidness and fluidness of a substance are relative, and our senses experience them as such because we have evolved into this layer. It is conceivable that a lifeform adapted to a different layer, or with different mass and strength, would sense things differently. For example, a bird might sense air currents like a fish senses water, or an elephant senses vibrations in the ground.

The point of view also changes with timescale, and density. The smallest flying insects experience the air viscosity differently, and their flight is more like swimming than gliding. If you are made of gossamer you experience more things as hard and solid than if you were made of diamond. The slower your perception is, the more foggy movement appears, and so on.


Courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.

The spherical shape of a planet’s surface is the result of the opposing forces that act on its mass: gravity pulls everything together, while pressure pushes outward in all directions. The total sum of countless trillions of small collisions eventually separates the mass of the forming planet into layers, of which the separation between “surface” and “atmosphere” is just one.

The end result of the separation process could be just a lifeless set of perfect concentric spheres, like the rings of Saturn. But on Earth, the separation is not complete, there are continuous cycles of matter, interacting over the layer boundaries. An example is the water cycle, continuously evaporating, condensing, raining, sublimating, diluting and conveying all over the biosphere.

This spontaneous layering into spheres follows density, so it does not necessarily result in ordering by phases of matter. In addition, increasing pressure towards the center can melt material that would otherwise solidify. The current theory about the internal structure of our planet is that it has a solid inner core, surrounded by a liquid outer core, surrounded by a viscous mantle, with a mostly solid crust on top.

And of course, at planetary scales, solidity is relative. Gases too, when they are thick and viscous enough, can behave more like liquids.


Gored Clump in Saturn’s F Ring (Image Credit: NASA/JPL-Caltech/Space Science Institute)

Chain reactions must be triggered by something. Just like a snowflake cannot form without a seeding speck of dust, the complex biochemistry of life cannot appear if all base materials are just cleanly separated. Some flaws must be present in the interface; it cannot be a perfect mirror. Crystallization cannot start without a seed, and crystals do not naturally grow into perfect spheres. Exterior solid crusts inevitably erode into uneven rocks and sands, due to tidal forces, winds and waves, even meteor collisions if nothing else.

The concentric spheres of matter must interact, even interpenetrate to some extent, for them to become fertile places for life to evolve. Reservoirs, niches, potentials and flows should be present, with local variations in temperature, flow speed, density and such. Growth can then slowly adapt to different situations, as long as the overall conditions are stable enough.

Such interactive layering does not happen only on planets. The surface of the Sun is also wildly active and complex, but due to its heat cannot sustain the kind of complex biochemistry that we associate with life. The valencies of the chemical bonds in an organism need to be compatible with the ambient energy levels (including radiation) of the environment, so that macromolecules can be both synthesized and broken down near each other.

By a big stretch of the imagination, we might theorize a system of life not based on macromolecule synthesis, or even molecules. For example, we don’t really know what happens in the layers of quark soup in the pressures of a rotating neutron star. But for now, such things are beyond what is known, clearly in the realm of speculation which I said I would try to avoid here.

Even if we stay within the realm of chemistry, we should be looking for life in the shapes and forms that are expressed through it, rather than reducing it to any kind of quantitative process. Otherwise there would be no other purpose to life than “to hydrogenate carbon dioxide”.

Closeup of the two sides of a gold coin

Chrisos design

I wanted a gold coin design for Powers of Two, something that looked timeless and, unlike real ancient coins, could be stacked. For the design, I was greatly inspired by the description of the chrisos given to Severian at the beginning of The Book of the New Sun: the autarch’s face in profile on the front, and a flying ship on the back.

As I am not much of an artist, I still needed more inspiration. The profile on my chrisos is actually based on statues of Antinoos, a Greek youth of Hadrian’s Rome, in an Egyptian headdress: suitably cosmopolitan, and somewhat androgynous in appearance. This choice of subject also gave the opportunity to include hieroglyphs, which look suitably alien to modern readers. The ones on the right are copied from an artifact dedicated to Antinoos, the Barberini obelisk that now stands in Rome. In English they mean roughly “Lord Osiris-Antinoos”, according to Erman, Grimm and Grenier. With this vertical text on the right side of the profile, I needed some symbols on the left as well, for balance. I chose the ankh, the djed, and the was, three symbols often associated with Osiris.

The reverse side, at first glance, could be mistaken for an alien spaceship against a field of stars. But if you look closely, you can see that it could also be an Egyptian ship, reflected on a mirror-calm water surface. The stars in the background are (of course) from the obsolete constellation Antinous, as it might look from the ground in Northern Africa, looking towards the west.

Renders of the gold coins

For the renders, in blender cycles, I used the interference OSL shader written by prutser. As input it takes actual measured values of the complex index of refraction of a physical material, acquired from thin foils similar to those Faraday used. For the render above and in Powers of Two, I used pure gold with no interference layer, with just some procedurally generated scratches. As a bonus, here are two renders of the same coin with other IOR values, and with a varying interference layer on top, representing patina or dirt. They are meant to look like tarnished silver and copper (perhaps asimi and aes in Gene Wolfe’s terminology?).

Renders of the tarnished silver and copper coins

gold coins, in stacks on the lowest row of a wooden chessboard

Powers of Two

Place one gold piece in the first square, double the number in the second square, double again in the next square, and so on, filling all 64 squares of the board. How many gold pieces is that in total?

In real life, you should not try this indoors, since the stacks of coins will soon increase in height beyond any roof made by man. The board should also be sturdy, to support the tons of weight placed on it. Due to shearing winds at higher altitudes, the coins would need to be fused together pretty soon, turning from stacks into solid rods of gold.

Barely halfway through the board, at square 35, all gold ever refined by humankind so far, 180 thousand metric tons, would have been cast into coin-rods and placed on the same chessboard. Around the same time, the rods would no longer press all their weight against it, since their center of mass would be orbiting the Earth at ever higher altitudes. Eventually the rods would become tethers, their rotating inertia pulling away from the board game with more force than Earth’s gravity pressing them against it.
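A quick back-of-the-envelope check in Python, using the solid 8-gram coin assumed later in this essay:

```python
COIN_G = 8.0                              # assumed mass of one gold coin

total_coins = 2**64 - 1                   # 1 + 2 + 4 + ... + 2^63
print(f"total coins: {total_coins:.3g}")  # about 1.8e19 pieces

# Cumulative mass placed on the board, square by square:
for square in (34, 35, 36):
    coins = 2**square - 1                 # squares 1..square hold 2^0 .. 2^(square-1)
    print(f"through square {square}: {coins * COIN_G / 1e6:,.0f} tonnes")
# The ~180,000 tonnes of refined gold run out between squares 34 and 35.
```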

In the end, assuming enough gold is refined, the stack of coins in the final square would reach well into the Kuiper belt, beyond the orbit of Neptune. Even if the gold is welded into a solid rod, it would not stay straight, or even in one piece, for very long. Light would take over four hours to travel from one end to the other, and local pulling forces would not be able to balance out along the whole rod.

The whole exercise is of course just an educational story, meant to reveal the poor grasp of numbers and magnitude that human intuition is cursed with. It is not known where or by whom it was originally told, but most versions of it are told with grains of wheat or rice, or salt, all things that come in small sizes.

But what if instead of doubling the amount in each square, you were to halve the amount of gold in each square? Start with a single gold coin, cut it in half, move the half into the next square, then a quarter, and so on? Would it be possible to have a piece of gold in each 64 squares of the chessboard?

Gold is soft and malleable enough to cut, at least when pure (this is why real gold coins are usually not pure gold; but let’s assume purity here, for the sake of the argument). In fact, with stone-age tools it is possible to beat gold to such thinness that its edge becomes invisible: it is thinner than the shortest wavelength of visible light. Already on the second row of the chessboard it becomes necessary to use gold leaf instead of nuggets. This is good for the visibility of the remaining gold; it also helps that gold foil naturally attaches itself to the underlying surface, making it less likely that a light breeze blows away the invisible gold dust in the lower squares.

Gold was considered the noblest substance by the ancients, embodying the idea of material permanence. If gold can be beaten so thin that sunlight is visible through it, why not beat it even thinner, until it becomes as thin and light as the Emperor’s new clothes? What is the internal force that makes thinning the foil ever harder, the thinner it gets? And could we, in theory, continue dividing the gold forever, if we had the means to see it and the power to thin it?

In the middle of the 19th century, the great experimentalist Michael Faraday attached gold leaf to glass plates and studied it under the most powerful microscopes available. He was looking for, among other things, hints of any fine structure, such as the existence of atoms or molecules, strongly suggested by the works of Dalton, Avogadro, Berzelius and others. In his Bakerian lecture of 1857 he writes:

“Yet in the best microscope, and with the highest power, the leaf seemed to be continuous, the occurrence of the smallest sensible hole making that continuity at other parts apparent, and every part possessing its proper green colour. How such a film can act as a plate on polarized light in the manner it does, is one of the queries suggested by the phenomena which requires solution.”

Faraday had a knack, and was already famous, for making unusual experiments and finding strange natural phenomena for other, more theoretically minded scientists to try to explain. It was only a few years later, in 1865, that Josef Loschmidt referred to Faraday’s experiment in a paper that, for the first time in history, gave a reasonable estimate for the mass of a single atom: a trillionth of a milligram [he used trillion in the long scale, meaning 10¹⁸].

Applying Loschmidt’s estimate to the chessboard, and assuming the coin is a solid 8 grams of Au, it could easily be divided 63 times, with hundreds of gold atoms still left even in the last square (8×10²¹ / 2⁶³ ≈ 870). There is indeed plenty of room at the bottom, as Dr. Feynman said. We have since made more accurate measurements of atomic masses, but Loschmidt’s estimate was very close to the mark [the actual number of atoms in 8 grams of Au is about 2.4×10²², just three times higher].
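The arithmetic, for anyone who wants to check it (the molar mass and Avogadro’s number are the standard modern values):

```python
# Atoms left on the last square when halving an 8 g gold coin 63 times.
AVOGADRO = 6.022e23
MOLAR_MASS_AU = 196.97        # g/mol

modern = 8 / MOLAR_MASS_AU * AVOGADRO   # ~2.45e22 atoms in 8 g of gold
loschmidt = 8 / 1e-21                   # his figure: one atom ~ 1e-21 g

for label, atoms in (("Loschmidt 1865", loschmidt), ("modern", modern)):
    print(f"{label}: {atoms:.2e} atoms, {atoms / 2**63:.0f} on square 64")
# Loschmidt 1865: 8.00e+21 atoms, 867 on square 64
# modern: 2.45e+22 atoms, 2652 on square 64
```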

We have known the approximate size of atoms for more than 150 years, and have made steady progress in the precision and accuracy of our instruments. But the sight of the apparently empty lower half of the chessboard really demonstrates how much of the physical world is hidden from the senses we were born with.

The gold leaves in squares 38-50 have the diameters of typical living cells, and are visible with a microscope. Optical microscopes become useless soon after square 50, because the diameters become smaller than the wavelengths of visible light. All the complex biochemistry of life, with practically infinite variations of form, happens at a scale too small for us to see. But there is plenty of room for the variations; every view in a microscope is like choosing a single asteroid to look at in a galaxy cluster of star systems.
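To see where those figures come from, here is the halving worked out in Python, assuming each square’s gold is beaten into a disc of 100 nm leaf; the thickness is my assumption, typical of beaten gold leaf.

```python
# Diameter of each square's gold as a 100 nm thick disc (assumed thickness).
import math

DENSITY_AU = 19.3e3      # kg/m^3
THICKNESS = 100e-9       # m
COIN_MASS = 0.008        # kg, from the text

for square in (38, 50):
    mass = COIN_MASS / 2**(square - 1)
    area = mass / (DENSITY_AU * THICKNESS)
    d_um = 2 * math.sqrt(area / math.pi) * 1e6
    print(f"square {square}: {d_um:.2f} micrometres across")
# square 38: ~6 um, the size of a cell; square 50: ~0.1 um, below visible light
```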

Taking the hint from Feynman, the worldwide electronics industry has proceeded down the ladder, doubling the transistor count of production chips about every 18 months since the mid-1960s, by miniaturizing components. Fifty years at that pace makes about 33 doublings. This so-called Moore’s Law now has an unknown future, but the incredible impact of affordable computing on human society has been achieved by traversing just halfway down the chessboard of magnitude.

The problem of accuracy is not just in the instruments used; it is also in the amount of raw data needed to represent the information. Every new bit of information doubles the number of possible combinations, so each decimal digit of accuracy costs about 3.3 bits. To represent measurements with 22 digits of accuracy, such as the recent detection of gravitational waves, more than 64 bits of precision are needed. For comparison, all the pictures in this post were created with Blender, which internally uses single-precision floats with a significand of just 24 bits. This is enough for human vision, and requires less hardware.
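The digits-to-bits conversion is simple enough to verify, each decimal digit costing log₂(10) ≈ 3.32 bits:

```python
import math

# Significand bits needed for d significant decimal digits: d * log2(10).
for digits in (7, 16, 22):
    print(f"{digits} digits -> {math.ceil(digits * math.log2(10))} bits")
# 7 -> 24 (single precision), 16 -> 54 (double has 53), 22 -> 74 (> 64)
```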

Even with trillions of pixels in quadruple-precision accuracy, the human brain would not have the internal bandwidth to grasp both ends of the chessboard of magnitude at the same time. The most common way to display physical magnitudes is the zoom effect, as famously done in the Powers of Ten film of 1977. The zoom is a compromise: the apparent motion feels like traveling to other worlds, and there is no intuitive way to gauge the logarithmic speed of movement; but it seems no better way to convey differences of magnitude has been invented so far.

NIL EX NIHILO - NIL AD NIHILUM - IUNGIT AMOR - DISSILIUNT ODIIS - CORPORIBUS CAECIS IGITUR NATURA GERIT RES
(Roughly: “Nothing from nothing, nothing to nothing; love joins them, by hatreds they fly apart; thus nature works her deeds with unseen bodies.”)

Skylight

Looking at clouds moving in the sky is always magical to me. To think that all those forms and colors are just water and air. Sometimes I just sit on my balcony and look at the sky for an hour. I have also taken photographs of the sky, with surprisingly nice results: such cloud photos can be used for moody and mysterious backgrounds, like the one in the banner of this blog.

This video is not mine, but I wanted to link it because I like it. It is by someone far more professional at skywatching than I am:

Skylight from Chris Pritchard on Vimeo.


What are we made of?

The obvious answer is that we are made of atoms, like everything else physical in the Universe. Some of these atoms were made from other atoms inside stars that then exploded.

This is of course correct, but it misses a lot of crucial points. For example, the atoms in a living body are continuously replaced with new ones through metabolism. Like the Ship of Theseus, our bodies are continuously being rebuilt, with hardly any of the original atoms remaining in place.

Not all the atoms in the body get replaced over time. Some things take no part in normal metabolism, yet are considered parts of our bodies, like tooth enamel, or the lenses of the eyes. The DNA in post-mitotic cells, such as most brain cells of an adult, can also keep essentially the same atoms in the same molecules for decades.

But as the brain is the most metabolically active organ in the body, the atoms that remember an event from ten years ago are, for the most part, not the same atoms that experienced it at the time.

Single atoms have no memories and no identities; only large collections of them do. But not just any heap of matter is alive.

The smallest collection of atoms and molecules that can be said to be alive is the cell. We are made of cells that have grown together and specialized into the different parts of our bodies. In the biological sense, the correct “atom” of the human body is the cell.

But this division is not truly atomic either. We cannot take a human apart at the cellular level and reconstruct him again from individual living cells. (There is, by the way, one class of animal that can spontaneously survive exactly this: the sponge.) Individual cells in our bodies are not just in different positions, they are also specialized for very different tasks. Even though all cells in the body share the same DNA, they specialize by turning different genes on and off, guided by inter-cellular communication.

Individual cells in our bodies die all the time, and are replaced with new ones. In fact, most of what you see when you look at any bigger animal is the dead tissue covering its surface. Yet the organism lives.

Life is not made or designed; it exists through growth, and refuses to be confined to any single level of definition. As conscious beings, we take all the unconscious processes in our bodies for granted. As apex predators, we live on top of a pyramid of life which we mostly take for granted. As social and specialized animals, we live as dependent members of the global economy, which we mostly take for granted.

In the end it makes little difference what parts our bodies consist of at any moment. As growing things, our bodies are never finished. Nothing alive ever is.