Centrifugal Force and Gravity on the Moon

Human biology and physiology have evolved and adapted to Earth gravity. Lunar surface gravity is better than nothing, but if a more permanent human presence is ever established on the Moon, there should be some way for people to put their bodies under stronger acceleration from time to time.

The most spartan method of generating such acceleration is to use the body's own muscles. Pedaling a bicycle at 7 m/s along the inside wall of a round chamber 10 m in diameter will produce about 1 g of centrifugal acceleration on the body doing the pedaling. This has been demonstrated on Earth with the daring "Wheel of Death" and "Globe of Death" circus acts; except that those are usually performed with motorbikes, to reliably generate the higher speed needed to overcome Earth's gravity. Since lunar gravity is so much weaker, less force and speed are needed to overcome it, and an ordinary human-powered bike should be more than enough to perform such tricks on the Moon.
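As a quick check of the numbers (the stated 10 m diameter gives a 5 m radius):

\[ a = \frac{v^2}{r} = \frac{(7\ \mathrm{m/s})^2}{5\ \mathrm{m}} = 9.8\ \mathrm{m/s^2} \approx 1\,g \]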

Of course, riding a bicycle is not the most natural posture in which to experience Earth-like gravity for a brief moment. Could the same trick be performed by just running inside a spherical chamber? In principle yes, but 7 m/s is a fairly demanding running speed to maintain (or even achieve at all in lunar gravity, see below). Reducing the diameter of the running sphere will lower the required speed somewhat, but the stronger curvature makes more demands on the kinetics of the body. The centrifugal acceleration is also not experienced uniformly across the body if body length is a sizable fraction of the sphere's radius. Twisting the upper body sideways while running at maximum speed is not a natural posture either.

[Image: lunar_run2]

[DISCLAIMER: I will not take responsibility for any accidents when future residents of our natural satellite decide to attempt circus acts. Please be careful, use safety gear and go easy at first. Know your limits, and the distance to the nearest hospital.]

Achieving a tangential running speed of 7 m/s on the Moon might not be possible at all on a planar surface. NASA made extensive tests in preparation for the Moon walk, and found that test subjects could only reach a maximum running speed of about 4 m/s on a flat track in simulated lunar gravity, while they could easily sprint at 6 m/s in normal gravity.

[Image: nasa_stickfigure]

"Comparative Measurements of Man's Walking and Running Gaits in Earth and Simulated Lunar Gravity", Hewes et al., 1966. Original film footage available on YouTube: http://youtu.be/B_wLkcS50JA

In the simulator, the test subject is suspended in a harness that holds his body at an incline of 9.5° from the horizontal plane, able to walk or run forward along a linear track. Supported at this angle, he is pressed against the track (which is perpendicular to his body) with about one sixth of Earth's gravity.
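The 9.5° figure is from the Hewes report; the rest is just trigonometry, since only the component of Earth gravity perpendicular to the body presses the runner against the track:

\[ g_{\perp} = g \sin(9.5^\circ) \approx 0.165\,g \approx \tfrac{1}{6}\,g \approx g_{\mathrm{Moon}} \]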

One likely explanation for the meager running speed in lunar gravity is reduced traction. Traction, like friction, is usually modeled using a coefficient that characterizes the properties of the two contacting surfaces. This material-based coefficient is then multiplied by the normal force with which the objects are pushed against each other, to arrive at the actual traction. Since the normal force is proportional to the weight of the person at gait, it is understandable that the actual traction would be reduced to one sixth in lunar gravity, unless the surfaces of the floor and the shoes are made six times "stickier" to compensate.
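In symbols, with μ the traction coefficient between sole and floor and m the mass of the walker (ignoring any extra normal force from a suit or carried load):

\[ F_{\mathrm{traction}} = \mu N = \mu\, m\, g_{\mathrm{Moon}} = \mu\, m\, \frac{g_{\mathrm{Earth}}}{6} \]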

[Image: Apollo 11 bootprint. NASA/Aldrin, 1969]

The only footwear used so far for walking in non-simulated lunar gravity, the Apollo lunar overshoe, has a distinct (and iconic) pattern: deep transverse grooves along the entire sole, and an even rim running along the edge. The need for increased traction on the Moon can be seen in the design; it was also known that the added weight of the heavy space suit would increase the normal force and thus increase traction. But the aligned design of the parallel grooves seems to increase traction only when walking in a straight line, just like in the simulator. In reality, sideways traction is just as important in footwear, sometimes even more so.

The need for increased traction also applies to bicycle tires, if they are to be used on the Moon. And the traction pattern needs to continue along the sides of the tire as well. Because of the weaker lunar gravity, a bicyclist needs to lean deeper when making turns than when pedaling on Earth. If we make the simplifying assumption that the sideways lean angle a bicyclist adopts when turning corresponds to the direction of the net force acting on his center of mass, and further assume that the vertical component of that net force is entirely gravity, we can decompose the horizontal portion of the net force (consisting mostly of the centrifugal force of the turn) and recombine it with lunar gravity to estimate the lean angle needed for a turn taken at the same speed as on Earth.
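Under those simplifying assumptions, the lean angle θ from vertical satisfies tan θ = a_c / g, where a_c = v²/R is the centripetal acceleration of the turn. Keeping the same speed and turn radius, a_c stays the same while gravity drops to one sixth, so roughly:

\[ \tan\theta_{\mathrm{Moon}} = \frac{a_c}{g_{\mathrm{Earth}}/6} = 6\,\tan\theta_{\mathrm{Earth}} \]

As an illustrative number (my own example, not taken from the figure): a gentle turn leaned at 10° on Earth would need a lean of about tan⁻¹(6 tan 10°) ≈ 47° on the Moon.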

[Image: lunar_bike_tilt]

The same applies to all horizontal forces in any mode of traction-based locomotion, not just the sideways centrifugal force from turning. The natural body lean when accelerating and decelerating on foot, as in a shuttle run, will also become exaggerated on the Moon. (And if you have tried a shuttle run, you will have noticed that the feet are usually turned completely sideways when braking, requiring reliable traction from the footwear in that direction as well.)

It may well be that the optimal form of locomotion on the Moon will look very different from what we are used to, due to the weaker gravity. For example, a low-riding recumbent bike might be more suited to lunar environments, lowering the center of mass and reducing the distance to the locus of traction. Or, going in the other direction, it could be that walking on stilts gives better leverage to our limbs. [A lunar suit with powered stilts might also be considered a step towards a full exoskeleton. How tall would the stilts need to be to extend the lunar horizon to the same distance as on Earth without stilts, for a person whose eyes are at a height of 1.75 m from the ground? This is left as an exercise for the reader.]

Lunar interiors could also be designed with curved floors to help with turning, accelerating and decelerating; a floor that looks like a skate park could actually be more comfortable to navigate on foot than a level (but sticky) surface. Natural caves or mines could even have existing curvature that could be utilized in the architecture. One curvature pattern could be the kind of circular running bowl pictured above, where you could accelerate and decelerate at your own pace and choose the incline according to what your sense of balance finds most comfortable at your current speed.

Continuous Centrifugation Chambers

If at some point a larger, more permanent human presence is established, with lunar cities under domes or in lava tubes, its residents may want to invest in a system that generates a steady g-force without constant muscular exertion.

There are multiple reasons why such a system would be considered worthwhile. For example, the glymphatic system of the brain works during sleep, and may need Earth-like gravity to function optimally. Other biological processes may also benefit from normal gravity, for example germinating plant seeds. Lunar parents might also want their children to spend most of their time in 1 g, to ensure normal physiological development. People returning to Earth after a long time spent in space may want to slowly acclimatize to 1 g beforehand; and so on.

What would such a facility look like? One basic approach is to build a large cylindrical housing and rotate the whole thing around its vertical axis with motor power, very much like a top-loading washing machine on Earth during its spin cycle. But the internal geometry of the rotating body will need to be adapted to human habitation. [To be clear, I am not talking about a short-arm centrifuge for occasional single-person use, but a building-sized habitable chamber that constantly rotates as a whole.]

[Image: cf_slide001]

Centrifugal force is dependent on the radius of rotation and the angular velocity. Inside the rotating cylinder, the angular velocity is constant, so the centrifugal acceleration (shown in blue) just grows linearly with the distance from the axis. The rotation speed is chosen so that 1 g is reached at the maximum radius (r=10 m, say).
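For the record, the rotation rate implied by 1 g at r = 10 m follows from a = ω²r:

\[ \omega = \sqrt{\frac{g}{r}} = \sqrt{\frac{9.81\ \mathrm{m/s^2}}{10\ \mathrm{m}}} \approx 0.99\ \mathrm{rad/s} \approx 9.5\ \mathrm{RPM} \]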

[Image: cf_slide002.png]

Every point inside the rotating cylinder will experience both centrifugal force (blue) and lunar gravity (red), in their separate directions. They combine at each point by vector sum into a net force (purple), and since both centrifugal force and gravity are proportional to mass, we can divide out the mass and work with net acceleration, substituting acceleration vectors in place of force vectors.

[Image: cf_slide003]

The net force vector always points downwards, but turns more horizontal with distance from the center. At the outer edge, the net acceleration magnitude is about 1.01 g, and its direction is 9.5° away from the horizontal. [Sound familiar? Wasn't this also the angle used in the NASA lunar gravity simulator? Almost the same, because it is approximately tan⁻¹(1/6), based on the 1:6 ratio of lunar gravity to the 1 g of centrifugal acceleration at the rim.]
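The numbers at the outer edge follow directly from the two components, 1 g of centrifugal acceleration and g/6 of lunar gravity:

\[ |a_{\mathrm{net}}| = \sqrt{g^2 + (g/6)^2} \approx 1.014\,g, \qquad \theta = \tan^{-1}\!\left(\frac{g/6}{g}\right) \approx 9.46^\circ \]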

[Image: cf_slide004]

Since lunar gravity is the same at every point, the angle and magnitude of the net force vector depend only on the distance from the rotation axis, and are the same at every height along the vertical direction. We can use this fact to define two curves based on the angle of the vector:

[Image: cf_slide005.png]

The first curve (purple), called the plumb line, follows the direction of the net force/net acceleration. For the purpose of formulating the curves, we will fix the maximum radius at 10 units, as the distance where 1 g centrifugation is attained at the chosen rotation speed.

[Image: cf_slide006.png]

The second curve (green), called the water level, is always perpendicular to the net force.

[Image: cf_slide007]

As indefinite integrals, both curves can be offset along the vertical axis by any constant. This also makes sense from the point of view of the net acceleration angle, since it depends only on the distance from the rotation axis. We note that wherever the two curves intersect, they do so at a right angle.
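For reference, here is one way to write the two curves explicitly (my own reconstruction from the definitions above, not formulas from the original slides). Measuring the radial coordinate x in units where 1 g centrifugation is reached at x = 10, the centrifugal acceleration is ω²x with ω² = g_Earth/10, and lunar gravity is g_Moon = g_Earth/6. The plumb line follows the net acceleration and the water level is perpendicular to it:

\[ \left.\frac{dy}{dx}\right|_{\mathrm{plumb}} = -\frac{g_{\mathrm{Moon}}}{\omega^2 x} = -\frac{10}{6x} \;\Rightarrow\; y_{\mathrm{plumb}}(x) = -\frac{10}{6}\ln x + C_1 \]

\[ \left.\frac{dy}{dx}\right|_{\mathrm{level}} = \frac{\omega^2 x}{g_{\mathrm{Moon}}} = \frac{6x}{10} \;\Rightarrow\; y_{\mathrm{level}}(x) = \frac{3x^2}{10} + C_2 \]

The water level is a parabola (a paraboloid as a surface of revolution), the product of the two slopes is always -1, and the constants C_1 and C_2 are the free vertical offsets mentioned above.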

[Image: cf_slide008]

The net force affects static objects in the rotating frame of reference as a unified constant acceleration. This means that an actual perpendiculum, a weight suspended from a string, will align itself along the plumb line curve at that point. And the water level curve, when considered as a surface of revolution around the axis, will align with the surface of water in statically held containers. Tools such as bubble levels would also agree with the local curvature, and, perhaps more importantly, the linear acceleration sensors in the inner ear (saccule and utricle) would accept the local plumb line as vertical and the water level as horizontal.

Using these curves as guides, we can design the internal geometry of the continuous centrifugation chamber for human habitation: the local water level is used wherever horizontal surfaces are needed, and the local plumb line wherever vertical lines are needed.

[Image: cf_slide009]

The example diagram of a continuous centrifugation chamber above has 10 m radius, with curvature designed for rotation at constant ~9.5 RPM. Two toroidal floors of rooms with different levels of artificial gravity are placed concentrically, so from the inside it appears that one floor is “above” the other. This example shows two sets of concentric toroids on top of each other, for more efficient use of space. If normal human locomotion is possible inside the system, ramps, stairs or ladders could be used for moving between the different parts of the rotating chamber.

Since the rotation must be constant for the curvature to make sense, people and things need to be able to transfer in and out of the rotating chamber without stopping the rotation. [Of course, regular maintenance of the mechanical system may require stops from time to time, and the chamber should be designed to be navigable even when not rotating.] For a large enough radius, the rotation is slow enough that it would be possible to transfer by just climbing up and down a non-rotating ladder, then taking a few steps on the rotating floor to bring the body up to speed with the rotation. This is still a hazardous procedure, and not helped by the dizziness of seeing the room rotate around you. A more automated process for transferring people and goods in and out of the chamber would most likely be developed; either some kind of paternoster or ski-lift arrangement, or just a big robotic arm that both lifts and rotates carriages when needed.

The rotation would not be visually apparent to occupants of the toroidal floors, as the outer walls of the inner chamber would be opaque. Using the local plumb line as the vertical in interior design and decoration would instead give the brain the visual cues needed to accept the local net force as gravity. When standing, the inner ear vestibular system would still experience a slightly different net force direction than when lying down, because it would be closer to the rotation axis. This means the same floor would feel slightly tilted when standing up, but level when lying down. The effect is less than 2 degrees at the outermost toroid floor, but more pronounced near the central axis.
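A rough sanity check of that figure, as a small Python sketch (the 1.6 m inner-ear height is my own illustrative assumption):

import math

g_earth = 9.81              # m/s^2
g_moon  = g_earth / 6.0     # lunar surface gravity
r_max   = 10.0              # m, radius where centrifugal acceleration reaches 1 g
omega2  = g_earth / r_max   # omega^2, chosen so that omega^2 * r_max = g_earth

def net_angle(r):
    """Angle (degrees) between the net acceleration and the radial direction at radius r."""
    return math.degrees(math.atan2(g_moon, omega2 * r))

floor_angle = net_angle(10.0)        # at foot level on the outermost floor
ear_angle   = net_angle(10.0 - 1.6)  # roughly at inner-ear height when standing

print(f"at floor: {floor_angle:.1f} deg, at ear height: {ear_angle:.1f} deg, "
      f"difference: {ear_angle - floor_angle:.1f} deg")

With these assumptions the difference comes out at about 1.8 degrees, consistent with the "less than 2 degrees" above.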

There are other ways the chamber would feel different from real gravity, beyond the floors and walls being slightly curved and asymmetrically slanted. For one thing, a room would have stronger gravity on one side than on the other, and objects would weigh less when held high. Unlike on Earth, a dropped object will not follow the local plumb line until it hits the floor, nor will it fall straight. Once free, a dropped object is ballistic, subject to both the Coriolis effect and lunar gravity affecting its path. The same applies to thrown objects in sports, like basketballs or darts. Water pistols would shoot interesting trajectories to visualize these effects (and are probably safer to play with than darts, until you learn the ballistics).
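To get a feel for these trajectories, here is a minimal Python sketch (my own illustration, with an assumed release height of 1.5 m above the outer floor, approximated as the cylinder r = 10 m): the object is simulated in the non-rotating lunar frame, where it simply keeps its tangential velocity and falls under lunar gravity, and its landing point is then expressed in the rotating chamber's coordinates.

import math

g_moon = 9.81 / 6.0               # m/s^2, lunar gravity
r_max  = 10.0                     # m, outer floor radius
omega  = math.sqrt(9.81 / r_max)  # rad/s, chosen for 1 g centrifugation at the rim

# Release from rest (in the rotating frame) 1.5 m "above" the outer floor.
r0 = r_max - 1.5
x, y   = r0, 0.0                  # horizontal coordinates in the non-rotating frame
vx, vy = 0.0, omega * r0          # at release the object moves with the chamber
z, vz  = 0.0, 0.0                 # height along the spin axis; lunar gravity acts here

dt, t = 0.001, 0.0
while math.hypot(x, y) < r_max:   # fly until the object reaches the outer floor
    x += vx * dt
    y += vy * dt
    z += vz * dt
    vz -= g_moon * dt             # the only real force is lunar gravity
    t += dt

# Express the landing point in the rotating frame of the chamber.
theta = math.degrees(math.atan2(y, x) - omega * t)
print(f"lands after {t:.2f} s at {theta:+.1f} deg from the release point "
      f"(negative = against the spin), having dropped {-z:.2f} m along the axis")

With these numbers the object lands a few degrees antispinward of the release point, and the contribution of lunar gravity over the short fall is only a few tens of centimeters.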

You could try running along the circumference of the toroid floor, but you would feel either heavier or lighter depending on whether you ran with or against the rotation, and the floor would then feel tilted, because you would have added to or subtracted from the horizontal component of the net force, thus changing its direction.
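The size of the effect is easy to estimate. With my own assumed running speed of 4 m/s at the r = 10 m rim, where the rim itself moves at ωr ≈ 9.9 m/s, the effective centrifugal acceleration becomes:

\[ a_{\mathrm{with}} = \frac{(9.9 + 4)^2}{10} \approx 19.3\ \mathrm{m/s^2} \approx 2.0\,g, \qquad a_{\mathrm{against}} = \frac{(9.9 - 4)^2}{10} \approx 3.5\ \mathrm{m/s^2} \approx 0.35\,g \]

while the vertical lunar component stays at g/6, which is why the direction of the net force, and hence the apparent tilt of the floor, changes.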

If you were still not sure whether the gravity you felt was real or artificial, you could try spinning a small top to test it:

[Image: tops2b]

The same thing would happen with other fast-spinning toys like fidget spinners, yo-yos, or diabolos: as gyroscopes, they would align their rotation axis to the distant stars instead of to the interior of the centrifugal chamber. Many modern devices also contain miniature gyroscopes, and such "six-axis" sensors would need to be reprogrammed to make sense inside the centrifugal frame of reference. The semicircular canals of the inner ear balance system are not gyroscopic, however, and can be fooled when the rotation is steady and not visible to the eyes. Having said that, we have no actual experience of people walking inside centrifuges at 1 g net acceleration, so we really don't know in what subtle ways it will affect human movement, or how the angular momentum in our limbs would interact with the rotating framework (I advise against trying pirouettes, at least without some soft mats spread around). There is also no guarantee that a baby mammal can learn to 1-g-walk in a centrifugal chamber and transfer the skills and reflexes to planetary 1 g after flying to Earth or Venus.

One of the biggest engineering challenges in the operation of a continuous centrifugation chamber is that people are allowed to move freely inside it. Just as with an unevenly loaded washing machine on Earth, an uneven distribution of mass will cause aberrations in the rotation of the inner chamber. Some form of moving counterweight system needs to be devised if we want the rotation axis to stay in place and keep the inner chamber from wobbling against the outer housing. For example, automatically pumping water between alternate reservoir tanks could be used to keep the center of mass at the geometric center. (If running water plumbing is built inside the floors, it will participate in transferring mass from one part of the system to another in any case.)
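As a rough feel for the amounts involved, here is a minimal Python sketch of the balancing rule (keep the combined center of mass on the axis); the chamber mass and tank radius are made-up illustrative values, not design numbers:

# Balance rule: water moved to the tank diametrically opposite the occupant must
# cancel the occupant's moment about the rotation axis.
chamber_mass = 200_000.0   # kg, rotating structure (assumed)
r_tank = 9.0               # m, radius of the balancing tanks (assumed)

def water_to_move(occupant_mass_kg, occupant_radius_m):
    """Mass of water to shift to the opposite tank so the center of mass stays on the axis."""
    return occupant_mass_kg * occupant_radius_m / r_tank

def com_offset_if_unbalanced(occupant_mass_kg, occupant_radius_m):
    """How far the center of mass drifts off-axis if nothing is done."""
    total_mass = chamber_mass + occupant_mass_kg
    return occupant_mass_kg * occupant_radius_m / total_mass

m, r = 80.0, 10.0   # one 80 kg person walking out to the 10 m rim
print(f"pump {water_to_move(m, r):.0f} kg of water to the opposite tank, "
      f"or accept a {1000 * com_offset_if_unbalanced(m, r):.0f} mm shift of the center of mass")

For a single person the numbers are small (on the order of a hundred kilograms of water, or a few millimeters of drift), but they add up quickly with more occupants and cargo.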

[Image: wheel_a.gif]

The problem of mass distribution changing through voluntary occupant locomotion is also present in proposed orbital stations that would rotate to produce centrifugal artificial gravity. Since there is no outer housing to bump into, a crew that gathers in one section of the torus for a meeting (something that seems to happen quite often in Stephenson's Seveneves) might not even notice that the barycenter has shifted because of it. But a spaceship trying to dock at the center will notice, even if the shift is just a few centimeters.

[Image: wheel_b.gif]

Back on the Moon, there is also the possibility of letting the center of mass shift around by allowing the inner chamber to travel laterally by some margin. This could be implemented, for example, with a layer of spherical bearings underneath the whole thing. (Such a layer would also provide some seismic protection, which may be needed for buildings on the Moon anyway.) Ballast is also easier to come by on the surface than in orbit, and the structure could be made more stable by just adding fixed mass to it. Allowing the rotation axis to travel laterally could introduce other problems, however, and adding mass makes the system harder to contain if something goes wrong. For example, if the layer of spherical bearings develops a slight incline, the whole thing will start to slide downhill in lunar gravity while rotating; if it hits something, the extra flywheel energy stored in the added mass will increase the collision energy and damage.

The geometric principle of using water-level and plumb-line curvature inside a centrifuge can be applied on any planet or body with gravity. The water level curve as defined above will always be a paraboloid, but its steepness will depend on the ratio of planetary gravity to the centrifugal acceleration. For example, the same geometry as the lunar 10 m radius chamber would also work on Earth if the chamber were rotated at 23.2 RPM instead of 9.5 RPM. The faster rotation would make the transition in and out of the chamber more difficult, and, while 6 g by itself would not be immediately fatal, losing consciousness and falling down steps at 4 g probably would be. In other words, not an ideal place to raise children. [Unless the aim is to produce science-heroes on Attabar Peru, that is.]
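The Earth numbers follow from keeping the same curves, i.e. the same ratio of centrifugal acceleration to local gravity at the rim:

\[ \omega_{\oplus} = \sqrt{\frac{6\,g}{10\ \mathrm{m}}} \approx 2.43\ \mathrm{rad/s} \approx 23.2\ \mathrm{RPM}, \qquad |a_{\mathrm{net}}| = \sqrt{(6g)^2 + g^2} \approx 6.1\,g \]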


Between Fluid and Solid: Plasticity, Part 1

Plasticity as a concept is both old and new. It can offer a slightly different and fresh perspective on thinking in many disciplines. Plasticity is closely linked with creativity, and in its most fluid form is resistant to rigid formalization. Nevertheless, in this series I will attempt to keep the scope as loose as possible, while aiming to approach something that resembles a formal framework later on. This first part introduces the concept, and briefly lists some contemporary uses of it.

Introduction

About a hundred years ago, the English word 'plasticity' was more coherent in its intuitive meaning than it is today. In the 1920s, the word 'plastic' was still mainly an adjective [as in the name of the 1924 best-seller, "The Plastic Age", by Percy Marks], not yet having been turned into a mass noun for certain types of synthetic polymers. This change of language has been gradual, and largely unintended: the original (and still correct) collective noun for industrial polymers was 'plastics', formed in the same way as 'electronics' or 'mathematics', but the ending '-s' has worn off for most users, creating the current confusion of homonyms for English speakers. [This Google Books Ngram chart shows how 'plastics' peaked back in the 1940s, when synthetic polymers were first introduced.]

[Image: Google Ngram Viewer screenshot]

Plasticity derives its meaning from the original, adjective sense of ‘plastic’: in the concrete sense it is the capacity of being molded, formed, pressed into shapes. In the broad, abstract sense it means the capacity of taking on persistent changes, in any context.

[NOTE: The etymological root of the word 'plastic' can be traced to the Ancient Greek πλάσσω (plássō), with related words like 'plasma' and 'plaster'; but it may go all the way back to the PIE root *pleh₂-, with more distantly related words like 'flat' and 'platonic'.]

It is not at all obvious that matter should take on persistent forms, especially in light of modern physics. Yet the human perspective is usually to take the plasticity of the world for granted: that we can make intentional changes in our physical surroundings, and that the changes we make will persist, unless some other force is there to change them. This is the perspective of force: the Power of Man to Change The World. But there is another perspective, that of plasticity: that the capacity for change as well as persistence must already exist in the world, for any force or command to have effect. The limits of plasticity are at least as important to changing the world as the limits of force.

Memory

The capacity to record information, in history, prehistory, or the future, is possible because of physical plasticity. From clay tablets to solid-state disks, all intentional recordings of information by humans make use of the physical plasticity available in the world. In addition to intentional recordings, a lot of trace information also gets recorded without any human intervention: in fossils, in geological strata, in genetic remnants, in frozen water deposits. A big part of science consists of finding and interpreting such autonomously recorded signals from the past.

It is natural to think that some kind of physical plasticity is also responsible for human and animal memory. Indeed, there are echoes of this idea as far back as Aristotle, who compares the formation of perceptual memories to stamping an impression with a seal:

"The process of movement (sensory stimulation) involved in the act of perception stamps in, as it were, a sort of impression of the percept, just as persons do who make an impression with a seal." [450a30, On Memory and Reminiscence, translation by John Isaac Beare]

[Image: Louvre, unidentified access number mp3h8892]

Aristotle holds that elemental Water in the psyche is necessary for the recording of memories, but only in proper proportions: young souls are too wet and flowing to form lasting memories, and old souls too dry and hard to form new ones. Aristotle does not use πλάσσω, of course, or any related word; the operative word here is ἅπτω ('háptō'): to touch. (The same word is also used when Aristotle discusses the (now lost) sense theory of Democritus [442a-442b, On Sense and the Sensible], where Aristotle is skeptical of how 'touch' could deliver senses other than touch, taste, or smell.)

[NOTE: ‘háptō‘ is the etymological root word for ‘haptics’, and ‘synapse’]

This metaphor, receiving an imprint of an object through contact, eventually grew to mean abstract sensory data in general. The empiricist John Locke uses words like 'stamp', 'struck deepest', and 'imprint' to describe the reception of experience; and especially the word 'impression' for the abstract forms passed from objects to the mind.

“IMPRESSION, in philosophy, is applied to the species of objects, which are supposed to make some mark or impression on the senses, the mind, and the memory. See SENSATION.” — Ephraim Chambers, Cyclopaedia, Vol 1, 5th ed, 1741

David Hume also gave his own definition of the word ‘impression’, and tried to distinguish it from other content of the psyche. Today’s philosophers and psychologists mostly use different technical terms for sense data and the contents of consciousness, but the impression metaphor has survived in common phrases such as ‘being impressed’ or finding something ‘impressive’.

The plasticity necessary to receive a sense impression remained implicit in the metaphor, but was not made explicit until the late 19th century, when William James attributed memory to plasticity in the brain:

“What happens in the nerve-tissue is but an example of that plasticity or of semi-inertness, yielding to change, but not yielding instantly or wholly, and never quite recovering the original form, which, in Chapter [IV], we saw to be the groundwork of habit.” — William James, The Principles of Psychology, vol 1, pg 646

James emphasizes structural plasticity, instead of the plasticity of essence (like the Water element for Aristotle). In the atomistic physics of his day, it was already clear that plasticity could not be a fundamental property of matter:

“The habits of an elementary particle of matter cannot change (on the principles of the atomistic philosophy), because the particle is itself an unchangeable thing; but those of a compound mass of matter can change, [..]” — William James, The Principles of Psychology, vol 1, pg 104

Nervous circuitry was readily visible with the imaging techniques available at the time, and James uses "brain-paths" as his analogue for both the retentive and recollective aspects of memory; but he seems open to "invisible and molecular" explanations of plasticity in general as well. [Today, more than 100 years later, neuroscience has yet to pinpoint exactly where in the brain perceptual memories are stored, and with what physical mechanism(s); but I feel confident that whatever is discovered will fit inside the broad definition of plasticity used in this series.]

Constructiveness

Making deliberate changes in the world seems to be characteristic of humans, and we have an instinctive understanding of physical plasticity from the moment we start applying our tiny hands to our surroundings; so it is no wonder we take the plasticity of the world for granted most of the time. We use tools and materials, appreciating them for how well they follow our intentions and interact with our imagination.

James calls this instinct of humans constructiveness:

“Constructiveness is as genuine and irresistible an instinct in man as in the bee or the beaver. Whatever things are plastic to his hands, those things he must remodel into shapes of his own, and the result of the remodelling, however useless it may be, gives him more pleasure than the original thing.” — William James, The Principles of Psychology, vol 2, pg 426

[This constructive pride in “useless remodelling” is reminiscent of the so-called “IKEA effect” identified recently.]

Stone tools made by our early ancestors have survived hundreds of millennia, but that does not mean stone was the only material used in prehistory. Indeed, material technology has always been a world of trade-offs: hard materials like stone are difficult to shape, but very persistent. Softer materials are easier to sculpt, but do not last.

Clay was the most easily available plastic material in prehistory, especially in the fertile alluvial plains where early civilizations began. It was known by the ancients that clay was nothing more than finely ground Earth mixed with Water. While moist it could be easily molded, but when exposed to Air it would harden. If covered with a wet cloth, a hardened lump could be re-plasticized, making it possible to work on a sculpture for a longer period of time. But when clay was exposed to the fourth element, Fire, it would harden more permanently.

Sculptors were the inventors and engineers of ancient times, exploring the properties of materials and the methods of shaping and hardening them. Daedalus was a mythical sculptor associated with inventions like the saw, and with advanced materials like wax and glue. The anonymous discovery of metal refining also happened at some point in prehistory, possibly through experiments in glazing pottery.

Today, elemental theories of matter have been replaced by atomistic ones. We understand the elementary particles and fields of physics to a dozen decimal places, but this is because of the elementary nature of these fields: like a child breaking and pulling apart toys, we have progressed to the limits of our current ability to reduce matter into component parts. At every step of the way down to smaller and smaller parts, the number of different elements and forces seems to grow smaller. All ordinary matter that humans come in contact with, including all of biology, consists of combinations of just three different particles: the up quark (u), the down quark (d), and the electron (e). (This trend is taken to its logical conclusion by the grand unifiers; in string theory there is just one basic element, the string.)

Combining our fundamental knowledge with our constructive instinct, we now live in a golden age of materials science. New materials, and new ways to manufacture and apply them, seem to pop up at an ever-accelerating pace. In our minds we can imagine any combination of atoms, and some even expect that the Power of Man to Change the World will soon be realized down to individual atoms: the manufacture and application of designer molecules of anything we can imagine, with instant amplification to industrial scales.

Knowing the fundamental building blocks, the pieces that we have no power to break into smaller ones, is a starting point for such Drexlerian ambitions, but it is by no means a complete picture of the macroscopic world. Even a chemically simple substance like water is prohibitively complex to simulate accurately. In practice the study of condensed matter is still based on approximate and idealized models, instead of accounting for the movements and placements of every individual elementary particle.

Phases and phase transitions are a modern way of understanding dynamism and staticism in matter: the number of degrees of freedom in the continuous random movement of particles in homogeneous matter depends on temperature and pressure. Instead of attributing solidity to Earth and fluidity to Water, we can work a material by putting it just below its transition temperatures and pressures, guiding it between fluidity and solidity with small nudges. Materials are easiest to shape just below their "Goldilocks" temperatures: near body temperature for modelling wax, or near zero °C for making snowballs.

For better control, not all degrees of freedom can be dynamic or open; some of them need to be static or closed. This approach I will call the laminary principle. Magnetic media for information storage are an example of this principle: the tape or disk itself stays solid and unchanged, and only its ferromagnetic layer is open to phase transitions. In arts and crafts, lamination can happen with concrete layers: glazing, gluing, fixatives, etc. But it can also mean a layer of paint or ink applied on a solid base surface, or the layered way in which additive 3D printers work. Autonomously created fossil records also form in accordance with the laminary principle, as layers of geological strata dynamically form on top of previous, already solidified layers.

[Image: William James - John La Farge]

[William James, layering paint on canvas, circa 1859]

Once we move away from chemically homogeneous and idealized matter, the number of possible combinations of molecules and atoms quickly explodes into a prohibitively large space of variations. The only process we know of that has made some headway into exploring that vast molecular design space is biological evolution. By storing the instructions to replicate the design of a protein, then randomly mutating the recipes as they are replicated, evolution can compare massive numbers of variations in parallel.

Apart from the membranes inside cells, at the molecular level all processes of life come into being as growing 1D structures, long chains of molecules. [That we can understand this part of the synthesis is perhaps because it follows the laminary principle: reducing the dynamic degrees of freedom to just one dimension.] Even though these molecules grow in one direction only, somehow they form dynamic shapes in three dimensions, folding DNA into chromosomes, peptides into proteins, and RNA into what we call viruses. [I say “somehow”, because this part of the synthesis is not yet well understood.]

Life as we understand it forms and reforms itself by using physical plasticity at many different levels: molecular, cellular, in organs, as symbionts and holobionts, even in populations and ecosystems. Events at different levels also interact, both up and down the scales: genetics affects phenotypes, and vice versa; environments affect populations, and vice versa. And at the largest end of the scales, the formation of stars and planets must happen before life can take root; persistent changes, regardless of origin or timescale, imply plasticity.

Complex life on Earth exists in the relatively thin boundary layer between the earth and the sky. Living boundaries at many different levels are also essential to biology; for example, while the overall shape of an animal stays relatively unchanged, microscopic cells and their structures are continuously destroyed and replaced with new ones. The organism continuously regenerates its many boundaries and scaffoldings, metabolizing and consuming material and energy from its surroundings; until the organism itself breaks down and is consumed into the larger ecosystem.

Current Uses

In materials science, plasticity refers to the non-reversible deformation of a solid material in response to applied forces. Many materials react to small forces with elastic, i.e. reversible, deformation, but when the force is increased beyond a yield threshold, plastic, i.e. non-reversible, deformation occurs. Many related terms also exist, such as brittleness and ductility, to describe different kinds of deformation. Materials science is an active area of study in many respects. [Some odd deformations have also been discovered, for example in shape-memory alloys, which display conditional reversibility to previous shapes. These materials also display something called 'pseudoelasticity'.]

In biology, phenotypic plasticity refers to changes in physiology, morphology, or behaviour of organisms in response to identifiable variation in the environment. This is currently an active area of study. (Related terms ‘genotypic plasticity’, ‘phenotypic elasticity’ can be found in the literature, but are not commonly used.)

Neuroplasticity is the modern incarnation of the microstructural changes that William James hypothesized in chapter IV ‘Habit’ of The Principles of Psychology. This is currently an active area of study in neurology.

Future Topics

Straddling the fuzzy boundary between change and persistence, future and past, possibility and actuality, the concept of plasticity offers a new perspective on the perception, perhaps even the nature, of time.

All recordings of information must take shape via plastic media; does this mean that the limits of plasticity are also limits of information?

Neural plasticity is what enables animal learning; how is plasticity present in machine learning?

These topics and others will be covered in later parts of this series. But before that, we must build some tools and terminology by studying systems, aspects, and models, in part 2 of this series.

References

William James, The Principles of Psychology, Volume 1 and Volume 2

Superolfaction: Paths, Dangers, Strategies

Olfaction, the sense of smell, is one of the most primitive senses we have, at least in evolutionary terms. Smells enhance the functioning of memory and evoke emotions, even when we are not consciously aware of them. Our consciousness of smells is very private and subjective, and hard to describe. Since our sense of smell is generally unreliable and vague, we don't put much trust in it. Many people are completely unable to smell, and we don't treat them as if they were blind or deaf; in the modern world there is very little handicap in missing the ability to smell. [Unless your job depends on it; cooks and chefs can lose their jobs if they lose their sense of smell, for example due to a head trauma.]

We do know that there are many species of animal with much more sensitive noses. Dogs are able to perform amazing feats of chemosensing, such as distinguishing identical human twins by their smells. Through our inter-species cooperation, we have had access to canine super-senses for millennia, but only recently have we been able to compare canine senses to our scientific instruments.

What if there was a way to receive the kind of information from the environment that a dog gets through smell, through technological means? For our more accessible senses, seeing and hearing, we have technologically advanced replacements, cameras and microphones, which are already quite small and ubiquitous. Despite many attempts to make technological sensors, trained dogs are still today the best alternative for versatile chemosensing in many practical situations.

Just as we have optical instruments that surpass our eyes, we have developed instruments for detecting chemical compounds that are more accurate than even the olfactory sense of a trained tracking dog. Gas chromatography-mass spectrometry (GC-MS) is the “gold standard” used in forensic research. But such instruments are bulky, nowhere near the portability of digital cameras. For the kind of instant superolfaction that a dog is able to perform, the chemosensor would have to be about the same size as the miniature camera in a smartphone, and work continuously without consuming anything else besides a small amount of electrical power.

Some technological development is of course needed before this becomes reality; the miniaturization of photographic cameras from bulky glass-plate devices into digital miniature marvels did not happen overnight either. But the development of cameras and microphones may also have been driven by human familiarity with the senses that we rely on most. Because the sense of smell is so vague and impractical in humans, we cannot easily imagine the benefit of developing it artificially. [There seems to be at least one ongoing research project for OFET-based e-noses, called PlasticARMPit, which seems technically promising, but the silly name and the idea of using it to sell deodorant show a serious lack of imagination.]

What could you do with an aerial chemosensor attached to your mobile computing device, as sensitive as the noses of dogs or the antennae of bees, as programmable as an app?

  • Hold it above a served meal, and you will instantly see if it contains any compounds you are allergic to, or just prefer to avoid in your diet.
  • With food products and many consumables, you can check that the ingredients are what they are claimed to be, just by waving your device over them. This can help avoid product forgeries, and empower consumers to make more informed choices in general.
  • Your device automatically alerts you if it detects any harmful substances in your surroundings. If the sensor is generic enough, tuning your device to look for any specific substance is just a matter of downloading a software signature from an online store.
  • A mechanical or simulated instrument can also learn smells that would be cruel or impractical to teach to dogs, like deadly nerve toxins.
  • An olfactory device can be taught smells by machine learning, and since the results of the training are just a software signature, they can be transferred and downloaded to any similar device. Just as mobile cameras can enable people to become citizen reporters or semi-pro photographers, having an olfactory device with you at all times could enable people to become collectors of airborne chemical signatures.

Advanced artificial superolfaction of this kind is not easy to break into smaller steps. Our natural olfaction is less structured than vision and hearing, which does not help: Whereas digital photographs and audio recordings have the concept of resolution, numbers of samples per second or pixels per area, no such natural structure exists for smells.

There must be some dimensions or components to an odor, like primary colors for vision, or the five dimensions of taste [or six, if capsaicin sensitivity is included], but there are no established names for elementary odor components. In fact, by conservative estimates, there are hundreds of dimensions of smell. Through controlled double-blind testing, the minimum number of different combinations of odorant molecules that the average human can distinguish has been estimated at about one trillion ["Humans Can Discriminate More than 1 Trillion Olfactory Stimuli", Bushdid et al., Science vol 343, March 2014. The "trillion" in the title is short-scale; the estimate they present is 1.72·10^12. To put this number in perspective: if you were to sample a distinct smell every second of your life, you would die before experiencing 1% of all possible smells.]
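The back-of-the-envelope behind that perspective, assuming a generous 80-year lifespan (my own round number):

\[ 80\ \mathrm{years} \approx 2.5\times 10^{9}\ \mathrm{s}, \qquad \frac{2.5\times 10^{9}}{1.72\times 10^{12}} \approx 0.15\% \ (< 1\%) \]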

“Primary odors” must be even rarer in nature than primary colors, so it makes no evolutionary sense to name them as abstract entities, separated from the objects that produce them. We name things when we come across them, so the names of smells are also the names of the things that smell like them. Smells are harder to imitate than other sensory properties, so we easily identify them with the “essence” of a thing [insert Shakespeare quote here].

The subjectiveness of the aesthetics of scents and flavors is the reason there can be "secret" recipes for products with fragrance or taste. While the ordinary consumer can appreciate the flavor of prepared seasonings and sauces, it is an impossible task for the average taster to break it down in his mind into a recipe to recreate it. [Of course there are chefs who say they can do exactly that: taste a dish and tell exactly the ingredients and recipe used. This claim is still subjective, and like any acquired skill, subject to continuous training and calibration.] It is because the "qualia" of olfaction run in the trillions that scent can augment identity and memory. But due to the vast number of possible smells, this augmentation works in one direction only. We can recognize hundreds and thousands of scents when we come across them again, but it is hard to "imagine" smells in the mind, or recall or describe them from memory alone. We can read descriptions of what dodo meat tasted like, but unless we bring the species back, there is no way to recreate the experience. This rich subjective tapestry of scents and flavors goes well with human omnivorousness and curiosity: despite having less olfactory sensitivity than most other animals, we seem to be the only ones who regularly cook and season our food, and find pleasure in combining several elements to create new sensory experiences.

Since we don't understand the coding of odors, an efficient digital representation, like the ones we have for digital video and audio, is not a very meaningful goal for artificial olfaction. And if we want a detector to be as general as possible, and more sensitive than our native sense, we should try to retain as much detailed information as possible in the representation. This is why it makes little sense to implement artificial olfaction in the manner of a phonograph or a camera, as a series of recorded samples with no further analysis. The trace chemical composition of the air must be analysed on the spot, with access to a large database, using statistical models capable of parallel recognition, based on uncompressed, detailed molecular measurements and high local sensory data bandwidth. In other words, artificial superolfaction requires a connected smart device to be useful.

Because the design space of possible molecules is so vast, the expedience of evolution means that our sense of smell has most likely been optimized to detect the kinds of organic molecules that normally occur in biochemistry. Many of the chemicals that we naturally consider malodorous serve as warning signs, something to avoid for our own health. Of course we also attach more private emotional associations to smells, but more universal preferences seem to exist as well, encoded somewhere deep in our biochemical program code.

An artificial chemosensor does not need to be optimized towards organic molecules. Pretty much anything that produces airborne particulates and molecules could be analysed in detail with artificial olfaction. Modern environments contain increasing amounts of artificial chemicals that do not occur in nature. They could have long-term health effects, but our biological senses do not register them as harmful to us, if they smell them at all.

Just as with any technological advancement, universally available superolfaction will not be an unqualified blessing. Ubiquitous devices capable of revealing the unseen chemistry that surrounds us every moment of our lives would increase transparency and safety in many ways. But this kind of capability has dangers as well, if used by unscrupulous individuals. Just as ubiquitous portable cameras and microphones have diminished the normal expectations of privacy, ubiquitous chemosensing could be used to diminish privacy even more. Facial recognition as currently implemented requires large databases whose access can (in principle) be regulated. But the ability for everyone to sample their surroundings and record the chemical signatures of the people they meet provides an astonishing capability for "stalking" them later. Private or illegal databases of chemical signatures might pop up, which could be used to create virtual "bloodhound mobs" to target individuals for whatever reason.

Our biological selves operate at the level of biochemistry. In theory it would be possible to technosniff an individual and instantly detect:

  • what they have eaten recently (past few days even)
  • where they have been, and with whom
  • some medical conditions they have, or medications they are on

This of course requires that the device is programmed to detect such chemical signatures. And of course, even deeper analysis would be possible by taking solid samples, small pieces of dead skin and hair, and analysing them later by aerating them in front of the device.

[The title of this post obviously refers to a book by Nick Bostrom. It was not my original intention to review books on this blog, but I will make an exception here.

Now, for something completely different … ]

Comparative Review of Wiener and Bostrom

Cybernetics: or Control and Communication in the Animal and the Machine by Norbert Wiener (1894-1964) was something of a surprise bestseller in 1948, much like Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (1973-) was in 2014. Both books were also published in multiple, revised editions by their respective authors. Wiener also published a more approachable book on the topic, The Human Use of Human Beings: Cybernetics and Society, in 1950, intended more for the lay public. In 1964, the year of his death, Wiener returned to the topic with God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion. [In the following, I will treat the three books by Wiener as essentially three parts of the same work. From Wiener, I have used the second edition of Cybernetics from 1961, the revised Human Use from 1954, and God & Golem from 1964. For Bostrom's Superintelligence, I have used the second edition from 2016.]

While both books can be characterized as cross-disciplinary efforts, situated somewhere between philosophy of engineering and futurology, they seem to cover much of the same ground; in particular the relationship between humans and the powerful machines they have created, or will create in the future. With continuing advancement of technology this topic remains at least as important as it was 70 years ago.

Exploring these two authors in parallel in 2018 has been a very enlightening experience in many ways. The closeness of the topic highlights both the similarities and the differences between the approaches that Wiener and Bostrom have taken. The two authors are largely ignorant of each other's work; Wiener was of course not influenced by Bostrom, but neither does Bostrom seem directly influenced by Wiener's writings. [Wiener does get mentioned in a footnote on AI pioneers, Superintelligence pg 326]

Both authors have varied educational backgrounds. Wiener studied mathematics, zoology, and philosophy before his doctorate in mathematical logic at Harvard in 1912-1914. Bostrom explored physics, mathematics, logic, and computational neuroscience before his doctorate in philosophy in 2000. This interdisciplinary curiosity remained a strong creative force for Wiener, who attended seminars and conferences in a wide variety of fields and initiated discussions with experts in many of them throughout his career. He was consistently interested in physiology, and collaborated closely with Walter Cannon (who coined "homeostasis") and Arturo Rosenblueth when forming his ideas. Bostrom has been active in many forward-thinking interdisciplinary groups, like the Transhumanists, and has been influenced by the ideas of Eric Drexler and Anders Sandberg [as well as Eliezer Yudkowsky, who is referenced often in Superintelligence].

The backgrounds have dissimilarities relevant to their work as well. Wiener's insight into society and communication was formed by living through two world wars and seeing the political landscape change. It could be that being part of a minority (an agnostic Jew) affected his view of society and politics as well. In contrast, Bostrom's home country, Sweden, has not been directly involved in a war in two centuries, and until recently has not had much ethnic or cultural diversity. [The above should not be construed as an apology for either author's views, but as an attempt at placing them in some context.] [Even though Sweden did not fight in the wars, it did design and manufacture weapons for them. In Swedish, the word commonly used for powered missiles is "robot", introduced by Chief Engineer Tore Edlén in the 1940s.]

Even though they handle similar topics, the styles of the two authors could hardly be more different. Wiener’s prose is pithy but rambling; it is piecewise smooth and continuous everywhere, but each chapter progresses like a maximum entropy random walk. An amazing array of topics gets randomly covered in just a few pages, but the delivery never seems hurried or impatient. There are few footnotes in Wiener’s text, but there are many small allusions to outside literature. Example:

The chance of the quantum theoretician is not the ethical freedom of the Augustinian, and Tyche is as relentless a mistress as Ananke. [Cybernetics, ch. 1]

In spite of the concise style, Wiener's rhetoric gets quite passionate at times, for example towards the end of God & Golem, where he compares "gadget worshipers" to practitioners of the sin of sorcery, though his target is not clear and the implications are veiled. In a warning tone, he cites Goethe's The Sorcerer's Apprentice, The Fisherman and the Jinni from the Arabian Nights, and The Monkey's Paw by W. W. Jacobs as timeless examples of assumed mastery revealed as tragic illusions of control.

Bostrom's prose is less deep, and, as he laments in his introduction, peppered with weasel wordings ("perhaps", "likely", "could" etc.) that act to dilute his message. A classics education is marginal today, and Bostrom does not bother to reference ancient myths; his audience would not get the references anyway. But he does start his book with an unfinished fable, written in a timeless style, about sparrows and owls. The cover of the book also depicts an owl, but the half-finished fable just raises more questions than it answers. Does it refer to the owl of Athena, the virgin goddess of strategy and wisdom? Or is the owl just the biological owl, stealth predator of a specialized niche? How did Scronkfinkle the sparrow lose his eye? Would a cuckoo's egg be more to the point than an owl's egg?

Where Wiener's voice is piecewise smooth, Bostrom's arguments often have rough and jittery edges, which he manages with the aforementioned weasel wordings, as well as by oscillating between many alternative scenarios in parallel. His anxiety seems quite genuine: he really wants to convince the reader of his fears for humanity's future; but like the time-anxious White Rabbit from Alice's Adventures, he has a tendency to lead the reader down rabbit holes. Although they seem far-fetched, his scenarios are not the usual thought experiments that philosophers like to employ to illustrate an idea or a contradiction. The extreme hypotheticals that he presents one after the other are meant as actual possibilities for the future development of mankind and its instruments. But the book is not really futurology either, more like a pamphlet with an almost exhaustive list of talking points. [I will not use my unpaid time to tackle most of them in this review.]

In convincing his reader, Bostrom's rhetoric comes a bit too close to a sales pitch at times; for example the false dichotomy of extreme alternative futures [Drs Pangloss and Strangelove?]. The "limited time" anxiety is also a known method of pushing people into quick and bad decisions. The sheer exhaustiveness of the detailed scenarios and the invented terminology [partly overlapping other disciplines, like "oracle" and "singleton"] also serves more to confuse than to enlighten the reader. Whereas Wiener could devote pages to deriving mathematical formulas as well as gently educating his readers on how machines and animals actually work, Bostrom does not use his pages for basic education. He assumes that the "adults" among his readers are capable of searching the Internet for more details, if they need them.

The word "cybernetics" itself seems to have gone out of fashion, but derived terms like "cyber-crime" and "cyborg" make appearances in Bostrom's book [even "cyborgization", pg 55]. The word "cyborg" was coined in 1960, while Wiener was still alive, but I don't know what he thought of the portmanteau. The original "cyborg" was intended as a means of overcoming biological shortcomings in the context of manned space exploration, and was based on extended homeostasis. It immediately became popular in science fiction, and today it seems to mean any merging of artificial prostheses with a biological body, in space or elsewhere, with or without homeostasis. [A further degeneration are the fictional Borg of Star Trek: The Next Generation, where only the letter B remains from the word "cybernetics".]

Since the original meaning of cybernetics has become so muddled, I will take a brief detour to try to filter that original meaning out from the accumulated background noise.

Control Problem vs. Control Theory

Systems with realtime regulation, either designed or natural, have been studied for quite a while. In the Alcibiades dialogue, as recorded by Plato, Socrates refers to the art of the pilot ["kubernetike"] when discussing how to be a good ruler of men [Alcibiades, 125d]. This later became the commonly used metaphor for government in general, especially in the Latin translation, "gubernatoria". Clerk Maxwell started to formalize the theory of self-regulating devices in the 19th century, in an article on the "governors" of steam engines. Partly to avoid confusion with civic government, Norbert Wiener brought the word back to its Greek roots with "cybernetics". [Although, to add to the confusion, a similar back-formation was made in 1834 by André-Marie Ampère to describe a science of civil government.]

The basic mechanisms of cybernetics, feedback and forward amplification, were already intuitively known in antiquity, exemplified by the way a pilot controls the rudder from the back of the ship, using quite small movements to turn a large vessel. But their detailed formalization could not be made until a theory of realtime signaling or communication was available. Pure information theory is not enough for cybernetics; realtime latency and bandwidth must also be characterized.

The necessity of feedback is due to the inherently incomplete knowledge of the controlling agent. Instead of a copy of the actual world it acts in, the control agent has just a map, a simplified model of the world that it can internally refer to. However accurate its internal model may be, no agent has a reliable map of the future. An agent with telos, acting in a world with incomplete information, needs to know whether it has succeeded or failed, or how close or how far it has come to reaching its goal.

Here is how Wiener puts it:

[…] control of a machine on the basis of its actual performance rather than its expected performance is known as feedback [Human Use, pg 24]
[…]The simplest feedbacks deal with gross successes or failures of performance, such as whether we have actually succeeded in grasping an object that we have tried to pick up, or whether the advance guard of an army is at the appointed place at the appointed time. [pg 59]

It is important for the agent to get accurate feedback as soon as possible, so that it can correct its internal model and avoid wasting resources on a faulty target. The fundamental delay of the round trip between taking an action and receiving the results of that action is called latency. The concept of latency is very well known to players of online multiplayer games, where having too much network latency towards the game server puts even the best players at a disadvantage. Single-player games and virtual reality helmets also have maximum latency requirements for the feedback loop between moving a controller and seeing the results, to make the virtual world seem real to the player, and not cause nausea. Although it is desirable to minimize latency, it cannot be completely removed in physical systems, ultimately due to the maximum transfer speed of information, the speed of light.

Oscillation due to feedback latency is a realtime phenomenon which can be seen in many natural systems. In addition to the intention tremor described by Wiener, cycles of boom and bust in the economy, or the cycling of political parties into opposition and out of it, can be seen as large-scale examples of delayed feedback oscillation. In a wide sense, even natural selection in biology can be seen as a form of feedback from the environment, operating at population scale with generational latency. [Plato also describes the “kuklos” of history, a larger, slower oscillation of socio-historical forces where individual human lives are just hapless passengers.]
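
To make the mechanism concrete, here is a small sketch (in Python, with made-up numbers, not anything taken from Wiener) of a proportional controller whose feedback arrives several time steps late; past a certain delay the corrections chase stale information and the system oscillates instead of settling:

```python
# Minimal sketch of delayed-feedback oscillation. All numbers are illustrative.

def simulate(delay_steps, gain=0.5, steps=60):
    target = 1.0
    x = 0.0                                  # current state of the system
    history = [x] * (delay_steps + 1)        # past states visible to the controller
    trace = []
    for _ in range(steps):
        observed = history[-(delay_steps + 1)]   # observation is delay_steps old
        x += gain * (target - observed)          # corrective action on stale data
        history.append(x)
        trace.append(x)
    return trace

print(simulate(delay_steps=0)[-3:])          # settles smoothly onto the target
late = simulate(delay_steps=4)
print(min(late), max(late))                  # swings ever wider around the target
```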

The living animal takes feedback through the senses. Innate senses even seem to have built-in valuation heuristics, indicating short-term success and failure through pleasure and pain. Pain is perhaps more fundamental, and required even in the safety-conscious modern world for long-term survival. Humans have complex emotions, able also to imagine and empathize with pleasures and pains not currently present. For an agent wishing to act in the real world, the contingency of receiving either pleasant or painful feedback is unavoidable. Medical and technological means can short-circuit the feedback loop, replacing real pleasures and pains with virtual ones. This usually results in addiction.

Even simple animals with innate senses move towards pleasure and away from pain, seeking local optima of their environment with little need for conscious memory. These pleasures and pains are rarely experienced with absolute idempotency, however; when the need has already been satisfied, less pleasure is received from e.g. eating more food. This natural lessening of urgency can also lead to orbits or oscillations in long-term patterns of behaviour, with interspersed periods of attraction and avoidance. In complex social situations the multivariate fears and desires, avoidances and attractions, of individuals are signaled in various ways, leading to complex guessing games of what will become desirable or undesirable in the future. This kind of marketplace signaling of buying and selling, delaying and preparing, works ultimately on probing and then getting feedback; there is no a priori way of deciding market prices for non-essentials [which do often display oscillations at many interval scales].

A modern version of the ship pilot metaphor is driving a car. Forward amplification of control signals is called power steering and power brakes; feedback takes place through a multitude of instrument displays, but perhaps more importantly through the driver’s vision of the road, as well as the accelerations felt through the seat of his pants. When a driver presses a pedal, the complete feedback loop progresses through the wheels and their contact with the surface of the road, and makes the whole vehicle accelerate or decelerate. When the contact between wheel and road is temporarily lost, for example when driving over a slick patch, the driver feels the loss of traction as directly as if his own feet were slipping.

With self-driving cars having the potential to prevent millions of accidents as well as transport goods more economically, they seem on the verge of becoming a reality. Enhanced computer vision systems have already become standard for assisting the driver with lane drift or emergency braking. In these scenarios the driver is still sending all the forward signals, but the computer may censor them for safety, and preferably indicate to the driver that he is about to do something stupid. Eventually it may be that the computer vision and safety logic gets advanced enough that the forward signals will also be triggered by a computer instead of the human. There could still be an overlap period when a human driver is needed as a backup, in case the machine makes a mistake. But switching from automatic driving back to human driver is not as easy as a driving instructor taking over the controls from a student [which is itself not that simple, or foolproof]. The driving instructor continuously looks for signals from the human student, to see when the driving student is about to do something dangerous. These signals are non-verbal, biological signals that humans are evolved to interpret as gestalts, such as direction of the eyes or body language. What kind of signaling should a self-driving car be designed to use, to show to the human backup driver that it is about to do something stupid and the human should take over? [Marketing will of course claim that their self-driving car never behaves stupidly, but please sign all these waivers saying that you are prepared to take over the wheel in case of emergency.]

The theory of games, founded by von Neumann, has the idea of payoffs or “sums”, valuations which the players use to formulate their goals of attraction and avoidance during play. In both the cybernetic model and the game theoretic model there are agents trying to achieve goals with incomplete and delayed information; one difference is that in game theory the incompleteness of information is caused by agents purposely not revealing their strategies to the other players, and the delays are imposed by the rules of play. In the cybernetic model the incompleteness of information is a result of entropic noise, an inescapable fact of the world, and delay is ultimately due to the maximum speed of information transfer, another inescapable fact of the world.

It may come down to the temperament of individual thinkers, how much uncertainty they are prepared to live with, that informs their intuition on epistemic humility. Here is Wiener’s characterization of the two sources of incomplete knowledge:

The scientist is always working to discover the order and organization of the universe, and is thus playing a game against the arch enemy, disorganization. Is this devil Manichaean or Augustinian? Is it a contrary force opposed to order or is it the very absence of order itself? The difference between these two sorts of demons will make itself apparent in the tactics to be used against them. [Human Use, pg 34]

Most, if not all, of the descriptions of the superintelligence in Bostrom’s book cast the AI decidedly into the role of the Manichaean devil. At least in this book, Bostrom does not seem much interested in how intelligence (natural or otherwise) arises, only in what kind of opponent it will turn out to be when it inevitably does.

As an example, Bostrom presents “domesticity methods” for containing what he calls “oracles”:

For example, we might stipulate that it should base its answer on a preloaded corpus of information, such as a stored snapshot of the internet, and that it should use no more than a fixed number of computational steps. [Superintelligence, pg 178]

Using a preloaded snapshot as the only input of a process removes entirely what Wiener calls feedback, the ability of an agency to know if it has succeeded or failed in its goal, or how close it has come to achieving it. Instead of using a pilot mechanism, Bostrom expects AI to work by dead reckoning.

The best way that we currently know how to build an “oracle” like this is machine learning: basically a process of curated input with a feedback valuation function telling how close the agent came to succeeding. Rather than removing the valuation feedback completely when learning is considered “finished”, a less severe way of isolating the agent would be to add artificial latency to its inputs. [Latency management is actually used in high-frequency trading, to ensure a level playing field between automated agents.]

Information As Message – Contingency And Interpretation

In one of the more speculative passages in Human Use [ch. 5, “Organization as the message”], Wiener posits that it would be theoretically possible to transmit a living organism, even a human being, via telegraph. This is handled both as a thought experiment to clarify metaphysical intuitions about human identity (discussed later in more detail by Derek Parfit), and as an exploration of the developing field of information.

A pattern is a message, and may be transmitted as a message. How else do we employ our radio than to transmit patterns of sound, and our television set than to transmit patterns of light? It is amusing as well as instructive to consider what would happen if we were to transmit the whole pattern of the human body, of the human brain with its memories and cross connections, so that a hypothetical receiving instrument could re-embody these messages in appropriate matter, capable of continuing the processes already in the body and the mind, and of maintaining the integrity needed for this continuation by a process of homeostasis. [Human Use, pg 96]

[…]This takes us very deeply into the question of human individuality. The problem of the nature of human individuality and of the barrier which separates one personality from another is as old as history. [pg 98]

[…]Let us then admit that the idea that one might conceivably travel by telegraph, in addition to traveling by train or airplane, is not intrinsically absurd, far as it may be from realization. The difficulties are, of course, enormous. It is possible to evaluate something like the amount of significant information conveyed by all the genes in a germ cell, and thereby to determine the amount of hereditary information, as compared with learned information, that a human being possesses. In order for this message to be significant at all, it must convey at least as much information as an entire set of the Encyclopedia Britannica. In fact if we compare the number of asymmetric carbon atoms in all the molecules of a germ cell with the number of dots and dashes needed to code the Encyclopedia Britannica, we find that they constitute an even more enormous message; and this is still more impressive when we realize what the conditions for telegraphic transmission of such a message must be. Any scanning of the human organism must be a probe going through all its parts, and will, accordingly, tend to destroy the tissue on its way. To hold an organism stable while part of it is being slowly destroyed, with the intention of re-creating it out of other material elsewhere, involves a lowering of its degree of activity, which in most cases would destroy life in the tissue. [pg 103]

The notion of transporting people as signals is of course fascinating, and a fictional version of it became famous in the TV show Star Trek, a decade and a half after Human Use was first published.

But just as the next generation of Star Trek took the ideas of the original show even further, so has the idea of people as information been taken further in Bostrom’s time. Bostrom spends a lot of effort on the simulation argument, questioning the distinction between a real physical process and a computational simulation of it. Again, Bostrom takes the wildest of thought experiments at face value, even ballparking the number of human lives that can be simulated using the “spare” bits of matter and energy scattered around in space. Since this is obviously a very high number, compared to the number of humans that have ever lived, the probability is high, so the argument goes, that you (the reader) are in fact a computer simulation instead of a real physical person.

This kind of a thought twister is of course a daily exercise for the professional philosopher, who can entertain six impossible thoughts before breakfast, but in the less mentally flexible population it can take a more sinister guise: a mindgame pitch for a cult, replacing outdated ontologies of fairies and aliens with “science” and “computers”. From the looks of it, there are plenty of people on the Internet quite serious about an upcoming digital rapture (of course only for those who have demonstrated proper subservient good will towards our future robotic overlords).

In the area of theory-making, intellectual humility is not always proportionate to the significance of new theories. Nor is it always the people who first formalize a theory who foresee its most important future significance. We must be very careful in our fascination with life as information, as a pattern. Could this become an excuse for lazy thinking, in a similar way that the transporter of Star Trek sometimes became an excuse for lazy writing?

Today, information is everywhere; digital, ubiquitous, mundane even. But just because it is everywhere and easily available does not mean that its nature is universally understood. We treat information like a commodity or a fuel, flowing like a substance between containers, measuring it in bandwidth and storage capacity, compressing it with algorithms to save storage space. But in the communication theory of information, every last bit of information represents a contingency, a choice between two equally likely possibilities. Such a thing has no independent existence; it is always relative to a specific context.

To cover this aspect of communication engineering, we had to develop a statistical theory of the amount of information, in which the unit amount of information was that transmitted as a single decision between equally probable alternatives. This idea occurred at about the same time to several writers, among them the statistician R. A. Fisher, Dr. Shannon of the Bell Telephone Laboratories, and the author. Fisher’s motive in studying this subject is to be found in classical statistical theory; that of Shannon in the problem of coding information; and that of the author in the problem of noise and message in electrical filters. Let it be remarked parenthetically that some of my speculations in this direction attach themselves to the earlier work of Kolmogoroff in Russia, although a considerable part of my work was done before my attention was called to the work of the Russian school. [Cybernetics, Introduction, pg 10]

The contextual, contingent and relative nature of information-as-message is what enables techniques like compression algorithms and cryptography: without a correct interpreter, or the correct key, information is just meaningless random noise, an incoherent stream of bits. This processing is purely statistical. As Shannon puts it: “[…] semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.” [C. E. Shannon, 1948, emphasis in original]
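
As a toy illustration of information-as-selection (my own example, not Shannon’s or Wiener’s): the amount of information carried by a message depends only on how many equally likely alternatives it was selected from, not on what the message “means”:

```python
import math

# Information in one selection from n equally probable alternatives, in bits.
def bits(n_alternatives):
    return math.log2(n_alternatives)

print(bits(2))        # 1.0  -> a single yes/no decision
print(bits(26))       # ~4.7 -> one letter drawn uniformly from the alphabet
print(bits(52))       # ~5.7 -> one card drawn from a shuffled deck

# The same eight characters can carry very different amounts of information,
# depending on the set of possible messages they were selected from:
print(8 * bits(26))   # ~37.6 bits if any eight-letter string was possible
print(bits(2))        # 1 bit if only "ACCEPTED" or "REJECTED" could be sent
```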

Part of the allure of information-as-existence comes from developments in modern physics, as well as trends in philosophy of mind. Rather than giving up on metaphysics as unsolvable, many people seem attracted to minimalist ontologies, often some version of “everything is just information”, or “everything is mathematics, nothing else exists” [one proponent of the latter is Max Tegmark, a compatriot of Bostrom’s]. [Unfortunately, the concept “every/all” always holds the seeds of a paradox.]

Another explanation for the category mistake is the natural desire for an ultimate terra firma, a fundamental something that everything else can (at least in principle) be mapped to; the original language of the cosmos at its most profound level. If only we could access such a philosopher’s touchstone, why, we could (in principle) map the semantics of all that we can say or see in terms of it. Unfortunately that is just not the nature of mathematical information theory. The only touchstone is noise, its statistical properties in time and space, and not some universal a priori Rosetta stele.

Information(-as-message) is meaningful only for agents with incomplete knowledge of the world. The cybernetic agent does not know everything, so it opens its receptors for incoming data, pokes things to see how they react, oscillates to find out what moves and what doesn’t. The perception of time as flowing is also a common source of incomplete knowledge; although statistically at the large scale the world evolves with mathematical certainty, the small scale agent perspective includes significant contingencies, things that may or may not happen, that have consequences for the agent. The semantics that arise from all this are those of survival, of countless trials and errors recorded in genetic memory.

This seems to be a crucial difference in the thinking represented by Wiener and Bostrom, and is repeated in many of the scenarios they discuss. One example is “breaking out of the simulation”, where an intelligent agent, either natural or artificial, is able to deduce that its senses are not dealing with a fundamental reality, but rather a virtual reality simulated by some process. An earlier variant of this thought experiment was famously given by René Descartes: a demon intercepting and falsifying all sensory inputs [modernized later as the brain in a vat]. Unable to accept limits to human knowledge and intellectual capacity, Descartes posed metaphysical dualism as a way for humans [and only humans, not other animals] to bypass the limited/incomplete nature of existence as a physical agent in a physical world.

Bostrom’s loose definition of intelligence does not rule out a possible ability to access reality at a more fundamental level: “[…] superintelligence (or even just moderately enhanced human intelligence) would outperform the current cast of thinkers in answering fundamental questions in science and philosophy.” [Superintelligence, pg 315] This ability has nothing to do with more accurate sensory inputs, it is just a result of “more” thinking, even in complete absence of feedback.

Decidability And Algorithmic Agnosis

All agents are limited in knowledge in a fundamental way. The currently accepted limits of a priori logical certainty are expressed by the Church-Turing thesis and the decidability results built on it. These draw a theoretical boundary around what facts can be proven from a set of given inputs to a theoretical model. Both the model and the set of inputs can be considered “a priori”, in the sense that their empirical validity is not in question; the only thing at issue is the breadth of knowledge that can be logically inferred from a given set of assumptions.

Since Euclid, mathematical proofs have been reduced to rigorous, elementary steps, so simple that even the most passionate skeptic can accept each of them. Combining such steps using strict rules of logic makes the complete proof acceptable, a secondary fact derived from previously accepted facts. The complexity of mathematics lies in the fact that it is far from obvious which secondary facts are both true and provable.

Computers were first built for mathematical purposes. A traditional computer performs each elementary operation mechanically, and follows the rules of logic much more strictly than any human mathematician could. The reason we trust the results of a mechanical computation is the same as for accepting a mathematical proof: each elementary step is simple, and the rules for combining the steps are followed rigorously. Computers have pushed human error out of the details, into the bigger picture: inputs and models. Even so, we are still stuck in the centuries-long habit of algorithmic decomposition as a model of ascertainment: if we trust each atomic step of a process, and the connection that each atomic part has with its neighbors, we can trust the results of the computation equally. [This is why so much emphasis was put on “computer error” in the early days of computers. Today computation is so cheap that we can just run our calculations redundantly, in the rare case that random hardware faults make a difference.]

Kurt Gödel was the first to show that any sufficiently non-trivial theoretical model is logically incomplete: that is to say, the complete scope of a priori knowledge cannot be known a priori. This untidiness in the systems that provide us with the most exact and universal knowledge available, mathematics and logic, seems to expose an epistemic loophole of some kind, a blind spot within the foundations of metaphysics.

In the same breath as we marvel at this fundamental incompleteness of logic, we note that for practical use of mathematics and logic, it matters very little. Before decidability can even be posed as a question, we must already have chosen our theoretical model and other a priori assumptions. It makes a kind of sense that a limited set of choices can only lead to a limited set of possibilities. But as finite agents, what choice do we have?

For a cybernetic agent, risks and probabilities are much more useful than limiting activities to just provable a priori certainties. A cybernetic system is a pilot mechanism, working by default outside of terra firma, relying on environmental feedback, continuously updating its maps with fresh data as it navigates the flows of winds and seas. It does not matter so much what can be done with a fixed model and input; what matters is how fast the continuous stream of input data can be processed and integrated into the live model, while staying within the parameters of a somewhat negotiable set of goals. Life in the pilot seat is a world of continuous choosing; there is usually some freedom of choice, but rarely the freedom of taking no risk at all.

Finding some meaning within this world of choice is not possible without true contingency. Many thinkers have gone on record stating that the freedom of will that people think they have is an illusion; in the true picture of physics, they say, the world is completely deterministic, and can in principle be predicted down to the least detail, including the choices any single person makes at any single instant. Of course we don’t have such a coherent theory of everything yet, and we cannot access the initial conditions of the Universe, but these things must exist nonetheless, even though we currently don’t know them. Right?

But if the world is nothing more than the mathematics that describes its regularities and irregularities, it must be either trivial, incomplete, or infinite in its axioms, in accordance with Gödel, Church, and Turing. If physics is not trivial, then its determinism is something that a finite mathematician, biological or mechanical, will never be able to predict with complete accuracy and certainty.

Philosophy of Engineering: Limits

Many people are passingly familiar with the Philosophy of Science. Maybe not by that name exactly, but at least the important questions that fall in its area of responsibility will sound familiar: What are the limits of scientific knowledge? How can we be certain that a theory is correct? Widely accepted answers to these questions are readily available.

When it comes to the Philosophy of Engineering, however, embarrassing gaps of familiarity appear. What are the limits of man’s technological abilities? Is technological progress inevitable? What are the most popular ethical frameworks used in engineering? No authoritative answers seem to be available, least of all from engineers themselves.

But what exactly is the relationship between engineering, technology, and science? Science can give definite answers about the absolute limits of human possibilities, and the limits of engineering must fall inside them. But there are subtleties to these questions, related to actual human collaborative capabilities, as well as economic and political contingencies of societies in general.

For Bostrom, technical and scientific development seems a matter of discovery: the more effort is made, the earlier a result can be found. “Think of a “discovery” as an act that moves the arrival of information from a later point in time to an earlier time. The discovery’s value does not equal the value of information discovered but rather the value of having the information available earlier than it otherwise would have been.” [Superintelligence, pg 315, from the context “value” means instrumental, strategic value]

Wiener had a fairly unique perspective in that he was a classically trained philosopher who went on to have a long career at MIT. He never called himself a “philosopher of engineering” [the word “engineering” probably did not have the right connotations for him], but he did dictate a draft for a book that is relevant to the human activity that we now choose to call engineering. It was posthumously published as Invention: the Care and Feeding of Ideas in 1989.

The largest computer that Wiener envisions in his texts is the size of a skyscraper, with as many transistors as the human brain has neurons [God&Golem, ch 6]. This is of course nothing compared to the computers imagined today. Although the minimum size of a working transistor has shrunk enormously since then, we still find it necessary to envision even larger computers. In fact, what Bostrom calls the “cosmic endowment” of mankind is all the accessible matter and energy in the Universe essentially turned into a giant distributed computer. What kind of calculations it will be used for is the most important decision that mankind will have to make, according to Bostrom.

The general idea would be to send small but super-smart robotic agents out into the galaxy, where they would self-replicate using any raw material they come across, multiply and spread out in all directions based on some prewritten instructions. Pretty soon (in post-human terms) the sky would be filled with Dyson Sphere Computroniums, all just to simulate endless trillions and trillions of happy human subjects (remember, Bostrom cannot distinguish simulations from real persons).

But if life on Earth is nothing exceptional, and billions of years have passed since the Big Bang, why has no other space civilization done the same thing? Why are our telescopes not detecting Kardashev scale objects in the endless sky? While physicists are good at estimating the physical makeup of stars and galaxies [well, apart from Dark Matter], it is another thing to look at that matter as modeling clay, to measure the plasticity available for molding it into useful shapes. While there is no such general theory of physical plasticity yet in existence, there are some things that can be said about the prospects of space engineering.

The average density of visible space is that of a very thin gas. Although there are sizable chunks, islands of solid matter even, on the larger scale these chunks behave like particles of a gas. What we see as “fixed” stars in the night sky are nothing of the sort; our lives are just too short to detect their constant motion.

Arranging the particles floating in this thin gas of space is not dissimilar to Maxwell’s demon sorting the gas particles in a chamber into hot and cold compartments. Arranging all or even most of the matter floating in space is impossible for much the same reason that Maxwell’s demon is, even if the demon is allowed to use self-replication to accomplish its task.

To clarify, I do believe that self-replicating machines are a possible way to “seed” nearby systems by diffusion, if the machines are small and capable of adapting to local conditions and power sources. (In fact, this is a possible origin of biological life on Earth, with the self-replication and adaptation still ongoing in the form of biological evolution.) But I do not think that this method can be used to produce Kardashev scale artifacts, at least stable ones. At this point in the evolution of the Universe, the stable large-scale shapes are the ones that we see out there. [There can still be some interesting “structures” that we do not normally see. For example, astrospheric and galactic current sheets are a fairly new finding.]

The part of our “cosmic endowment” that is plastic enough for us to easily mold with any future technology is not going to be a very large portion. And like anything we build or grow, it will not last forever. The statistical nature of thermodynamics means that the large-scale shape of the future of the Universe remains immovable. Parts of intelligent life may survive for very long times in one form or another, not in any kind of total dominance of all matter and energy, but in the nooks and crannies of the Universe, as statistically insignificant islands of order in a vast sea of chaos.

Yet we may succeed in framing our values so that this temporary accident of living existence, and this much more temporary accident of human existence, may be taken as all-important positive values, notwithstanding their fugitive character.

In a very real sense we are shipwrecked passengers on a doomed planet. Yet even in a shipwreck, human decencies and human values do not necessarily vanish, and we must make the most of them. We shall go down, but let it be in a manner to which we may look forward as worthy of our dignity. [Human Use, pg 40]

Philosophy of Engineering: Ethics

In the widest sense, engineering or technology is the ability to purposely change some part of the physical world in some way. It consists of techniques, skills, arts, tools, and crafts, which make use of the available plasticity in the physical world to effect the changes. It can be as simple as collecting seeds from plants and planting them in a different place, or as complex as collecting pieces of rock from an asteroid and bringing them back.

It is perhaps nice to imagine that the changes made by technology are expressions of human spirit or power, the will of a genius imposing his order onto a chaotic world. In reality, the forms of plasticity that exist in the world usually come with costs, and take a lot of trials and errors to master. Technology is a world of engineering trade-offs, and if it becomes “indistinguishable from magic” for some observers, it is because someone has purposely hidden all the wiring. [Incidentally, some of the mechanical inventions of Leonardo were inspired by his secret dissections of human and animal carcasses. More recently, neural network classifiers were also inspired by studies of anatomy; the “exposed wiring” of Nature’s engineering.]

Is the creativity of a poet the same kind of creativity that an engineer has? Are the thousands of programmers writing the software for your self-driving car artists? Judging from current parlance, software companies are mainly looking for “coders”, not “rock stars” or other divas. Writing software is probably the most flexible form of expression yet invented, and is available to pretty much everyone willing to learn. Many have used their creative impulses to write open source software, outside of any established software company. Within technology companies, creativity can also flourish with “skunkworks” budgets, which are unfortunately becoming a rarity. [Many of the most widely used software tools today, from Unix to C to Linux, originated either as skunkworks or as open source projects.]

In a way, the layering of “software” on top of hardware represents the ultimate idea of plasticity as pure, noise-free signals, communicated as intangible information. But it also reminds us that there are two sides to plasticity, that something must remain unchanged (hardware) even when all that can be changed (software), changes.

Even putting aside the unavoidable trade-offs, mankind is not today in a position to plan its use of technology very well. The collective society seems mostly drawn by economics and marketing, when it comes to applying and developing technology. If technology is about using the inherent plasticity around us, and discovering methods to change the physical world in more ways, should these changes not be planned ahead, instead of reacting to one technology related crisis after another?

How does humanity as a whole make plans and choices? Meaningful one-to-one feedback sessions between all of the billions of people on Earth are not feasible [even less so between 10^58 simulated humans]. Useful information has a way of diffusing through even large societies and cultures, via marketplaces of goods and ideas. The existence of these global markets is also an enabler for the progressive development of technologies, through market competition. [It should be noted that market competition as an engine of progress is not infallibly qualitative, as it deals almost exclusively in quantitative terms.]

Many systems of rules-based rule, both ancient and modern, have been developed to govern individual nations and other institutions with some measure of fairness [and of course with powers to enforce the rules themselves]. Some have searched for a universal theory of ethics by which civic government could be run deterministically, like a programmed machine. Although people and animals are capable of feeling pain, a simple calculus of total suffering at one particular moment in time is not a very satisfactory measure of success or failure, but it has been suggested by many as a possible way to arrive at universal ethics.

As mentioned previously, experiencing pain is itself a part of the short-term cybernetic guidance system that evolution has built inside us. We are quite capable of ascending above momentary pains, to suffer in the short term for a worthy long-term goal. The stick-and-carrot of short-term pain and pleasure cannot be reverse engineered to reveal what our long-term goals should be, much less any collective goals that transcend the lives of individuals. Besides, our technological means can already short-circuit these atavistic signals in the body, and close our minds inside a circle of addiction. But I don’t think anyone would consider enclosing all human minds inside virtual feedback loops a satisfactory way of eliminating human suffering.

A goal that I probably share with both Wiener and Bostrom is to keep humanity around in some form or another, for a while longer at least. This must mean keeping some lookout for the adverse effects, long-term or short, of our collective technologies. When changing the physical world around us, we should try to keep our options open for future changes as well; not paint ourselves into a corner, or destroy the ladders we just climbed up, or burn every bridge we pass along our triumphant march forward. Future generations will not accept a pre-programmed existence any more than we would; true contingency is necessary for meaningful living.

Technology is not a free lunch; there are always some hidden trade-offs that we need to be aware of when making decisions. Marketing can try to hide the downsides, or pass the secondary costs on to the public. For this reason, transparency and regulation will be more important than ever as technology keeps progressing.


What would a real cloud city look like?

There have been floating cities in fantasy stories for ages. From Hyrule to Urth to Malazan, a fantasy world as imagined today is not really complete without some airborne structures slowly drifting over the landscape. A massive, mountain-sized citadel suspended in mid-air can also be a breath-taking visual element in the hands of a skillful artist, like these three:

cloud cities 1

Flavio Bolla, J Otto Szatmari, Robert McCall

My aim in this article is to try to visualize what a real cloud city would look like, either on Earth or on another planet with an atmosphere (like Venus for example), while staying within the known limits of physics and material science.

That means no antigravity, tractor beams, unobtanium, or anything else indistinguishable from magic. This still leaves us with many forms of physical lift and thrust, most of which consume energy to keep a mass airborne. The only physical form of lift which does not require continuous power, and is therefore perfect for continuous suspension in mid-air, is buoyancy.

What exactly is buoyancy? A scientific description of it will usually include Archimedes’ Principle (with the famous shout “eureka!”), and give mathematical formulas for calculating the force in different scenarios [e.g. this page from NASA: https://www.grc.nasa.gov/www/k-12/WindTunnel/Activities/buoy_Archimedes.html].

For an aspiring architect wishing to design structures that float in mid-air, there is a simpler, more practical way to define buoyancy:

(1) The density of a (neutrally) buoyant body is equal to its surroundings.

In other words, for a Cloud City to stay aloft by buoyancy alone, its total average density must be the same as the air that surrounds it. If you determine the total mass of your Cloud City, you also determine its total volume at a given altitude and air pressure, and vice versa.
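
To put rough numbers on rule (1), here is a small back-of-the-envelope sketch (Python; the city mass is an arbitrary assumption of mine, roughly that of a large cruise ship):

```python
# Rule (1) in numbers: a neutrally buoyant body displaces its own mass of air.

rho_air = 1.225              # kg/m^3, standard sea-level air density
city_mass = 50_000_000.0     # kg, assumed total mass of the Cloud City

required_volume = city_mass / rho_air
print(f"required total volume: {required_volume:.2e} m^3")
# ~4.1e7 m^3, i.e. a cube roughly 345 m on a side, with zero margin, at sea level;
# higher up, the thinner air demands an even larger volume for the same mass.
```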

The art of making buoyant structures that carry things denser than air (like people, plants and water) is to attach them somehow to materials and shapes that are lighter than the surrounding air. If you can attach your denser-than-air components to lighter-than-air components so that their sum averages out to exactly-as-dense-as-air, the total package becomes neutrally buoyant.

This art of attaching heavier things and lighter things to each other gives the designer freedom to use any materials available, but rule (1) remains as a system-level constraint in any design. As long as you keep the total volume/mass ratio constant, you can distribute the mass of your city as you like, leading to the rule of thumb corollary of (1):

(2) Buoyancy = weight distribution.

In 1670, long before the first manned flight, Francesco Lana of Brescia published the design for his aerial ship. In his design, the lighter-than-air elements were large air-tight spheres with all air pumped out. These vacuum spheres were most likely inspired by Magdeburg hemispheres, an invention by Otto von Guericke demonstrated in 1654.

terzi_aerial_ship

P. Francesco Lana

This type of “vacuum airship” has never been demonstrated at scale, because no existing materials have the sufficient strength-to-mass characteristics to form stable vacuum containers that are lighter than air. [We are however living in the golden age of material science, with new materials being engineered constantly. Vacuum airships are not completely ruled out as a possibility yet.]

All buoyant airships that have been built and flown have been based on “lifting gases”, like helium. Due to the thermodynamic nature of gases, the density of a volume filled with gas at a given pressure and temperature is closely related to the molar mass of its constituents. Hydrogen and helium, the first elements of the periodic table, have the smallest molar masses [molecular hydrogen, atomic helium], thus they are the gases with the most lifting power; in other words their natural densities at a given pressure are the smallest of all known gases.
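
The connection between molar mass and lift can be sketched with the ideal gas law; the numbers below assume standard sea-level pressure and temperature:

```python
# Gas density from the ideal gas law: rho = p * M / (R * T).
# Sea-level conditions assumed; a real lifting gas also carries some impurities.

R = 8.314           # J/(mol*K)
p = 101_325.0       # Pa
T = 288.15          # K

molar_mass = {"H2": 0.002016, "He": 0.004003, "air": 0.028964}   # kg/mol
rho = {gas: p * M / (R * T) for gas, M in molar_mass.items()}

for gas in ("H2", "He"):
    lift = rho["air"] - rho[gas]        # kg of lift per m^3 of lifting gas
    print(f"{gas}: density {rho[gas]:.3f} kg/m^3, lift {lift:.2f} kg/m^3")
# Hydrogen lifts about 1.14 kg per m^3 and helium about 1.06 kg per m^3,
# so helium gives roughly 93% of hydrogen's lift despite twice the molar mass.
```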

I will not go into details about the evolution of buoyant air vehicles here. Those interested can read about their history in many books, like the profusely illustrated Travels In Space: A History of Aerial Navigation by F. Seton Valentine and F. L. Tomlinson (Hurst and Blackett, London 1902) [readable e.g. via archive.org]

Even though heavier-than-air vehicles have pretty much conquered the commercial airspace since the mid-1900s, there are still fans of lighter-than-air airships. Here is one futuristic concept illustration for an “aerial cruiseship”:

Dassault-concept

Dassault

As estimated by Dan Grossman at airships.net, the mass of the water in the pool alone would require a ship 4.5 times the size of the Hindenburg to lift. [And from the snowiness of the peaks in the background the thing must be kilometers above sea level, requiring an even bigger volume to compensate for the thinner air!]

Another problem with the DS-2020 concept image is that the massive pool is on top of the ship, apparently above the helium gas bags. This puts the whole ship in an unstable configuration, with its center of weight (marked with a red cross) above its center of buoyancy (marked with a blue cross). Without some massive active stabilization system, the whole ship would capsize as soon as it lifts off. (Which isn’t necessarily a bad thing; for my money I would much prefer to swim in a glass-bottom pool over vast scenery than in a conventional one.)

Remembering rule of thumb (2), buoyancy is about mass distribution. In all real airships so far the mass distribution has always been asymmetric, leaving the ship with a center of buoyancy and a center of weight separated by some distance. These kinds of asymmetric bodies have a preferred stable orientation, with their center of weight directly below their center of buoyancy.

The natural orientation of the two centers can be utilized in blimps to control attitude. Here is a simplified sketch animation of a blimp shifting its center of buoyancy back and forth using its ballonets, causing a pitching motion since the center of weight is not moving:

blimpanimtext_640s

[Pitch control can of course be accomplished by moving the center of gravity also; in fact this was the method used in early airships like the first Zeppelin. Ballonets are a more lightweight technique for the purpose, and they are anyway needed in non-rigid or semi-rigid ships, to maintain their shape at different altitudes.]

The location of the centers of buoyancy and gravity in a buoyant body is a result of mass distribution. I have used density diagrams that are helpful in approximating the location of the centers. The center of buoyancy is the geometric center of the whitest areas, which are less dense than air (the gray background). Conversely, the center of weight is the center of areas that are darker/denser than the background/air.
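
The centers in such a density diagram can also be approximated numerically. Here is a small sketch (Python with NumPy, using a made-up density grid of mine) that follows the same convention: lighter-than-air cells pull the center of buoyancy, denser-than-air cells pull the center of weight:

```python
import numpy as np

# Made-up 2D density map of a design, in kg/m^3, with ambient air at 1.2 kg/m^3.
# Top rows are lifting gas cells; bottom rows are structure, cabin and counterweight.
rho_air = 1.2
density = np.array([
    [0.2, 0.2,   0.2,   0.2],
    [0.2, 0.2,   0.2,   0.2],
    [1.2, 300.0, 300.0, 1.2],
    [1.2, 900.0, 900.0, 1.2],
])
ys, xs = np.indices(density.shape)   # y grows downward, as in an image

def centroid(weights):
    total = weights.sum()
    return (xs * weights).sum() / total, (ys * weights).sum() / total

# Center of buoyancy: centroid of cells lighter than air, weighted by their deficit.
buoyancy_center = centroid(np.clip(rho_air - density, 0, None))
# Center of weight: centroid of cells denser than air, weighted by their excess.
weight_center = centroid(np.clip(density - rho_air, 0, None))

print("center of buoyancy (x, y):", buoyancy_center)   # high up in the diagram
print("center of weight   (x, y):", weight_center)     # low down in the diagram
```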

[In theory it is also possible to form a buoyant body where the centers of buoyancy and gravity are collocated. Such designs would not have any stable orientation, so they would just rotate around their centers like soap bubbles in air.]

Another very common, but ultimately unrealistic, design is to make a cloud city flat and wide, as if it were floating on top of water. A wide design was suggested by the milliner E. Petin as early as around 1850:

Petin viewing airship platform

The four wide and flat parts around the mid-height of the balloons are not awnings or roofs; they were meant as control surfaces: if they were slanted when ascending or descending, they were supposed to transfer part of the vertical momentum into horizontal travel. The Petin ship was never built at full scale (or was not allowed by the authorities to be unmoored, according to some reports), so how the ship would have moved in air remains guesswork.

Surface marine ships can have a metacenter, meaning that they can be stable even when their center of gravity is above their center of buoyancy. [This is also the case, for example, when you lie on top of an air mattress in a pool.] But an airship is not a surface ship; it is totally immersed in air and cannot use any surface effects. It is a mistake to think that spreading the mass of a cloud city wide, like in a surface ship, would make it more stable.

The atmosphere of Venus is naturally denser than that of Earth, so it is possible to build airborne cloud cities on Venus where the breathing air also works as a lifting gas. Combining the functions of a space as both living and lifting makes flatter, wider designs possible, but that is not necessarily a good thing for stability. Here is an exaggerated animation of what happens in a wide cloud city design when movement of mass occurs:

dome_unstable

Because the centers are so close together, even a slight shift in the center of gravity tilts the whole structure at a steep angle.

The dome design can be improved by increasing the distance between the force centers, for example by making the dome taller and lowering as much mass as possible under the decks to work as a counterweight. Similar movements now cause smaller tilt angles:

dome_counterweight

For best stability, the structure could be equipped with an automatic stabilizing mechanism, shifting the counterweight to keep the center of gravity always aligned:

dome_stabilized

The Sultan of the Clouds is a 2011 science fiction novella by Geoffrey A. Landis, set in a future Venus cloud colony. Landis is also a scientist, and has advocated Venus cloud-tops as the most suitable location for human colonization in the Solar system. Here are some cover illustrations for the award-winning novella:

sultan covers

Dan Shayu, Aurélien Police, Jeroen Advocaat

Hypatia, the cloud city where the story takes place, consists of many kilometer-wide geodesic domes. The breathable air, which also serves as lifting gas, is contained by millions of millimeter-thin polycarbonate plates joined together on a graphite-based frame.

Although it may not be clear from the illustrations, the text also mentions a counterweight under the city, “a rock the size of Gibraltar”, so the design should have the correct stable orientation from having asymmetric force centers. [No mention is made of how stability is maintained when people and goods move around.]

Even though the transparent domes of Hypatia are made of millions of individual panels, its denizens are not worried about its stability:

“Here, you know, there is no pressure difference between the atmosphere outside and the lifesphere inside; if there is a break, the gas equilibrates through the gap only very slowly. Even if we had a thousand broken panels, it would take weeks for the city to sink to the irrecoverable depths.”

This may be the case if the breaches are all at the same height and there is no wind. But what if there are simultaneous breaches at the top and the bottom of the dome? Like hollowing out an egg, the Venus atmosphere would flow in through the bottom hole at the same rate that breathing air flows out through the top hole, the pressure staying constant the whole time.

Here is the (again, very exaggerated) animation of a catastrophic failure, starting with a mechanical failure in stabilization, followed by simultaneous breaches at opposite ends of the dome:

dome_fail

It should also be noted that a dome that contains the main buoyant part of the city is not resting on top of the ground, like geodesic domes on Earth. Once it is filled with a lifting gas, the dome is pulling the rest of the city up, with a force equal to the full weight of the rest of the city.

Here is the Venus poster from NASA JPL “Visions of the Future”. The design is of course very stylized, but the conical underpart of the city would make a sturdier housing for the counterweight, with multiple A-frames. The counterweight could even be made of something useful for the function of the city, that is just located low for stability.

venus jpl poster 50

JPL/NASA, Jessie Kawata

The first known (by me at least) proposal for a Venus floating city is this design from 1971 by Russian engineer and science fiction writer Sergei Zhitomirsky. There is no counterweight as such, just separation of machinery and equipment to different levels, but the tallness of the dome contributes to stability by raising the center of buoyancy.

tm1971

Tekhnika Molodezhi 9/1971, Nikolai Rozhkov

[Zhitomirsky also mentions the possibility of using a helium-oxygen mixture instead of nitrogen-oxygen air as lifting gas. Such heliox mixtures would theoretically work as lifting gases in Earth’s atmosphere as well, if the dome is large enough. So far I am not aware of anyone ever attempting to fly inside a heliox balloon on Earth.]
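
Zhitomirsky’s heliox remark can be checked with the same molar-mass arithmetic as for any other lifting gas; the sketch below assumes a breathable mixture of 21% oxygen and 79% helium at sea-level conditions on Earth:

```python
# Would a breathable heliox mixture float in ordinary Earth air? Rough sketch.

R, p, T = 8.314, 101_325.0, 288.15                  # sea-level conditions assumed
M_O2, M_He, M_air = 0.032, 0.004, 0.028964          # kg/mol

M_heliox = 0.21 * M_O2 + 0.79 * M_He                # ~0.0099 kg/mol
rho = lambda M: p * M / (R * T)                     # ideal gas density

lift = rho(M_air) - rho(M_heliox)                   # kg of lift per m^3 of heliox
print(f"heliox density {rho(M_heliox):.3f} kg/m^3, lift {lift:.2f} kg/m^3")
# ~0.8 kg of lift per m^3, roughly three quarters of what pure helium gives,
# which is why the breathable envelope has to be very large to carry anything.
```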

While these large domes are stylish and futuristic, real cities usually grow naturally, part by part. If that growth happens in mid-air (as it must on Venus), maintaining rule (1) the whole time can be tricky.

Water life can be a source of inspiration for buoyant designs that grow. Some marine molluscs, like the nautilus and spirula, have buoyant air chambers inside their hard shells. As the creature grows in size and mass, it adds new chambers one by one into its shell, creating a beautiful self-similar geometry.

Beautiful as they are, the designs of buoyant sea life may not translate directly to buoyant air structures: because the natural density difference between living tissue and air is much more drastic than the density difference between living tissue and water, the buoyant elements in aerial structures need to be much larger in relative volume.
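
The difference can be made concrete with a small gas-fraction calculation; the densities below are my own rough assumptions, not measurements of any particular creature:

```python
# What fraction of a body's volume must be gas-filled to be neutrally buoyant?

def gas_fraction(rho_solid, rho_fluid, rho_gas):
    # Solve (1 - f) * rho_solid + f * rho_gas = rho_fluid for the gas fraction f.
    return (rho_solid - rho_fluid) / (rho_solid - rho_gas)

# A shelled creature (tissue + shell ~1100 kg/m^3) floating in seawater (~1025 kg/m^3):
print(f"in water: {gas_fraction(1100, 1025, 1):.1%}")    # ~6.8% gas chambers

# The same kind of dense body floating in air (~1.2 kg/m^3), with hydrogen chambers:
print(f"in air:   {gas_fraction(1100, 1.2, 0.09):.1%}")  # ~99.9% gas
```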

So what is the answer to my question, what would a real cloud city look like? Well, like cities on the ground, there is no single design that all must follow. From the images in this article, the painting “New World Coming” by J Otto Szatmari is to me perhaps the most realistic, with its vertical shape and the obvious gravity of the mass suspended from multiple balloons. More of his paintings are visible here: https://jotto.artstation.com/

Adrift In Middle Cloud Layer – Notes On The Airborne Colonization Of Venus

The best level for living on a planet may not always be the obvious surface. While Moon and Mars settlements could be built inside lava tubes and other underground burrows, on Venus the best level for a human colony is above ground (way above). There is no need to set foot on a planet like Venus to inhabit it; any “first step” by humans on its surface would be a symbolic gesture only, with little practical benefit for actual colonization.

Living on the ground or in caves is something humans know very well. Living in structures that float high in the clouds however is unlike anything that we have ever done before, and presents many new challenges. To achieve it requires both imagination and careful consideration. The following are some of my notes on the subject.

Vehicle classification based on movement

Airborne vehicles on Venus could be roughly classified into three main types by their movement style: flyers, kites and drifters.

Flyers are actively powered aircraft that continuously move in the air. Their design is aerodynamic, similar to airplanes on Earth. By flying continuously eastward against the superrotation of the atmosphere, a flyer could stay in daylight perpetually, perhaps even be solar powered and not require large energy storage.

Kites are tethered airships that are attached to the surface of Venus with long cables. Unless their “anchors” are moving, they will naturally follow the surface day cycle of Venus, about 1400 hours of continuous daylight followed by an equal duration of moonless night. Their design needs to be aerodynamic, and like kites on Earth, they would be able to “sail” against the prevailing winds to maintain altitude with very little power. Cable systems that reach tens of kilometers above ground would need to have multiple “kytoons” along their lengths, to distribute the weight of the cable itself.

Drifters are freely floating lighter-than-air airships that mostly follow along the existing air currents, though they should be capable of small course corrections when needed. The movement of cloud patterns on Venus suggests that passive drifting would result in a cycle of about 40-50 hours of daylight and 40-50 hours of nighttime, varying depending on altitude and latitude. While drifting the effective airspeed is almost zero, so a drifter does not always have to maintain a streamlined aerodynamic shape. But it is good for drifters to be somewhat “dirigible” [i.e. “steerable”], even capable of powered flight when needed.

This classification may seem arbitrary, but knowing the intended purpose of a ship is helpful in making design decisions. The following notes are mostly intended for the passive drifter type ships, but some may apply to the other types of ships as well.

Yellow and Blue Air

When building and maintaining a ship based on containment and separation of different gases, it is important to be clear about what gases go where, especially since many of them look identical to human senses. This is why the following color-based nomenclature is suggested:

Yellow air stands for any Venus-atmosphere-based gas mixture, not breathable for humans. The term covers both the raw atmosphere and air that has been scrubbed of toxins but still contains too much CO2.

Blue air is any gas mixture which is breathable for humans. As we know, normal Earth atmosphere is less dense than normal Venus atmosphere, so blue air can typically also be used as a lifting gas. In airship construction, “blue” areas mean all areas of the ship where humans can breathe unaided.

This shorthand vocabulary makes it easier to discuss the engineering of an airship for Venus, but it could also be adopted in the actual colonization itself; for example, any tanks, valves and pipes handling gases on board the ship could be appropriately color-coded, to help ensure correct operation under any conditions. [The couplings could even be designed to be purposely non-compatible, to make accidental mixing less likely.]

[There are of course more colors available for naming gas mixtures; how about “red air” for mixtures of hydrogen and oxygen?]

Colony altitudes: 50-55 km

Choosing the altitude for human habitation is a matter of trade-offs. At about 50 km the atmospheric pressure is the same as at sea level on Earth, but daytime temperatures can be up to 20 degrees higher than in the hottest climates on Earth. At 55 km, atmospheric pressure is about the same as on Earth at 5.5 km above sea level (for comparison, the base camp of Mt Everest is at 5.3 km elevation), and the temperature is a fairly pleasant 300 K.

Besides pressure and temperature, altitude also affects the speed of the air currents that push the drifter onward. The amount of radiation that the ship receives is also affected by altitude: this includes not only harmful cosmic radiation, but also the amount of sunlight that penetrates the clouds during daytime. A ship for humans should be designed to handle the range of temperatures, pressures, and haze conditions at these altitudes, with some tolerance to spare.

Indoor temperature in the blue areas can be kept constant by active cooling as needed, utilizing some power source to transfer excess heat outside the ship. The outside of the ship should reflect most wavelengths of light, and window materials should be selective in what wavelengths they pass in. Too much insulation may increase the greenhouse effect; thermally conductive elements in the roof could help with passive cooling.

Indoor pressure of blue air does not need to be kept constant regardless of altitude; it can be adjusted according to the outside pressure to avoid straining the structure of the ship. [For safety reasons, the blue areas should still be kept slightly overpressured compared to the outside yellow air, to keep inevitable leaks pushing out instead of in.] The partial pressure of oxygen should always be kept at the Earth sea-level value, to avoid both mountain sickness and oxygen toxicity. In practice this means adjusting only the nitrogen component of blue air to account for pressure changes; this could be achieved using separation devices, for example pressure swing adsorbers, to separate and capture nitrogen out of mixed blue air.
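
A sketch of that adjustment rule (Python; the oxygen target is the Earth sea-level partial pressure, and the small safety overpressure mentioned above is ignored here):

```python
# Keep the oxygen partial pressure at the Earth sea-level value while matching
# the outside pressure by varying only the nitrogen component. Illustrative sketch.

PPO2_TARGET_KPA = 21.2        # kPa, ~21% of 101.3 kPa

def blue_air_mix(outside_pressure_kpa):
    """Return (pO2, pN2) in kPa for a given ambient pressure."""
    p_n2 = outside_pressure_kpa - PPO2_TARGET_KPA
    if p_n2 < 0:
        raise ValueError("ambient pressure too low: even pure oxygen is not enough")
    return PPO2_TARGET_KPA, p_n2

print(blue_air_mix(101.3))    # ~50 km: (21.2, 80.1) -> roughly Earth-normal air
print(blue_air_mix(53.0))     # ~55 km: (21.2, 31.8) -> oxygen-richer mix, same ppO2
```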

Buoyancy and altitude control

Drifters are designed to be as passive as possible, but they still need to be able to maintain and control their altitude, perpetually and under all conditions, if permanent habitation is the goal. If the ship ascends too high, or descends too low, the integrity of the whole ship is endangered, due to the ambient pressure either tearing it apart or crushing it. In practice, automatic altitude adjustments would be based primarily on readings from barometers and thermometers connected to the outside air through static ports, so they would work even without accurate altitude measurements.

Any habitat that can support human life will have to be thousands of kilograms in total mass, most of it denser than yellow air. To stay at an altitude by buoyancy alone, the lift from the buoyant elements needs to exactly counteract the gravitational pull. There is no force holding an airship in place, just different forces pulling it in different directions that need to be equalized. An air “station” should be able to stay afloat even during power shortages; this means defaulting to either neutral or positive buoyancy.

A station should also be able to take on new cargo or send out drone ships. These scenarios imply that the ship’s total mass can change suddenly, and buoyancy must be adjusted at the same time if altitude is to be kept. Existing lighter-than-air ships on Earth cannot quickly change the amount of lift provided by their gas bags, so new designs would need to be engineered and developed. Dropping ballast or venting lifting gas are not long-term solutions for adjusting lift on Venus.

Adjustable lift elements could be designed with multiple compartments, ballonets, or as one big compressible envelope with adjustable volume. In each case, reserve tanks of pressurized lifting gas are needed, with pumps to inflate and deflate the gas bags.

The actual transfer of mass onto or off the ships should of course be done carefully. For example, drone ships landing on board should turn off their engines gradually, so that the receiving ship has time to adjust its buoyancy to the added mass.

Launch altitudes: 70-75 km

Since Venus colonies will be airborne, any launches of space vehicles will also need to happen from airborne platforms. To avoid wasting rocket fuel fighting air resistance, it makes sense to launch rockets from the highest altitude achievable with buoyancy. If colonization is successful, it should be possible to manufacture rocket fuel from material harvested from yellow air at colony altitudes, then lift the rocket, along with its fuel and payload, up to thinner atmosphere using special high-altitude launch balloons.

At an altitude of 70-75 km, the air is thin and cold, much like 20-25 km above Earth [or surface level on Mars], and this is about as high as balloons can be made to carry heavy cargo. Compared to colony altitude, visibility is also better, and the distance makes the colony airships below safer from launch accidents. The pressure is below the Armstrong limit, so humans must wear space suits or stay inside a pressurized vehicle.

Coordinating a launch is delicate and requires guidance and observation from multiple ships in the air and in orbit (ground observation will not be practical on Venus, for several reasons). At high altitude, a “rockoon” does not need to be pointed straight up when fired; it can be aimed at an angle so that it does not hit the balloon from which it hangs. But it is best to launch with a targeted heading, which requires some maneuvering capability in the launch platform. Once the rocket has successfully ignited its engines, it can be carefully released from the balloon platform to begin its burn.

Recovery of the high-altitude balloon platform is perhaps desirable but not easy. Without the weight of the multi-ton rocket pulling it down, the balloon will quickly rise too high to maintain its integrity, and will burst if not vented. In theory it might be possible to compress hydrogen from the balloon quickly with electrochemical hydrogen compressors after the release, and start a more controlled descent. Such a system could potentially save hundreds of kilograms of hydrogen from being vented per launch, so it might be worth pursuing at some point. [Helium is nearly as good a lifting gas as hydrogen, but as a noble gas it cannot be compressed electrochemically.]
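
As a rough consistency check of the “hundreds of kilograms” figure, here is a sketch with assumed launch-altitude conditions (comparable to 20-25 km on Earth, so on the order of 4 kPa and 230 K) and a made-up five-tonne rocket plus platform:

```python
# How much hydrogen does a launch balloon hold? Ideal-gas estimate only.
R = 8.314
P, T = 4_000.0, 230.0
rho_co2 = P * 0.044 / (R * T)   # outside yellow air at launch altitude
rho_h2  = P * 0.002 / (R * T)   # hydrogen inside the launch balloon

payload_kg = 5_000.0            # hypothetical rocket plus platform mass
volume = payload_kg / (rho_co2 - rho_h2)
print(round(volume), "m^3 balloon,", round(rho_h2 * volume), "kg of hydrogen")
```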

In the beginning, there will be only a few airships on Venus. A ship entering from space can end up far away from the drifter airships at colony altitudes. Incoming vehicles must be equipped with “ballutes” or other means to turn themselves into airships after entry, and be capable of flying some distance to meet a colony-altitude ship for refueling. [More about navigation later.]

Trimming and stability

Since gravity on Venus is almost the same as on Earth, ships should be designed so that their decks stay level with little or no power. Marine ships floating on a surface can have a metacenter, but passively floating airships and submarines are stable only when their center of buoyancy stays directly above their center of gravity. Distance between the two centers provides torque for righting the ship, if the frame of the ship is rigid enough for leverage.

Just about all cargo needed on a self-supporting colony ship is going to be denser than air, which means that the buoyant elements of the ship will take up most of the ship’s total volume. Since the center of buoyancy must be above the center of gravity, all passively floating airship designs for normal pressures are going to be big and puffy on top, with smaller, denser parts hanging down. [I call this the basic “lollipop” shape.]

Diagram of airship relative density

To have wider and more spacious decks, self-trimming or stabilizing designs should be investigated. Any shift in the center of gravity will tilt the ship, unless the center of buoyancy is also shifted at the same time. Increasing the distance between the two force centers decreases the angle of tilt, but does not completely eliminate it. New designs are needed for zero-airspeed stabilization, for example using some of the payload as a movable counterweight. Large gyro flywheels would add angular inertia [at the cost of mass and power]: they would only slow the tilting caused by shifts in the center of gravity, not eliminate it completely.
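
As a rough illustration of the pendulum-like geometry (rigid frame assumed, no aerodynamic forces), the ship settles at a tilt whose tangent is the sideways shift divided by the separation between the two centers:

```python
# Tilt angle from a sideways shift of the center of gravity.
import math

def tilt_deg(cg_shift_m, separation_m):
    return math.degrees(math.atan2(cg_shift_m, separation_m))

print(tilt_deg(0.5, 10.0))  # ~2.9 degrees: long "lollipop stick"
print(tilt_deg(0.5, 2.0))   # ~14 degrees: same shift, short separation
```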

Shifting the center of buoyancy is possible to some extent. There is an existing technique for this purpose: many airships have fore and aft ballonets that can be filled asymmetrically. [Submarines use an analogous system of “trim tanks” placed at extremes of the ship.]

If the ship is moving in air, either by its own thrust or pulled with cables, a third center is added to the equation, the aerodynamic center [more generally, “center of pressure”, applying also to hydrodynamics in submarines]. Things get more complicated with aerodynamics [for example, the choice of where to place thrusters or towing cable attachments in relation to the force centers], but with predictable airspeed it becomes possible to use relatively small control surfaces, like trim tabs or ailerons, to control attitude and achieve trimming.

Active stabilization at rest is of course possible, for example using compensating thrusters to force the ship level even when the center of gravity is not aligned, but like propulsion thrusters, they consume power continuously while active. For a large “station” style drifter ship, with tons of cargo on board, stability should always be sustainable, even when not moving, or during a power outage.

Structural analysis

The “lollipop” shape naturally divides the ship into two main parts: the balloon part at the top, which is lighter than (yellow) air, and the dense “gondola” part where people and cargo are situated. The two parts pull the ship in opposite directions with forces equal to the total weight of the ship, so their connecting seam is structurally important; it is the foundation of the ship, its “tensile-load-bearing” wall.

The balloon does not need to carry the gondola just by the lower rim of the envelope. Many existing non-rigid and semi-rigid airships use internal suspension cabling, attached to the inside ceiling of the balloon with e.g. catenary curtains and leading down to the roof of the gondola. In addition to distributing the weight of the gondola more evenly to the balloon envelope, the internal cabling allows a non-rigid balloon to hold a more vertically flattened shape.

For a hanging structure that never lands, tensile strength is more important than compressive strength, even in the rigid parts of the ship. The rigidity of the gondola frame is also a trade-off: on the one hand, it helps keep the decks stable and level. On the other hand, if the frame is too rigid, mechanical vibrations propagate throughout the ship, whether from machinery or just from the passengers walking about the decks. A combination of rigid and damping elements is probably needed, made from lightweight and durable materials to keep the total weight down.

Since the structure is not intended to land or stand on its own, and is hanging down, many of the conventions of construction are turned upside down. For example, structural elements must be strongest at the top of the multi-level gondola tower, but less so at the bottom, where they carry less weight.

Bioreactors: not just for food

At its simplest, a photobioreactor is just a transparent plastic bag half filled with water, seeded with some live cyanobacteria, minerals, and trace elements. Yellow air is bubbled slowly from the bottom through the greenish water, where daylight turns it into blue air, collected at the top. The water turns greener and thicker through the day as the bacteria multiply. At the end of the day, the containers can be drained and the excess green mass concentrated for further processing. The plastic bags can then be refilled with water and minerals, in preparation for the next 50-hour day.

The resulting green biomass is an important source of a variety of hydrocarbons, and can be further processed by e.g. fermenting. Various methods of dehydrogenation can be used to produce unsaturated hydrocarbons for making polymers. Even on Earth, biomass grown in bioreactors could become a worthwhile “green” replacement for crude oil.

bioreactor

The green biomass can also be eaten (it’s called Spirulina), if it is grown from acceptable ingredients (not too much sulfite or deuterium) and handled properly. It is not a complete food source, so it should be complemented with other forms of bacterial farming, such as yeast for vitamin B12. [Why bacterial farming? Human senses have evolved to see only the macroscale of biology: plants and animals. But a completely artificial ecosystem needs to be built “from the ground up”, starting at the microbiological level, where the real work of biology happens, before advancing to some carefully chosen angiosperms and invertebrates.]

In an earlier post I implied that “trees are made from just sunlight and CO2, both more abundant on Venus than on Earth”, but that is incorrect. In actuality, for each CO2 molecule that photosynthesis breaks down, it must also break down an H2O molecule. Water and hydrogen are rare on Venus, and will be the main bottleneck for cultivating biomass.
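
The standard overall equation for oxygenic photosynthesis makes the one-to-one ratio explicit:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```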

Chemistry: the power of Hydrogen

Hydrogen is the most common element in the visible Universe, and out of all the atoms in the human body, hydrogen atoms are the most common. Hydrogen is so ever-present and chemically active that it is hard to imagine chemistry without it. It is even common practice to leave out hydrogen atoms when drawing chemical structures; there are so many to draw.

In the cloud layer, at colony altitudes for drifters, most of the hydrogen is bound in the clouds themselves, as aerosol droplets of concentrated acid. According to current understanding, the droplets making up the clouds of Venus are not formed by phase transition alone, but also by a chemical process fueled by sunlight, called photolysis. In other words, part of the sunlight falling on Venus gets stored as chemical energy in the atmosphere. Could the acidity simply be discharged as electrical power, like from a fully charged car battery? [Probably not, at least without grounding the electrical potential somehow.]

The problem with this chemical energy in the form of low pH is handling it safely. All materials of the ship that can face raw yellow air must be able to withstand the chemical energy of cloud stuff without degrading too fast. [Structural integrity is not the only concern, other important material properties could also deteriorate: optical, adhesive, lubricant etc.] Hydrogen and other elements collected and chemically separated from the cloud stuff must also be stored in a safe way, away from raw yellow air.

Cloud harvesting will probably occur in two steps: droplets are collected together into a liquid [drops may even spontaneously condense at the outer surface of the ship, like water condensation on Earth], which can then be electrolytically separated in an airless chamber to collect the hydrogen. This may be possible to do efficiently using nanomembranes similar to fuel cells, but the technology needs to be tailored to Venus. Since sulfuric acid reactions are mostly exothermic, excess heat might become an issue if the released energy cannot be stored or utilized. [Ionic separation may even make it possible to enrich some D from H at the same time.]

The almost normal gravity can be utilized in industrial separation processes. For example, fractionating columns should work almost as well as on Earth, if they can be kept upright. It is even likely that some separation processes occur naturally in the atmosphere of Venus: for example, the ratio of D to H seems to vary somewhat with altitude. It may even become possible to use knowledge of weather patterns to direct harvesting to places where enrichment of D is easier, or to keep too much D from contaminating the biological ecosystem (including humans).

Both the chemical and biological ecosystems on board should aim at becoming fully closed and recycling, but some waste might still get produced in the long run. One situation where waste may be beneficial is using chaff to study the weather: thousands of ping-pong ball sized objects could be released into the atmosphere at the same time, and their movement in the winds followed via radar from a distance. Material for the chaff could be e.g. something rejected by QA, as long as it does not contain too much of the rarer yellow air elements.

Local manufacture

As far as we know, yellow air is not made of very diverse elements. Only O, C and N can be considered abundant. H, S, Cl, F and some others have been detected in trace amounts, but any other chemical element needed must be imported to the airships, either dropped from orbit or lifted from below. The chemical factories on drifter airships should specialize in producing materials that are made up of only yellow air elements.

Fortunately this includes many forms of polymers and elastomers, carbon fiber precursors, synthetic resins and hardeners. Even photoactive, light-emitting, and piezoelectric compounds are possible. Most polymers are insulators, but some can be made conductive, both thermally and electrically. Their conductivity is not as good as that of Cu or Al, and they are certainly more difficult to form into electrical wiring.

Many interesting 2-D lattice molecules could in theory also be formed out of yellow air elements, but the processes to manufacture and apply graphene-like materials are not mature yet. Carbon nanotube wires are in theory better conductors than Cu, which would make it possible to create very lightweight inductors and windings for electromagnets, if they could be manufactured at scale. [Special ferromagnetic metals for magnetic cores would still be needed to build efficient electromagnetic motors or turbines.] Using ordinary carbon fiber for electrical wiring and electromagnetic windings is not an optimal solution; it may work, but its poor conductivity wastes part of the electricity as heat, and there is already too much heat at colony altitudes on Venus. [Perhaps more suitable for Mars colonies?]

In a self-sufficient colony there should exist the capability to create replacement parts for any of the structural parts of the ship itself, if not on every ship, then at least distributed among a fleet of ships. Biomass produced by the bioreactors can be used as a raw ingredient for many kinds of materials. Especially useful would be strong fiber filaments that could be robotically woven into flexible fabrics for sails, parachutes and balloon envelopes, or combined with resins and hardeners to form rigid composite structures, pressure tanks, fractionating columns, or any rigid parts for the frame.

It is unlikely that sophisticated nanoscale items such as high-end computer chips or nanomembranes can be produced locally, so spares need to be imported and kept in store for emergencies. Essential sensor equipment, such as pressure gauges, barometers and radar antennae, may be possible to build locally, but the reliability of such “home-made” instruments must be well tested before relying solely on them.

For many reasons it is good to separate the manufacturing areas from the main blue areas, and to let the solvents and hardeners evaporate fully before taking locally created polymers into use. [There is not much benefit in having a free shield from cosmic rays, if you end up getting cancer anyway due to chemical exposure.]

Power sources and storage

Solar cells should be usable even in the middle cloud layer; although less light is available than at launch altitude or in orbit, daylight is so diffuse that solar panels would work oriented in any direction, even downwards. The drifter day cycle means that collected solar power must either be stored for use during the 50-hour night, or an alternative power source must be found that works without daylight.

Electrical batteries will definitely be used; they are a convenient and well-known technology that works well with electronics, radio and lights. But there are other means of storing power than batteries. One storage alternative could be compressed air (of suitable color), something that may be necessary anyway for storing reserve lifting gases. Pressure tanks may also be easier to manufacture locally than efficient electrical batteries, and can be made without rare metals.
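
For a sense of scale of the night-time storage problem, here is a small sizing sketch; the 10 kW habitat load and the 200 Wh/kg battery pack figure are my own placeholder assumptions:

```python
# Energy needed to ride out the ~50 hour night, and the battery mass it implies.
NIGHT_HOURS = 50.0

def storage_needed_kwh(average_load_kw):
    return average_load_kw * NIGHT_HOURS

def battery_mass_kg(energy_kwh, specific_energy_wh_per_kg=200.0):
    return energy_kwh * 1000.0 / specific_energy_wh_per_kg

need = storage_needed_kwh(10.0)   # hypothetical 10 kW average load
print(need, "kWh ->", round(battery_mass_kg(need)), "kg of batteries")
```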

Stored compressed air does not always need to be turned into electricity: it can for example be vented via a Coandă thruster to propel or turn the ship. Pneumatic motors can be made without metals, magnets or electric hazards, and their operation is based on air-tight seals and gas pressure; familiar concepts when living inside an airship. But some way to create compressed air is needed to run pneumatic motors. Outside of manually powered pumps [which should be considered as a backup system in case electrical power fails] or phase-change engines, this means electrical pumps running on solar power.
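
For comparison, a rough upper bound on the energy recoverable from a compressed-air tank, assuming slow isothermal expansion back down to ambient pressure; real pneumatic systems recover less:

```python
# Isothermal expansion work: W = p1 * V1 * ln(p1 / p0).
import math

def isothermal_work_mj(volume_m3, tank_kpa, ambient_kpa=101.0):
    p1, p0 = tank_kpa * 1000.0, ambient_kpa * 1000.0
    return p1 * volume_m3 * math.log(p1 / p0) / 1e6

# Hypothetical 1 m^3 tank at 20x ambient pressure: about 6 MJ (~1.7 kWh).
print(round(isothermal_work_mj(1.0, 20 * 101.0), 1), "MJ")
```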

An added bonus of compressed air as a power source is that venting it actually removes heat. Surrounding a pressure tank with a heat exchanger and a heat pump could be used to cool the blue areas directly, even if the tanks themselves are kept outside in the yellow areas for safety.

Other than solar panels and cloud harvesting, energy collection from the environment may require long-winded equipment, to utilize the natural differentials between altitudes. A kite sail could be floated a few kilometers above the ship, or a turbine dragged a few kilometers below it, to collect power from the difference in wind speed. A more complex “cable” might be able to use “aerothermal energy”, in the same way that geothermal energy is pumped from below ground on Earth. Any system with long cables or pipes is also vulnerable to the buildup of static electrical charges. [If they can be safely utilized, why not just harvest lightning directly?]

It is unlikely that the pressure differential between altitudes can be siphoned, even with a long capillary tube. But if pressure tanks are easy to manufacture, it might be possible to let nature fill them one by one: drop them down with a mechanism that closes their valve when a predetermined ambient temperature is reached, and inflates an accordion bellows balloon that slowly lifts the tank back up. There are disadvantages to this scheme that may hurt overall efficiency, but the method can also be justified with collecting air samples from different altitudes for scientific purposes.

Navigation and communication

The visibility in the cloud layers is probably not good enough to navigate accurately by sight. Even if a mythical Viking sunstone could show the Sun behind the clouds, placing the horizon would still be guesswork. Flying at colony altitudes will depend on instruments even during the day.

This is not that dangerous for a fleet of drifters within a few kilometers of each other, all passively sailing along the same winds. Visibility should extend that far, and light beacons should be required on all ships, even during the day [of course radio beacons will also be required on all ships, with ship identifications]. Flyer type ships however are much faster, and at superrotation speeds travel to the edge of visibility in seconds. A supersonic rocket is effectively blind in the cloud layers, and must rely on other wavelengths.

Knowing exactly where you are and which direction you are facing is also important if you need to send data to an object in orbit, or to do any kind of tight beam communication inside the cloud layer. On Earth, geopositioning systems work by broadcasting a simple time signal from multiple satellites. This setup is possible on Venus as well, but needs to be put in place in advance. The intelligence is at the receiving end, where the periodic time signals from the satellites are analyzed to arrive at an estimated coordinate. Translation from satellite time signals to Venus surface coordinates will require calibration with another positioning method, preferably triangulated from multiple ships.
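
The receiver-side math is ordinary trilateration. A toy sketch with a made-up beacon geometry, assuming synchronized clocks so that ranges are known directly (real receivers also solve for their own clock bias):

```python
# Estimate a position from ranges to beacons at known positions.
import numpy as np

def trilaterate(beacons, ranges):
    """beacons: (n, 3) known positions; ranges: (n,) measured distances."""
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (b0 - beacons[1:])                      # linearized geometry
    rhs = (ranges[1:]**2 - r0**2
           - np.sum(beacons[1:]**2, axis=1) + np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

# Self-check with invented coordinates (km): beacons in orbit, ship at ~55 km.
truth = np.array([10.0, -4.0, 55.0])
beacons = np.array([[0.0, 0.0, 300.0], [500.0, 0.0, 320.0],
                    [0.0, 500.0, 310.0], [400.0, 400.0, 305.0]])
ranges = np.linalg.norm(beacons - truth, axis=1)
print(trilaterate(beacons, ranges))   # should recover `truth`
```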

A viable positioning method that works without satellites, surface beacons or other ships is radar surface feature recognition. A machine learning system trained on mappings from previous missions should be able to correlate radar data to surface coordinates with good accuracy. With a high-resolution radar, it may become the standard against which other positioning systems are calibrated.

A fleet of ships drifting within line of sight distance to each other is a fairly safe place to live in terms of navigation. [Not all of them need to be manned ships.] Constantly broadcasting your ship identity and positioning coordinates to surrounding ships makes it easier to model not just the position of all ships, but also the pitch, roll, and yaw of your own ship in relation to multiple lines of sight.

Flying in air makes sound-based communication between ships possible. This is mainly a curiosity, but does bring a nice human element to life on Venus. A passive drifter is itself fairly silent, compared to flyers and multicopter drones. Silent ships changing altitude are a potential hazard to nearby ships, and could be accompanied by alarm beeps, like a truck reversing on Earth. And of course there is the possibility of noisy neighbor airships, with their uninsulated envelopes vibrating to the music playing inside. [There may even exist natural noises on Venus. Even though lightning has so far not been positively confirmed on Venus, there have been indications of thunder-like noises propagating in the atmosphere. Maybe someone alive today will become the first human to hear thunder on another planet?]

Venus will have its own data communication network, of course. Venus ships will produce a lot of data themselves, which is best stored in a replicated, distributed way among the fleet of ships. The distributed cloud [yes, this will be the first cloud system with a literal cloud layer] could also host any data caching or mirroring from other planets, with priority-coordinated access to the high-latency interplanetary data links. To enable efficient data distribution, the antenna systems on board each ship must be capable of detecting and tracking the beacon signals of nearby ships, and directing their higher frequencies at each other for maximum bandwidth [somewhat like 3DBF in 5G]. Most of the equipment for the computer network must come from Earth for the foreseeable future; offline bulk data storage media [optical or chemical rather than magnetic, due to the scarcity of magnetic raw materials] are a possible first candidate for local manufacture.

Some assembly required

Dropping into the atmosphere from space limits the possible size and shape of individual airships sent to Venus. Much larger structures become possible if assembly can take place at colony altitude. And even if a colony consists of multiple smaller ships floating close to each other, they will need to transfer materials and people between ships from time to time. A modular design, standardized across the fleet [like ISO containers] is highly desirable.

Doing construction or assembly that hangs downwards is the complete opposite of most construction done on Earth. Nests built by birds and flying insects are among the few examples of hanging-in-air construction. Is it even possible to combine modularity, lightweight construction, and airtight seals between different colored airs, while hanging down from balloons?

This sketch concept uses a rectangular rigid module constructed of straight lengths of rods or pipes, arranged in an interlaced pattern, like wicker or a bird’s nest, that distributes mechanical forces in all directions. The design is rotationally symmetric, which means that two modules can interlock with any of their six faces turned towards each other, to ease assembly and design.

In the interior of each cubic module is an open space, roughly spherical [unofficially called the “egg space”, continuing the bird’s nest analogy], where different payloads can be attached and carried. The exposed rods of the frame provide both support for the inner payload and anchoring for hauling the module from the outside, or for attaching various equipment. The open frame can also be reinforced where needed; in some modules of the assembly, the payload could even be replaced completely with structural elements.

Blue quarters for habitation can be built inside module frames, and be connected with flexible or inflatable corridors. Rooms can also be erected using only inflatable elements attached to the module frame. Slightly overpressured wall elements could be useful for keeping blue and yellow air separate, or for detecting leaks. For more permanent construction, instead of gas the wall elements could be injected with an aerosolized resin that hardens into a foam. This results in a less “bouncy castle” feel, adds insulation from outside heat, and avoids having to adjust inside-wall pressure when the altitude changes.

What next?

So far the colonization of Venus has been a thought experiment: what current technology might allow, given what we know about the conditions on Venus. But a lot needs to happen before we can send the first humans to live on Venus.

Local weather is crucial to flyers and drifters in the cloud layer. We have studied what we can by looking at the cloud patterns, but it would be prudent to send more unmanned missions to study aspects of the weather (including electrical aspects, thunder and/or lightning, and radio weather) first-hand. The detailed chemical composition of the more hidden cloud layers could be studied at the same time, not just for its interaction with man-made ships, but also to ensure that there are no naturally occurring complex macromolecules or processes that human colonization might contaminate.

Even if volunteers might be available, it would not be ethical to send humans to Venus if there is no feasible way for them to return. There have been designs for air-based launches from Earth, but so far all missions with humans on board have been launched from the ground. Launching from a high-altitude balloon would save rocket fuel on Earth too, so it makes sense to mature the technology here first. [There are some entrepreneurs working in this area, for example zero2infinity with bloostar.]

Despite the obvious differences, many of the challenges of Venus airborne colonization are the same as those of Moon or Mars ground-based colonization. All are behind gravity wells, so equipment shipped over must be both lightweight and built to last. All will need reliable systems for blue air management, as well as for medical diagnosis and treatment. Photovoltaics and battery technology are needed for all destinations, as are advanced automation and robotics. Developing these space technologies will also help Venus colonization.

Different airship models can be tested and developed in Earth’s atmosphere first, before modifying them for Venus. Unfortunately there doesn’t seem to be much interest in long-duration unaided flight on Earth to drive the needed technical development on its own. Currently the record duration for untethered airship flight is 20 days, from 1999. For successful Venus colonization, flight durations should be counted in months and years, not days. [The 1999 record was not even the primary goal of the mission; it was to circumnavigate the globe without landing.]

And even if it suddenly became fashionable to make floating cities on Earth, they would be much easier to assemble on the ground than in the air. Even if it is technically possible to build floating cities on Earth, there is no real economic incentive to play “floor is lava” during their construction. But such games need to be played here at least part of the time, to gain the practical knowledge and skills needed for sustainable Venus airborne colonization.

Terminology

airborne: English word meaning “carried by air”
ballonet: adjustable non-lifting gas bag inside the outer envelope of some airships
ballute: fusion of “balloon” and “parachute”
catenary curtain: a load-distributing internal cable attachment in some airships
envelope: airship jargon for “gas bag”
kytoon: fusion of “kite” and “balloon”
LTA: contraction of “lighter-than-air”
metastable: loading a surface ship with its center of buoyancy below the center of gravity
pH: “power of Hydrogen”, a logarithmic measure of ion concentration in a solution
rockoon: fusion of “rocket” and “balloon”
self-trimming: a mechanism that helps keep cargo evenly loaded on a ship
static port: external air sensor fitting on an aircraft
superrotation: the rotation of Venus’s atmosphere, faster than surface rotation
trimming: in this context, keeping a ship level

History

The earliest mention of using buoyant airships on Venus I have found is in The Exploration of The Solar System by Felix Godwin (New York, Plenum Press, 1960). [This charmingly detailed but outdated book is otherwise an excellent example of smart and imaginative extrapolation from insufficient data.]

“(21) The non-rigid airship is for some purposes the ideal form of transportation on Venus. Owing to the dense air, it can carry considerable loads. Furthermore, it is completely unrestricted by the terrain and can hover anywhere, either for observation or for discharging cargo.” [pg 86.]

Once data about the harsh surface conditions started to come in from the early missions, the idea of sending buoyant vehicles into the atmosphere of Venus gained more traction. Many countries had plans for putting scientific aerostats on Venus in the 1960s. For example in 1967 Martin Marietta Corporation made a feasibility study for NASA of a Buoyant Venus Station (BVS), considering payload masses of 90 kg, 907 kg, and 2268 kg.

Two aerostats (21 kg each) were eventually launched into the middle cloud layer in 1985, as part of VEGA. The multinational mission was a success, and radio telemetry from the helium-filled balloons was tracked for 46 hours by 20 radio telescopes around the Earth. French scientist J. E. Blamont is credited with the original proposal.

Manned missions on dirigible airships were also discussed. In issue 9/1969 of Tekhnika Molodezhi (“Technique – Youth”), pg 14-16, V. Ivanov writes

“In fact, above the inhospitable surface of Venus, it is very convenient to drift in a dense atmosphere. In addition to devices such as bathyscaphe, it is advisable to launch balloon-probes or even airships to our heavenly neighbor. For example, a small balloon probe, drifting at a height of fifty kilometers, is capable of transmitting data on its way, about the downstream terrain for many days in a row. Perhaps relatively quickly people will create in the upper layers of the atmosphere of Venus a drifting laboratory that will prove to be more effective than a manned artificial satellite of the planet.” [translated from Russian by Google Translate]

The idea of dredging the surface of Venus from a buoyant ship with a long cable was also floated. In Aviatsiya i Kosmonavtika (“Aviation and Astronautics”) 10/1973, pg 34-35, G. Moskalenko writes

“The aerostatic type device can be equipped with a cable hanging downwards with research equipment suspended for it for vertical sounding of the atmosphere, as well as mechanisms for taking ground from the surface. The length of the rope is not difficult to increase due to the attachment of intermediate lifting balls, which compensate for the load on the rope. It is interesting to note that by picking up the appropriate lifting balls, the cable can easily be lifted above the bearing balloon.” [translated from Russian by Google Translate]

The futuristic idea of living permanently on Venus in large floating habitats also emerged early on. In issue 9/1971 of Tekhnika Molodezhi, pg 55, S. Zhitomirsky writes:

“[…]the composition of the Venusian atmosphere suggests a more tempting solution – the station can be inside the balloon. Indeed, carbon dioxide is one and a half times heavier than air, and a light shell containing air will float in a carbon dioxide atmosphere. If the inhabitants of Venus prefer not a nitrogen-oxygen but a helium-oxygen mixture for breathing, the lifting force of their “air” will sharply increase. […] To the edges of the platform is attached a huge spherical shell, which limits the airspace of the island. It is transparent, and through it you can see the whitish sky of Venus, eternally covered with multilayered luminous clouds. The shell is made of several layers of synthetic film. Between them, gas formulations containing indicator substances are circulating.” [translated from Russian by Google Translate]

[As the zonal wind speeds were apparently unknown at the time, Zhitomirsky assumed that the flying islands could move at about 13 km/h to stay constantly in daylight. The airspeed actually required to do that outside the polar regions is 20-30 times higher, infeasible for a ship that big.]

Links

Venus colonization has its fans [“Friends of Fria”, as Peter Kokh called them], but finding relevant discussion about the topic can be frustrating. [For example, the domain name venussociety.org is reserved, but has no content at this time.] I can recommend two links which both have pointers to deeper sources:

This 2011 article by Robert Walker presents a friendly introduction to the topic, and is also a source of links to further discussions on various internet forums: “Will We Build Colonies That Float Over Venus Like Buckminster Fuller’s Cloud Nine?”

Venus Labs has published a highly detailed “Handbook For the Development Of Venus”, Rethinking Our Sister Planet, written by Karen R. Pease. The book is a seriously detailed imagining of how a manned mission might be accomplished with existing technology. It has lots of links and sources of information. [There are a lot of original ideas in the book as well, but I can’t say that I completely agree with all the proposals. One thing that strikes me particularly is the insistence on housing people high inside the balloon envelope, even doing bungee jumps while hanging from the ceiling. To me it sounds a bit like the wild stunts of wing walkers playing tennis on the wings of a biplane in the 1920s: it is perhaps possible, but very risky and uncomfortable, and ultimately has nothing to do with the primary engineering purpose of wings or balloons.]

On the Nature of Asymmetry

So I entered the FQXi essay contest this year. You can read my essay “On the Nature of Asymmetry” on the FQXi website, in its full PDF glory.

I only found out about the website, and the contest, about a month ago, so my entry feels a bit rushed and unfinished. But I think having an external deadline was a good motivator nonetheless. I can still develop the ideas further some other time.

If you like, you can rate the contest entries at the FQXi website, by registering an email address. The rating period ends in about a month, after which the best rated entries advance to an anonymous expert panel.

The Simulation Narrative

Most of the millions of people lining up to see the latest blockbuster film know that the mind-boggling effects they are about to see on the big screen are made “with computers”. Big movies can cost hundreds of millions to make, and typically less than half of the budget is spent on marketing the premiere, or paying the actors upfront. Plenty of money left over to buy a big computer and press the ‘make effects’ button, right? Except that these movies close with 5-10 minutes of rolling credits, and about half of them are names of people working in visual effects, not computers. (Seems like a tentpole movie crew these days has more TDs (Technical Directors) than any other kind of directors combined.) [If you think making cool computer effects sounds easy, just download the open source tool blender, and create whatever your mind can imagine …]

Computer simulations are no longer just for engineering and science; they can be used as extensions of our imagination. A simplified set of rules and initial conditions is input, then a few quick test renders are made at low resolution. You twiddle the knobs (how many particles, viscosity, damping, scattering) and hunt for the right lighting and camera angles, iterating until you are happy or (more likely) forget what you were trying to accomplish.

Selene Endymion MAN Napoli Inv9240

Before computers, before visual effects and film, people had to use their own imagination to make entertaining simulations. The most light-weight technology to accomplish that was storytelling, guided narrative with characters and settings. The rules of the simulation were the rules of plausibility within the world of the story. The storyteller created the events, but the listeners enacted them in their imaginations. The storyteller received immediate feedback from his audience if the story became too implausible.

But once the audience “buys in” to the characters and their narratives, they become emotionally invested in them as if they were real people. Fictional characters, today protected by copyrights and corporate trademarks, can still suffer unexpected fates, and newcomers to a fictional world often demand to be protected from “spoilers” that would make it difficult for them to simulate the characters in their imaginations. Real people do not know what their future is, and to ‘play’ the role convincingly and without foreshadowing, it is best to live with incomplete information.

If I start to read a book of fiction, written many decades ago, when is the simulation of the characters happening?  Does every reader simulate the main characters each time they read the book, or did the author execute the simulation, and the readers are only verifying that the story is plausible? Certainly I feel like I am imagining the phenomenal ‘qualia’ that the characters in the book are experiencing, but at the same time I know that I am just reading a story that was finished a long time ago. Am I lending parts of my consciousness to these paper zombies?

In a well-known book of genre-defining fantasy, after hundreds of pages of detailed world-building, two characters are beyond weary, in the darkest trenches of seemingly unending war, when one of them starts to wonder if they shall

“[…] ever be put into songs or tales. We’re in one, of course; but I mean: put into words, you know, told by the fireside, or read out of a great big book with red and black letters, years and years afterwards.”

It’s not a bad way to put it, but even for myself at age 12, characters in a book discussing the possibility of being characters in a book was just too self-referential to be plausible, and pushed me ‘out’ of the story for a moment. (A bit like characters in a Tex Avery cartoon running so fast they run outside the film. We get the joke, but let’s keep the “fourth wall” where it is for now.)

Since the book was written long ago, and has not been edited since, it can be argued that none of its characters have free will. The reluctant hero makes friends, sacrifices comforts, has unexpected encounters and adventures, all while trying to get rid of the “MacGuffin” that has fallen into his hands. When at last he arrives at the portal of Chaos where the artifact was forged, does his determination to destroy it falter, or will something totally unpredictable happen? To have any enjoyment in the unfolding of the story, the readers must believe that the actions of the characters have significance, and play their roles in our minds as if they had free will.

There are also professional actors, people who take to the stage night after night, repeating familiar lines and reacting to the events of the screenplay as if they were happening for the first time:

“For Hecuba! What’s Hecuba to him, or he to Hecuba, That he should weep for her?”

A good performance can evoke both the immediacy and intimacy of a real emotional reaction, but the audience still needs to participate in the act of imagining the events as actual, to understand at some emotional level “what it is like” for the characters in the play to have their prescribed experiences.

What to me really sells a scene is the interplay of the actors, not so much how photorealistic the visual effects happen to be. A painted canopy plays the part of a majestic sky, or a sterile promontory becomes earth for the gravedigger, if all the actors act as though it were so.

As convincing as our simulations can be, the point of fiction is that we enter it knowing that it is fiction, that we can always put the book down, or step outside of the theater. Fiction is not realtime, and it always requires audiences to imagine some parts of it (for example, what happens between scenes?). We choose not to pay attention to the man behind the curtain, or analyze the plot too much, when we want to immerse ourselves for a moment.

[Having said that, I don’t mean to imply that it is impossible to become lost inside made-up stories, and confuse them into reality in a quixotic manner, but that is usually not the intention of the storyteller (though it could be useful to the intentions  of a shrewd marketer, politician or cult leader).]

Time inside the simulation is independent of time in the real world. In addition to pausing the simulation, monolithic or pre-computed simulations can be executed in a different sequential order from the assumed order inside the simulated world. This is used to great effect in some books, which describe the same event multiple times in different chapters, but from the point of view of different characters. Each perspective usually gives the reader some extra information, something that no character in the simulation can have. Viewing from outside the simulation, the audience gets an almost god-like view of the situation, sometimes even enhanced with indexes and bookmarks so they can page back and review the events of a previous chapter (but not forward, since that would “spoil” the freshness of the simulated experience).

Pre-written narrative simulations, movies and plays, edit out the parts that are thought to be uninteresting. This is a careful balancing act, because editing out too much leaves the characters and their actions too distant, and harder to relate to. Leaving in too much unnecessary detail, on the other hand, can appear gratuitous and put off many viewers and readers, who will surely find better things to occupy their time.

Computer simulations today almost always consist of time-steps. A numerical approximation of some evolution equation uses the results of the previous steps to compute the next step of the simulation. The smaller the time interval used, the closer the approximation is to the real solution [in the mathematical sense, for example a piecewise linear line approximating a smooth curve], and the longer it takes to compute. If the simulation is pre-computed, the audience need not view every individual step, to make use of the simulation. [Note: In terms of the blender software, the physics timestep is independent of the framerate of the animation, and changing either will affect the needed baking and/or rendering time.]
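
A minimal illustration of such a fixed-timestep loop, using exponential decay as a toy system (my own example, not tied to any particular software):

```python
# Explicit Euler: each state is computed from the previous one, and a
# smaller dt brings the result closer to the exact solution exp(-1).
import math

def simulate_decay(x0=1.0, rate=1.0, dt=0.1, steps=10):
    x = x0
    for _ in range(steps):
        x += dt * (-rate * x)   # next state from the previous state
    return x

for dt, steps in ((0.5, 2), (0.1, 10), (0.01, 100)):
    print(dt, simulate_decay(dt=dt, steps=steps), "exact:", math.exp(-1.0))
```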

When played at a sufficient number of frames per second, our mediocre senses are fooled into interpreting a sequence of still images as moving pictures [interestingly, while higher framerates increase the realism, and are technically possible today, moviegoers still prefer the “cinematic” 24 fps for big screen cinema, or even less for dream-like sequences]. But it would be naive to think that the timeline of the physical world also consists of individual static states, progressing in infinitesimally short step transitions across the whole Universe. Such ideas of motion were debated already hundreds of years BC by the Eleatic school, most famously in the paradoxes of Zeno. [Note: Unfortunately, even logical and analytical philosophy usually implicitly assumes that there is always a “world at time t” snapshot, at any chosen time t, with defined entities and properties. But that is a topic for another post.]

Stories can be used to simulate, or theorize about, the minds of others. By vicariously identifying with characters, we can sometimes glimpse across the inter-subjective gap. Even a child can understand that a character in the story does not have all the information, that Red Riding Hood is asking questions because she doesn’t know who she is speaking with. But even voluntary participation in a simulative narrative can reveal hidden agendas in the audience, through transference: for example, did Shakespeare put baits in his Scottish play to “catch the conscience” of King James, or just harmless references to the new king’s known interests?

Some stories contain such powerful or virulent motivations for their characters that the audience starts to doubt their own volition as participants in the simulation. [Note: This could be related to hypnosis, which can also be induced using only words and signs, and makes subjects doubt their own volition to some degree.] Being part of something larger, even if it is just a simulation, is a recognizable desire in the human psyche. Experience, of real life as well as of other kinds of stories, can also recontextualize previous narratives in a new light, and help reframe them with a different viewpoint. [An example of this could be a new parent realizing: “maybe my parents were just as clueless as I am now?”; a kind of subject-object relationship reversal, in the psychoanalytical sense.]

In this transitional state between the simulator and the simulated, we might also strive to theorize on the motivations of a possible future ‘superintelligence’. Why would it spend so much effort to compute realistic ‘ancestor simulations’, extrapolating scenarios from its vast collections of historical data, as in the simulation argument by Nick Bostrom? Perhaps the motivations are the same as when we try to understand our ancestors from the knowledge that we have: If you don’t understand history, you are doomed to repeat it, over and over. Just as intelligence does not imply wisdom, superintelligence certainly does not imply ‘superwisdom’.

The collection and storage of ever more detailed data today gives another perspective to simulation as a way of stepping outside of time. If we store as much information about the state of the world as we can, and build a simulated snapshot of it at a point in time, the transhumanist proposition is that with enough data (how much?) this would be indistinguishable from the real thing, at least for the simulated subjects inside the simulated snapshot. The idea is identical to [and identically as depressing as] the afterlife scenarios of many religions and cults. No release for Sisyphos, or for holodeck Moriarty from his “Ship in a Bottle”.

Using statistics or probability to determine the ontological status of your consciousness is problematic for many reasons, among them the transitory nature of conscious experience. For the total tally of fictional consciousnesses, do you count the number of different characters in all the scripts, all the actors, or all the audience members? Does it matter whether all scripted characters are actually “played” to any audience with phenomenal consciousness? Does a simulation character need to have phenomenal consciousness all the time, or just during some key scenes, zoning out as a sleepwalking zombie for most of the time? Since there can be no definite multiplicity for such an ill-defined entity as a consciousness separate from its substrate, counting probabilities from a statistical point of view is as meaningful as arguing about the number of angels that can dance on the point of a needle.

I don’t consider myself a technological “luddite”; on the contrary, I believe that technological progress has the potential to create many more breakthroughs in the future, or even an unprecedented ‘flowering’ of the kind that rarely happens in the evolution of life on Earth (for example, the appearance of fruit-bearing plants (a.k.a. flowers) during the Early Cretaceous). I do dislike the word “singularity” in this context though, especially since it originally means a point where extrapolation breaks down beyond the limits of the current working model. (For example, the financial “singularity” of 2008, when loan derivatives cast off the surly bonds of fundamentals, and left the global markets in free fall.) All flowers have their roots in the earth, and in the past, which they must grow from. No ‘sky-hook’, or ‘future-hook’, can seed the present.

The Pilot of Consciousness

We do not know in detail how human consciousness arose, and we only have direct evidence of our own consciousness. But it is common sense to assume that most normally behaving people are conscious to some degree, and that consciousness is a result of biological processes of the body, and its organs, even though we cannot see it directly, the way we appear to see our own consciousness.

Assumptions about the level of consciousness of other people affect modern society at a deep level. A conscious mind is thought to have free will, to a greater degree than simpler animals or machines do. In criminal law, a person can only be judged on the actions they take consciously, but not while for example sleepwalking or hallucinating.

The old metaphor is that consciousness is to the body as a pilot is to a ship. The pilot of a ship needs information and feedback to do his job, but he does not have direct access to it. Instead, he gets his information secondhand, from the lookouts and from the various instruments: the knotted log line for measuring speed, the compass, and so on. The pilot also does not row the oars himself, or stoke the engines; he just sends instructions below and assumes they will be carried out. The pilot does nothing directly, but all vital information must flow through him in a timely manner. Nor is the steersman’s role the same as the captain’s; piloting work means reacting to the currents and the winds as they happen, not long-term goal-setting or strategic planning.

Ship procession fresco, part 4, Akrotiri, Greece

[Note: The old Greek word for pilot is kubernetes, which is the etymological root of both the ‘cyber-’ and ‘govern-’ families of words.]

Piloting a ship is not always hectic; at times the ship can be safely moored at harbour, or the sailing can be so smooth that the pilot can take a nap. But when the ship is in strange seas, risking the greatest danger from outside forces, pilot-consciousness kicks in fully, alerting all lookouts and bringing the engines to full reserve power, ready to react to whatever happens. When the outside forces show their full might, the pilot is more worried about the ship surviving the next wave than about getting to the destination on time.

The state or level of consciousness is often associated with some feelings of anticipation, alertness, even worry or anxiety; such feelings can even prevent dialing down the level of consciousness to restful sleep, and thereby cause more stress the next day. Pain can only be felt when conscious, hence the cliché of pinching yourself to check if you are dreaming or not. Pathos [the root word for ‘-pathy’ words, like empathy or psychopathy], in all its meanings, is a strong catalyst to rouse consciousness. Only humans are thought to be capable of becoming truly conscious of their own mortality, the conscious mind thus becoming aware of the limits of its own existence.

When the pilot takes over and commandeers a vehicle, the flexibility of consciousness allows him to extend his notion of self to include the vessel. For example, an experienced driver can experience his car sliding on a slick patch of road as a tactile sensation, as if a part of himself were touching the road, and not the tires. In the same way, human consciousness naturally tends to identify itself with the whole individual. Sigmund Freud named the normal conscious part of the mind the ‘ego’, which is Latin for ‘I’. His key observation was that the mind is much more than the ego, and that true self-knowledge requires careful study, which he called psycho-analysis.

Introspection is an imperfect tool for studying one’s own mind, due to the many literal and metaphorical blind spots involved. The ego is very capable of fooling itself. This is why it is not considered safe to attempt psycho-analysis by yourself; you should have guidance from someone who has gone through the process. The same applies to some methods of controlling consciousness through meditation.

There are methods of self-discovery that are less dangerous, such as the various personality tests. To extend the metaphor, different pilots have their own favorite places on the ‘bridge’, their habitual ways of operating the ship, or specific feelings associated with its operations. Your ‘center’ may not be in the same place as someone else’s. For example, a procrastinator waits until the last possible moment to make a decision; it could be that only the imminence and finality of a deadline makes their choices feel ‘right’ or ‘real’ enough to commit to. Another example is risk-seeking/aversion: some people only feel alive when in some amount of danger; others do their utmost to pass risks and responsibilities to other people.

Most pilots become habituated to a specific level of stress when operating the self-ship, and cannot function well without it; the types and levels of preferred stress can vary greatly between individuals. Too much stress, however, can break the pilot and damage the ship. This is also variable between individuals. Hans Eysenck theorized that an individual’s sensitivity to being easily traumatized is correlated with introversion, or even that extraversion could be redefined in terms of tough-mindedness; but there are other models as well, such as psychological ‘resilience’, which supposedly can be trained as a ‘life skill’.

Habits are also something that can be consciously trained, and paying attention to our own habits is very healthy in the long run. Consciousness is tuned to a fairly limited range of timescales; changes that happen too fast or too slowly do not enter consciousness. Daily habits creep slowly, and without photographs it would be hard to believe how much we change over time. Almost all of the atoms and molecules in our bodies are swapped for new ones every few years, yet our sense of identity remains continuous.

Heraclitus says that “a man’s character is his destiny”, and to know thyself means knowing your weaknesses as well as your strengths. Multitasking is a typical weakness that the pilot often mistakes for a strength. Consciousness appears to be the stage where all experience terminates, but the real multitasking happens at the edges; the decision of which competing stimuli enter consciousness is never a completely conscious decision. The same applies to outgoing commands, unfortunately. Completeness of control can be an illusion, a form of magical thinking.

Many philosophers have also been fascinated with the true nature of the biggest ‘blind spot’ of consciousness: consciousness itself. There have been various efforts to formalize the ‘contents’ of consciousness, or to model consciousness in terms of ‘properties’ that some entity may or may not ‘have’. There are inherent limitations to these approaches; they should be taken in the original context of phaneroscopy, without drawing any metaphysical conclusions from them.

Not many deny that life, and consciousness, is a process, and that the human viewpoint is one of moving inexorably forward through Time. The ‘contents’ of consciousness form an unstoppable stream, moving in relation to our self-identity. It seems to us that our mind is anchored to something unmoving and unchanging, with the world changing around it. Yet we identify no specific ‘qualia’ for change or motion, or atomic perceptions of time passing. [There are some thresholds to when we begin recognizing a rhythm, though.]

The true nature of subjective experience may be a ‘hard problem’, but no harder than explaining the true nature of Time. The human condition is to flow from an unchangeable past, inexorably and continuously forward, towards an unknown future, and to only ever be able to act in the present. The pilot role is necessary exactly because the flow that powers all flows cannot be stopped; it can only be navigated.

A Likely Story

Is cosmology a science? Is scientific cosmology even possible, given that it concerns events so unique and fundamental that no laboratory test can truly repeat them? Questions like these pop up often enough, and you can find many good answers to them through e.g. Quora, which I will not repeat here.

For the layman thinker, the difference between truth and lies is simple and clear, and it would be natural to expect the difference between science and non-science to be simple and clear as well. The human brain is a categorizing machine that wants to put everything in its proper place. Unfortunately, the demarcation of science versus non-science is not so clear.

Tischbein - Oldenburg

Karl Popper modeled his philosophy of science on the remarkable history of general relativity. In 1916, Albert Einstein published his long-awaited theory and made sensational predictions, reported in newspapers around the world, that could not be verified until a suitable total eclipse of the Sun (the famous expeditions of 1919). It was almost like a step in classical aristeia, where the hero loudly and publicly claims what preposterous thing he will do, before going on to achieve exactly that. Popper’s ideas about falsification are based on this rare and dramatic triumph of armchair theory-making, not so much on everyday practical science work.

If we want a philosophy of science that really covers most of what gets published as science these days, what we really need is a philosophy of statistics and probability. Unfortunately, statistics does not have the same appeal as a good story, and more often gets blamed for being misleading than lauded as a necessary method towards more certain truths. There is a non-zero probability that some day popularizations of science will be as enthusiastic about P-values, null hypotheses and Bayesian inference as they are today about black holes, dark energy and exotic matter.

Under the broadest umbrella of scientific endeavors, there are roughly two kinds of approaches. One, like general relativity, looks for things that never change: universal rules that apply in all places and at all times. These include the ‘laws’ of physics, and the logical-mathematical framework necessary for expressing them (whether that should include the axioms of statistics and probability, if any, is an open question).

The other approach is the application of such frameworks, to make observations about how some particular system evolves: for example, how mountains form and erode, how birds migrate, how plagues are transmitted, what the future of a solar system or galaxy is, how climate changes over time, what the relationships between different phyla in the great tree of life are, and so on. Many such fields study uniquely evolved things, such as a particular language or a form of life. In many cases it is not possible or practical to “repeat an experiment” starting from the initial state, which is why it is so important to record and share the raw data, so that it can be analyzed by others.

From the point of view of theoretical physicists, it is often considered serendipitous that the fundamental laws of physics are discoverable, and even understandable by humans. But it could also be that the laws we have discovered so far are just approximations that are “good enough” to be usable with the imperfect instruments available to us.

The “luck” of the theorist has been that so many physical systems are dominated by one kind of force, with the other forces weaker by many orders of magnitude. For example, the orbit of the Earth around the Sun is dominated by gravitational forces, while the electromagnetic interactions are insignificant. In another kind of system, for example the semiconducting circuits of a microprocessor, electromagnetism dominates and gravity is insignificant. The dominant physics model depends on the scale and granularity of the system under study (the physical world is not truly scale invariant).
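
As a rough back-of-the-envelope check of that “many orders of magnitude” claim (the constants below are standard textbook values; the particular comparisons are my own illustration, not from the text), one can compare the electric and gravitational attraction within a hydrogen atom, and the gravitational pull that actually governs the Earth’s orbit:

```python
# Rough comparison of force scales, SI units.
G   = 6.674e-11    # gravitational constant, N m^2 / kg^2
k   = 8.988e9      # Coulomb constant, N m^2 / C^2
e   = 1.602e-19    # elementary charge, C
m_p = 1.673e-27    # proton mass, kg
m_e = 9.109e-31    # electron mass, kg

# Between a proton and an electron, electromagnetism wins by ~39 orders of
# magnitude (the distance cancels out, since both forces fall off as 1/r^2).
ratio = (k * e**2) / (G * m_p * m_e)
print(f"electric / gravitational, proton-electron pair: {ratio:.1e}")  # ~2e39

# Between the (electrically neutral) Sun and Earth, gravity is the only player left.
M_sun, M_earth, r = 1.989e30, 5.972e24, 1.496e11
print(f"Sun-Earth gravitational force: {G * M_sun * M_earth / r**2:.1e} N")  # ~3.5e22 N
```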

As the experimental side of physics has developed, our measurements have become more precise. When we gain more reliable decimal places in our physical measurements, we sometimes need to add new theories to account for things like unexpected fine structure in spectral lines. The more precision we want from our theories, the more terms we need to add to our equations, making them less simple, further away from a Pythagorean ideal.

The nature of measurement makes statistical methods applicable regardless of whether measurement errors originate from a fundamental randomness, or from a determinism we don’t understand yet. The most eager theorists, keen to unify the different forces, have proposed entire new dimensions, hidden in the decimal dust. But for such theories to be practically useful, they must make predictions that differ, at least statistically, from the assumed distribution of measurement errors.
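
A minimal sketch of what “differ, at least statistically” can mean in practice; the predicted shift, the error spread and the sample size below are made-up illustrative numbers, not anything from the text:

```python
# If a new theory predicts a small shift in a measured quantity, the shift is only
# detectable if it stands out from the assumed spread of measurement errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

sigma = 1e-6            # assumed standard deviation of the measurement error
predicted_shift = 5e-7  # hypothetical shift predicted by the new theory
n = 100                 # number of repeated measurements

# Simulated data: the true value shifted by the predicted amount, plus noise.
measurements = predicted_shift + rng.normal(0.0, sigma, size=n)

# Test against the null hypothesis of "no shift" (the old theory).
t_stat, p_value = stats.ttest_1samp(measurements, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
# A small p-value means the data can tell the two theories apart;
# if the shift drowns in the noise, the p-value stays large.
```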

Many theorists and philosophers abhor the uncertainty associated with probability and statistics. (Part of this is probably down to individual personality, some innate unwillingness to accept uncertainty or risk.) To some extent this can be a good thing, as it drives them to search for patterns behind what at first seems random.

But even for philosophers, statistics could be more than just a convenient box labeled ‘miscellaneous’. As in the Parmenides dialogue, even dirt can have ideal qualities.

Even though statistics is the study of variables and variability, its name comes from the same root as “static”. When statistics talks about what is changeable, it always makes implicit assumptions about what does not change, some ‘other’ that we compare the changes against.

It is often said that statistical correlation does not imply causation, but does cosmic causation even make sense where cosmic time does not exist? Can we really make any statistical assumptions about the distribution of matter and energy in the ‘initial’ state of all that exists, if that includes all of space and time?

One of the things that Einstein was trying to repair when working on general relativity was causality, which was under strain in the 1905 version of relativity: the time order of events too far apart to be connected by light depends on the movement of the observer, so any instantaneous influence (such as Newtonian gravity) could appear to arrive before it was sent. General relativity fixed this so that physical events always respect the timeline of any physical observer, but only by introducing the possibility of macroscopic event horizons and strange geometries of observable spacetime. The nature of event horizons prevents us from observing any event that could be the primal cause of all existence, since it would lie outside the timeline from our point of view. We can make estimates of the ‘age’ of the Universe, but this is a statistical concept; no physical observer experiences time according to the clock that measures that age.

Before Einstein, cosmology did not exist as a science. At most, it was thought that the laws of physics would be enough to account for all the motion in the world, starting from some ‘first mover’ who once pushed everything in the cosmos into action. This kind of mechanistic view of the Universe as a process, entity or event, separate from but subservient to a universal time, is no longer compatible with modern physics. In the current models, the continuity of time is broken not only at event horizons, but also at the Planck scales of time and distance. (Continuing the example in Powers of Two, the Planck length would be reached in the sixth chessboard down, if gold were not atomic.)
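
For a sense of how far down the Planck scale is, independent of the chessboard framing (the one-metre starting length below is just an assumption for illustration):

```python
import math

planck_length = 1.616e-35  # metres
start = 1.0                # metres, an arbitrary everyday length

# Number of successive halvings needed to get from one metre down to the Planck length.
print(round(math.log2(start / planck_length)))  # ~116
```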

Why is causality so important to us, that we would rather turn the universe into Swiss cheese than part with it? The way we experience time as a flow, and how we maintain identity in that flow, has a lot to do with it. Stories, whether formed in language as sequences of words or simply remembered as sequences of images, dreams and songs, are deeply embedded in the human psyche. Our very identities as individuals are stories, stories are what make us human, and plausible causes make plausible stories.

Knowledge, Fast and Slow

Ars longa, vita brevis

Due to the shortness of human life, it is impossible for one person to know everything. In modern science there can be no “renaissance men” with a deep understanding of all the current fields of scientific knowledge. Where it was possible for Henri Poincaré to master all the mathematics of his time, a hundred years later no one in their right mind would attempt a similar mastery, due to the sheer amount of published research.

A large portion of the hubris of the so-called renaissance men, like Leonardo da Vinci, can be traced to a single source: the books on architecture written by Vitruvius more than a thousand years earlier, rediscovered in 1414 and widely circulated by a new innovation, the printing press. In these books, dedicated to Emperor Augustus, Vitruvius describes what kind of education is needed to become an architect: nothing less than enkuklios paideia, universal knowledge of all the arts and crafts.

Of course an architect should understand how a building is going to be used, and how light and sound interact with different building materials. But some of the things that Vitruvius writes are probably meant as indirect flattery to his audience and employer, the first emperor. Augustus would likely have fancied himself “the architect” of the whole Roman Empire, in both the literal and the figurative sense.

Paideia was a core Hellenic tradition; it was how knowledge and skills were kept alive and passed on to future generations. General studies were attended until the age of about 12, after which it was normal to choose your future profession and start an apprenticeship in it. But it was also not uncommon for some aristo to send their offspring to an enkuklios paideia, a roving apprenticeship. They would spend months, maybe a year at a time, learning from the masters of one profession, then move to another place to learn something completely different for a while. A born ruler would not need any single profession as such, but some knowledge of all professions would help him rule (or alternatively, human nature being what it is, the burden of tolerating the privileged brats of the idle class must be shared by all (“it takes a village”)).

Chiron instructs young Achilles - Ancient Roman fresco

Over the centuries, enkuklios paideia transformed into the word encyclopedia, which today means a written collection of current knowledge in all disciplines. As human knowledge is being created and corrected at accelerating rates, printed versions are becoming outdated faster than they can be printed and read. Online encyclopedias, something only envisioned by people like Douglas Engelbart half a century ago, have now become a daily feature of life, and most written human knowledge is in principle available anywhere, anytime, as near as the nearest smartphone.

Does that mean that we are all now vitruvian architects, renaissance geniuses, with working knowledge of all professions? Well, no: human life is still too short to read, let alone understand, all of Wikipedia, or to keep up with its constant changes. And not everything can be learned by reading or even watching a video; some things can only be learned by doing.

For the purposes of this essay, I am stating that there are roughly two types of knowledge that a human can learn. The first one, let’s call it epistemic knowledge, consists of answers to “what” questions. This is the kind of knowledge that can be looked up or written down fast; for example, the names of people and places, numeric quantities, articles of law. Once discovered, like the end result of a sports match, they can be easily distributed all around the world. But, if they are lost or forgotten, they are lost forever, like all the writings in languages we no longer understand.

The other type of knowledge I will call technical knowledge, consisting of answers to “how” questions. In a sense technical knowledge is any acquired skill that is learned through training, that eventually becomes second nature, something we know how to do without consciously thinking about it. Examples are the skills that all children must learn through trial and error, like walking or speaking. Even something as complex as driving a car can become so automatic that we do it as naturally as walking.

[Sidenote: the naming of the two types here as “epistemic” and “technical” is not arbitrary; they are based on two ancient Greek words for knowledge, episteme and techne.]

The division into epistemic and technical knowledge is not a fundamental divide, and many contexts have both epistemic and technical aspects. Sometimes the two even depend on each other, as names depend on language, or writing depends on the alphabet.

Both kinds of knowledge are stored in the brain, and can be lost if the brain is damaged somehow. But whereas an amnesiac can be just told what their name and birthday is, learning to ride a bicycle again cannot be done by just reading a wikipedia article on the subject. The hardest part of recovering from a brain injury can be having to relearn skills that an adult takes for granted, like walking, eating or speaking.

In contrast to epistemic knowledge, technical knowledge can sometimes be reconstructed after being lost. Even though no documents readable to us have survived from the stone age, we can still rediscover what it may have been like to work with stone tools, through experimental archaeology.

Technical knowledge also exists in many wild animals. Younger members of the pack follow the older ones around, observe what they do and try to imitate them, in a kind of natural apprenticeship. Much has been said about the so-called mirror neurons that are thought to be behind this phenomenon, in both humans and animals.

New techniques are not just learned by repetitive training and imitation; entirely new techniques can be discovered in practice. Usually some competitive drive is present, as in sports. For example, the high jump sets its goal in the simplest of terms: jump over this bar without knocking it off. Yet it took years before someone tried something other than the “scissors” technique. Once the superiority of a new jumping technique became evident, everyone started to learn it, and to improve on it, thus raising the bar for everyone.

New techniques offer significant competitive advantages not only in sports, but also in the struggles between nations and corporations. Since we are so good at imitating and adapting, the strategic advantage of a new technique will eventually be lost, if the adversary is able to observe how it is performed. The high jump takes place in front of all, competitors and judges alike, and everything the athlete does is potentially analyzed by the adverse side. (This does not rule out subterfuge, and the preparatory training can also be kept secret.)

Around the time of the Industrial Revolution, it became apparent that tools and machines can embody useful technical knowledge in a way that is intrinsically hidden from view. Secret techniques that observers cannot imitate even in their imaginations are, to them, indistinguishable from magic. To encourage inventors to disclose new techniques, while still gaining a temporary competitive advantage in the marketplace, the patent system was established. Since a patent would only be granted if the technique was disclosed, everyone would benefit, and no inventor need take their discoveries to the grave with them for fear of them being “stolen”. Today international patent agreements cover many countries, and corporations sometimes decide to share patent portfolios, but nations have also been known to classify some technologies as secret for strategic military purposes.

Even though technical knowledge is the slow type of knowledge, it is still much easier to learn an existing technique from someone than it was for that someone to invent, discover or develop it in the first place. This fact allows societies to progress, as the fruits of knowledge are shared, kept alive and even developed further. One area where this may not apply so well is the arena of pure thought, since it mostly happens hidden from view, inside the skull. This could be one reason why philosophy and mathematics have always been associated with steep learning curves. Socrates never believed that philosophy could be passed on by writing books; only dialogue and discussion could be truly instructive, the progress of thought being made more explicit thereby. This is also why rhetoric and debate are often considered a prerequisite for studying philosophy (though Socrates had not much love for the rhetors of his time either).

Of all the tools we have developed, digital computers seem the most promising candidates for managing knowledge outside of a living brain. Words, numbers and other data can be encoded as digital information, stored and transported reliably from one medium to another, at faster rates than with any other tool available to us. Most of it can be classified as the first type of knowledge, the kind that can be looked up in a database management system. Are there also analogues of the second type of knowledge in computers?

In traditional computer programming, a program is written, tested and debugged by human programmers, using their technical skills and knowledge and all the tools available to them. These kinds of computer programs are not written just for the compiler; the source code needs to be understood by humans as well, so that they know that (and how) it works, and can fix it or develop it further if needed. The “blueprint” (i.e. the software part) of a machine can be finalized even after the hardware has been built and delivered to the customer, but it is still essentially a blueprint designed by a human.

Nowadays it is also possible for some pieces of software to be trained into performing a task, such as recognizing patterns in big data. The development of such software involves a lot of testing, of the trial-and-error kind, but not algorithmic programming in the traditional sense. Some kind of adaptive system, for example an artificial neural network, is trained with a set of example input data, guided to imitate the choices that a human (or other entity with the knowledge) made on the same data. The resulting, fully trained state of the adaptive system is not understandable in the same way that a program written by a human is, but since it is all digital structure, it can be copied and distributed just as easily as human-written software.
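
A minimal sketch of that workflow; the digits dataset and the tiny network below are illustrative choices of mine, not anything from the text. Example inputs come with human-provided labels, an adaptive system is fitted to imitate them, and the fully trained state is then just data that can be copied like any other file:

```python
# Training-by-example instead of explicit programming: a small neural network
# learns to imitate human-provided labels on example data.
import pickle
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Example inputs (handwritten digits) with the labels a human assigned to them.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The adaptive system is fitted to the examples by iterative adjustment,
# not coded by hand.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("imitation accuracy on unseen examples:", model.score(X_test, y_test))

# The trained state is just digital data: "learning mode" is now off,
# and the model can be copied and shipped like any other file.
with open("trained_model.pkl", "wb") as f:
    pickle.dump(model, f)
```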

This kind of machine learning has obvious similarities to the slow type of knowledge in animals. The principles are the same as teaching a dog to do a trick, except in machine learning we can just turn the learning mode off when we are done training. And of course, machines are not actively improving their skills, or making new discoveries as competing individuals. (Not yet, at least.)