What would a real cloud city look like?

There have been floating cities in fantasy stories for ages. From Hyrule to Urth to Malazan, a fantasy world as imagined today is not really complete without some airborne structures slowly drifting over the landscape. A massive, mountain-sized citadel suspended in mid-air can also be a breathtaking visual element in the hands of a skillful artist, like these three:

cloud cities 1

Flavio Bolla, J Otto Szatmari, Robert McCall

My aim in this article is to try to visualize what a real cloud city would look like, either on Earth or on another planet with an atmosphere (like Venus for example), while staying within the known limits of physics and material science.

That means no antigravity, tractor beams, unobtanium, or anything else indistinguishable from magic. This still leaves us with many forms of physical lift and thrust, most of which consume energy to keep a mass airborne. The only physical form of lift which does not require continuous power, and is therefore perfect for continuous suspension in mid-air, is buoyancy.

What exactly is buoyancy? A scientific description of it will usually include Archimedes’ Principle (with the famous shout “eureka!”), and give mathematical formulas for calculating the force in different scenarios [e.g. this page from NASA: https://www.grc.nasa.gov/www/k-12/WindTunnel/Activities/buoy_Archimedes.html].

For an aspiring architect wishing to design structures that float in mid-air, there is a simpler, more practical way to define buoyancy:

(1) The density of a (neutrally) buoyant body is equal to its surroundings.

In other words, for a Cloud City to stay aloft by buoyancy alone, its total average density must be the same as the air that surrounds it. If you determine the total mass of your Cloud City, you also determine its total volume at a given altitude and air pressure, and vice versa.
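
To make rule (1) concrete, here is a minimal back-of-envelope sketch in Python. The city mass and the sea-level air density used below are illustrative assumptions of mine, not figures from any real design:

```python
# Rule (1) as arithmetic: a neutrally buoyant body's volume is fixed
# by its total mass and the density of the surrounding air.

AIR_DENSITY_SEA_LEVEL = 1.225  # kg/m^3, standard sea-level air (assumed)

def required_volume(total_mass_kg, air_density=AIR_DENSITY_SEA_LEVEL):
    """Volume a neutrally buoyant body must occupy at a given air density."""
    return total_mass_kg / air_density

# An illustrative 10,000-tonne cloud city near sea level:
volume = required_volume(10_000_000)
print(f"{volume:.2e} m^3")                   # roughly 8.2 million cubic meters
print(f"cube side {volume ** (1/3):.0f} m")  # a cube about 200 m per side
```

Note how unforgiving the ratio is: every added tonne of mass demands another 800-odd cubic meters of volume at sea level, and more at altitude where the air is thinner.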

The art of making buoyant structures that carry things that are denser than air (like people, plants and water), is to attach them somehow to materials and shapes that are lighter than the surrounding air. If you can attach your denser-than-air components to lighter-than-air components so that their sum averages out to exactly-as-dense-as-air, the total package becomes neutrally buoyant.

This art of attaching heavier things and lighter things to each other gives the designer freedom to use any materials available, but rule (1) remains as a system-level constraint in any design. As long as you keep the total volume/mass ratio constant, you can distribute the mass of your city as you like, leading to the rule of thumb corollary of (1):

(2) Buoyancy = weight distribution.

In 1670, long before the first manned flight, Francesco Lana of Brescia published the design for his aerial ship. In his design, the lighter-than-air elements were large air-tight spheres with all air pumped out. These vacuum spheres were most likely inspired by Magdeburg hemispheres, an invention by Otto von Guericke demonstrated in 1654.

terzi_aerial_ship

P. Francesco Lana

This type of “vacuum airship” has never been demonstrated at scale, because no existing material has sufficient strength-to-mass characteristics to form stable vacuum containers that are lighter than air. [We are, however, living in the golden age of material science, with new materials being engineered constantly. Vacuum airships are not completely ruled out as a possibility yet.]

All buoyant airships that have been built and flown have been based on “lifting gases”, like helium. Due to the thermodynamic nature of gases, the density of a volume filled with gas at a given pressure is closely related to the molar weight of its constituents. Hydrogen and helium, the first elements of the periodic table, have the smallest molar weights [molecular hydrogen, atomic helium], and thus they are the gases with the most lifting power; in other words, their natural densities at a given pressure are the smallest of all known gases.
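
The relation between molar mass and lifting power can be sketched with the ideal gas law (a simplification; real gases deviate slightly, and sea-level pressure and temperature are assumed here):

```python
# Ideal-gas density rho = p*M/(R*T): lighter molecules -> lower density
# -> more net lift per cubic meter of envelope.

R = 8.314  # J/(mol*K), universal gas constant

def gas_density(molar_mass_kg_mol, pressure_pa=101_325, temp_k=288.15):
    return pressure_pa * molar_mass_kg_mol / (R * temp_k)

M_AIR, M_H2, M_HE = 0.02897, 0.00202, 0.0040026  # kg/mol

rho_air = gas_density(M_AIR)  # ~1.23 kg/m^3
for name, M in (("hydrogen", M_H2), ("helium", M_HE)):
    lift = rho_air - gas_density(M)
    print(f"{name}: ~{lift:.2f} kg of lift per m^3")
```

Helium delivers roughly 93% of hydrogen's lift per cubic meter, which is why the two are nearly interchangeable as lifting gases despite hydrogen having half the molar mass.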

I will not go into details about the evolution of buoyant air vehicles here. Those interested can read about their history in many books, like the profusely illustrated Travels In Space: A History of Aerial Navigation by F. Seton Valentine and F. L. Tomlinson (Hurst and Blackett, London 1902) [readable e.g. via archive.org]

Even though heavier-than-air vehicles have pretty much conquered the commercial airspace since the mid-1900s, there are still fans of lighter-than-air airships. Here is one futuristic concept illustration for an “aerial cruiseship”:

Dassault-concept

Dassault

As estimated by Dan Grossman at airships.net, the mass of the water in the pool alone would require a ship 4.5 times the size of the Hindenburg to lift. [And from the snowiness of the peaks in the background the thing must be kilometers above sea level, requiring an even bigger volume to compensate for the thinner air!]

Another problem with the DS-2020 concept image is that the massive pool is on top of the ship, apparently above the helium gas bags. This puts the whole ship in an unstable configuration, with its center of weight (marked with a red cross) above its center of buoyancy (marked with a blue cross). Without some massive active stabilization system, the whole ship would capsize as soon as it lifts off. (Which isn’t necessarily a bad thing; for my money I would much prefer to swim in a glass-bottom pool over vast scenery than in a conventional one.)

Remembering rule of thumb (2), buoyancy is about mass distribution. In all real airships so far the mass distribution has always been asymmetric, leaving the ship with a center of buoyancy and a center of weight separated by some distance. These kinds of asymmetric bodies have a preferred stable orientation, with their center of weight directly below their center of buoyancy.
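
A hedged sketch of this restoring tendency: with the center of weight a distance d below the center of buoyancy, tilting the body by an angle θ produces a righting torque of roughly m·g·d·sin θ. The blimp mass and separation below are made-up illustrative numbers:

```python
import math

G = 9.81  # m/s^2, Earth surface gravity

def righting_torque(mass_kg, center_separation_m, tilt_rad):
    """Restoring torque pulling a tilted buoyant body back toward the
    orientation where its weight center hangs below its buoyancy center."""
    return mass_kg * G * center_separation_m * math.sin(tilt_rad)

# A 10-tonne blimp with centers 5 m apart, tilted 10 degrees:
tau = righting_torque(10_000, 5.0, math.radians(10))
print(f"~{tau / 1000:.0f} kN*m of righting torque")

# With the centers collocated there is no preferred orientation at all:
assert righting_torque(10_000, 0.0, math.radians(10)) == 0.0
```

The zero-separation case is why a body with collocated centers has no stable orientation: there is simply no torque to right it.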

The natural orientation of the two centers can be utilized in blimps to control attitude. Here is a simplified sketch animation of a blimp shifting its center of buoyancy back and forth using its ballonets, causing a pitching motion since the center of weight is not moving:

blimpanimtext_640s

[Pitch control can of course also be accomplished by moving the center of gravity; in fact, this was the method used in early airships like the first Zeppelin. Ballonets are a more lightweight technique for the purpose, and they are needed in non-rigid and semi-rigid ships anyway, to maintain their shape at different altitudes.]

The location of the centers of buoyancy and gravity in a buoyant body is a result of mass distribution. I have used density diagrams that are helpful in approximating the location of the centers. The center of buoyancy is the geometric center of the whitest areas, which are less dense than air (the gray background). Conversely, the center of weight is the center of areas that are darker/denser than the background/air.

[In theory it is also possible to form a buoyant body where the centers of buoyancy and gravity are collocated. Such designs would not have any stable orientation, so they would just rotate around their centers like soap bubbles in air.]

Another very common, but ultimately unrealistic, design is to make a cloud city flat and wide, as if it were floating on top of water. A wide design was suggested by the milliner E. Petin as early as around 1850:

Petin viewing airship platform

The four wide and flat parts around the mid-height of the balloons are not awnings or roofs; they were meant as control surfaces: slanted during ascent or descent, they were supposed to transfer part of the vertical momentum into horizontal travel. The Petin ship was never built at full scale (or was not allowed by the authorities to be unmoored, according to some reports), so how the ship would have moved in air remains guesswork.

Surface marine ships can have a metacenter, meaning that they can be stable even when their center of gravity is above their center of buoyancy. [This is also the case, for example, when you lie on top of an air mattress in a pool.] But an airship is not a surface ship; it is totally immersed in air and cannot use any surface effects. It is a mistake to think that spreading the mass of a cloud city wide, as in a surface ship, would make it more stable.

The atmosphere of Venus is naturally denser than that of Earth, so it is possible to build airborne cloud cities on Venus where the breathing air also works as a lifting gas. Combining living space and lifting volume in one enclosure makes flatter, wider designs possible, but that is not necessarily a good thing for stability. Here is an exaggerated animation of what happens in a wide cloud city design when mass moves around:

dome_unstable

Because the centers are so close together, even a slight shift in the center of gravity tilts the whole structure at a steep angle.

The dome design can be improved by increasing the distance between the force centers, for example by making the dome taller and lowering as much mass as possible under the decks to work as a counterweight. Similar movements now cause smaller tilt angles:

dome_counterweight
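
The effect of the counterweight can be quantified with a small sketch: if the center of gravity shifts sideways by dx, the new equilibrium tilt satisfies tan θ = dx/d, where d is the separation between the force centers. The numbers below are illustrative assumptions, not taken from any real design:

```python
import math

def equilibrium_tilt_deg(cg_shift_m, center_separation_m):
    """Tilt angle at which the shifted weight center again hangs
    directly below the buoyancy center."""
    return math.degrees(math.atan2(cg_shift_m, center_separation_m))

dx = 2.0  # meters of sideways center-of-gravity shift
for d in (5.0, 20.0, 50.0):  # taller dome / deeper counterweight
    print(f"d = {d:4.0f} m -> tilt {equilibrium_tilt_deg(dx, d):4.1f} deg")
```

For small shifts, doubling the separation roughly halves the tilt, which is why lowering mass well under the decks pays off so handsomely.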

For best stability, the structure could be equipped with an automatic stabilizing mechanism, shifting the counterweight to keep the center of gravity always aligned:

dome_stabilized

The Sultan of the Clouds is a 2011 science fiction novella by Geoffrey A. Landis, set in a future Venus cloud colony. Landis is also a scientist, and has advocated Venus cloud-tops as the most suitable location for human colonization in the Solar system. Here are some cover illustrations for the award-winning novella:

sultan covers

Dan Shayu, Aurélien Police, Jeroen Advocaat

Hypatia, the cloud city where the story takes place, consists of many kilometer-wide geodesic domes. The breathable air, which also serves as the lifting gas, is contained by millions of millimeter-thin polycarbonate plates joined together on a graphite-based frame.

Although it may not be clear from the illustrations, the text also mentions a counterweight under the city, “a rock the size of Gibraltar”, so the design should have the correct stable orientation thanks to its asymmetric force centers. [No mention is made of how stability is maintained when people and goods move around.]

Even though the transparent domes of Hypatia are made of millions of individual panels, its denizens are not worried about its stability:

“Here, you know, there is no pressure difference between the atmosphere outside and the lifesphere inside; if there is a break, the gas equilibrates through the gap only very slowly. Even if we had a thousand broken panels, it would take weeks for the city to sink to the irrecoverable depths.”

This may be the case if the breaches are all at the same height and there is no wind. But what if there are simultaneous breaches at the top and the bottom of the dome? Like hollowing out an egg, the Venus atmosphere would flow in from the bottom hole at the same rate that breathing air flows out from the top hole, the pressure staying constant the whole time.

Here is the (again, very exaggerated) animation of a catastrophic failure, starting with a mechanical failure in stabilization, followed by simultaneous breaches at opposite ends of the dome:

dome_fail

It should also be noted that a dome that contains the main buoyant part of the city is not resting on top of the ground, like geodesic domes on Earth. Once it is filled with a lifting gas, the dome is pulling the rest of the city up, with a force equal to the full weight of the rest of the city.

Here is the Venus poster from NASA JPL “Visions of the Future”. The design is of course very stylized, but the conical underpart of the city would make a sturdier housing for the counterweight, with multiple A-frames. The counterweight could even be made of something useful to the function of the city that is simply located low for stability.

venus jpl poster 50

JPL/NASA, Jessie Kawata

The first known (by me at least) proposal for a Venus floating city is this design from 1971 by Russian engineer and science fiction writer Sergei Zhitomirsky. There is no counterweight as such, just separation of machinery and equipment to different levels, but the tallness of the dome contributes to stability by raising the center of buoyancy.

tm1971

Tekhnika Molodezhi 9/1971, Nikolai Rozhkov

[Zhitomirsky also mentions the possibility of using a helium-oxygen mixture instead of nitrogen-oxygen air as the lifting gas. Such heliox mixtures would theoretically work as lifting gases in the Earth atmosphere as well, if the dome is large enough. So far I am not aware of anyone ever attempting to fly inside a heliox balloon on Earth.]

While these large domes are stylish and futuristic, real cities usually grow naturally, part by part. If that growth happens in mid-air (as it must on Venus), maintaining rule (1) the whole time can be tricky.

Water life can be a source of inspiration for buoyant designs that grow. Some marine molluscs, like the nautilus and spirula, have buoyant air chambers inside their hard shells. As the creature grows in size and mass, it adds new chambers one by one into its shell, creating a beautiful self-similar geometry.

Beautiful as they are, the designs of buoyant sea life may not translate directly to buoyant air structures: because the density difference between living matter and air is more drastic than the density difference between living matter and water, buoyant elements in aerial structures need to be much larger in relative volume.
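
The contrast can be put in numbers with a density-averaging sketch. The tissue, water, and gas densities below are rough illustrative figures, not measured values:

```python
# For neutral buoyancy, the volume fraction f of buoyant chambers
# satisfies: f*rho_gas + (1 - f)*rho_body = rho_fluid.

def buoyant_fraction(rho_body, rho_fluid, rho_gas=0.0):
    """Volume fraction that must be buoyant chambers (density rho_gas)."""
    return (rho_body - rho_fluid) / (rho_body - rho_gas)

# Nautilus-like tissue (~1060 kg/m^3) in seawater (~1025 kg/m^3):
print(f"in water: {buoyant_fraction(1060, 1025):.1%} chambers")
# The same tissue in sea-level air (~1.2 kg/m^3), hydrogen chambers:
print(f"in air:   {buoyant_fraction(1060, 1.2, 0.09):.1%} chambers")
```

A few percent of chamber volume suffices in water; in air, the chambers must make up essentially the whole body.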

So what is the answer to my question, what would a real cloud city look like? Well, like cities on the ground, there is no single design that all must follow. From the images in this article, the painting “New World Coming” by J Otto Szatmari is to me perhaps the most realistic, with its vertical shape and the obvious gravity of the mass suspended from multiple balloons. More of his paintings are visible here: https://jotto.artstation.com/

Adrift In Middle Cloud Layer – Notes On The Airborne Colonization Of Venus

The best level of living on a planet may not always be the obvious surface. While Moon and Mars settlements could be built inside lava tubes and other underground burrows, on Venus the best level for a human colony is above ground (way above). There is no need to set foot on a planet like Venus to inhabit it; any “first step” by humans on its surface would be a symbolic gesture only, with little practical benefit for actual colonization.

Living on the ground or in caves is something humans know very well. Living in structures that float high in the clouds however is unlike anything that we have ever done before, and presents many new challenges. To achieve it requires both imagination and careful consideration. The following are some of my notes on the subject.

Vehicle classification based on movement

Airborne vehicles on Venus could be roughly classified into three main types by their movement style: flyers, kites and drifters.

Flyers are actively powered aircraft that continuously move in the air. Their design is aerodynamic, similar to airplanes on Earth. By flying continuously eastward against the superrotation of the atmosphere, a flyer could stay in daylight perpetually, perhaps even be solar powered and not require large energy storage.

Kites are tethered airships that are attached to the surface of Venus with long cables. Unless their “anchors” are moving, they will naturally follow the surface day cycle of Venus, about 1400 hours of continuous daylight followed by an equal duration of moonless night. Their design needs to be aerodynamic, and like kites on Earth, they would be able to “sail” against the prevalent winds to maintain altitude with very little power. Cable systems that reach tens of kilometers above ground would need to have multiple “kytoons” along their lengths, to distribute the weight of the cable itself.

Drifters are freely floating lighter-than-air airships that mostly follow along the existing air currents, though they should be capable of small course corrections when needed. The movement of cloud patterns on Venus suggests that passive drifting would result in a cycle of about 40-50 hours of daylight and 40-50 hours of nighttime, varying depending on altitude and latitude. While drifting the effective airspeed is almost zero, so a drifter does not always have to maintain a streamlined aerodynamic shape. But it is good for drifters to be somewhat “dirigible” [i.e. “steerable”], even capable of powered flight when needed.

This classification may seem arbitrary, but knowing the intended purpose of a ship is helpful in making design decisions. The following notes are mostly intended for the passive drifter type ships, but some may apply to the other types of ships as well.

Yellow and Blue Air

When building and maintaining a ship based on containment and separation of different gases, it is important to be clear about what gases go where, especially since many of them look identical to human senses. This is why the following color-based nomenclature is suggested:

Yellow air stands for any Venus-atmosphere-based gas mixture, not breathable by humans. It is used both for the raw atmosphere and for air that has been scrubbed of toxins but still contains too much CO2.

Blue air is any gas mixture which is breathable by humans. As we know, normal Earth atmosphere is less dense than normal Venus atmosphere, so blue air can typically also be used as a lifting gas. In airship construction, “blue” areas mean all areas of the ship where humans can breathe unaided.
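
A rough ideal-gas sketch of the lifting power of blue air in yellow air; the 1 bar / 300 K conditions approximate the ~50 km colony altitude discussed later, and the figures are estimates only:

```python
# At equal pressure and temperature, density scales with molar mass,
# and CO2 (the bulk of yellow air) is ~1.5x heavier than Earth air.

R = 8.314  # J/(mol*K), universal gas constant

def rho(molar_mass_kg_mol, p_pa=100_000, temp_k=300.0):
    return p_pa * molar_mass_kg_mol / (R * temp_k)

M_CO2, M_AIR = 0.04401, 0.02897  # kg/mol
lift = rho(M_CO2) - rho(M_AIR)
print(f"~{lift:.2f} kg of lift per m^3 of blue air")  # ~0.6 kg/m^3
```

Compared with hydrogen on Earth (about 1.1 kg of lift per cubic meter at sea level), breathable air on Venus is a respectable lifting gas, but a habitat still needs a large volume per tonne carried.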

This shorthand vocabulary makes it easier to discuss the engineering of an airship for Venus, but it could also be adopted in the actual colonization itself; for example, any tanks, valves and pipes handling gases on board the ship could be appropriately color-coded, to help ensure correct operation under any conditions. [The couplings could even be designed to be purposely non-compatible, to make accidental mixing less likely.]

[There are of course more colors available for naming gas mixtures; how about “red air” for mixtures of hydrogen and oxygen?]

Colony altitudes: 50-55 km

Choosing the altitude for human habitation is a matter of trade-offs. At about 50 km the atmospheric pressure is the same as at sea level on Earth, but daytime temperatures can be up to 20 degrees higher than in the hottest climates on Earth. At 55 km, the atmospheric pressure is about the same as on Earth at 5.5 km above sea level (for comparison, the Mt Everest base camp is at 5.3 km elevation), and the temperature is a fairly pleasant 300 K.

Besides pressure and temperature, altitude also affects the speed of the air currents that push the drifter onward. The amount of radiation that the ship receives is also affected by altitude: this includes not only harmful cosmic radiation, but also the amount of sunlight that penetrates the clouds during daytime. A ship for humans should be designed to handle the range of temperatures, pressures, and haze conditions at these altitudes, with some tolerance to spare.

Indoor temperature in the blue areas can be kept constant by active cooling as needed, utilizing some power source to transfer excess heat outside the ship. The outside of the ship should also reflect most wavelengths of light, and window materials should be selective in which wavelengths they pass in. Too much insulation may increase the greenhouse effect; thermally conductive elements in the roof could help with passive cooling.

Indoor pressure of blue air does not need to be kept constant regardless of altitude; it can be adjusted according to the outside pressure to avoid straining the structure of the ship. [For safety reasons, the blue areas should still be kept slightly overpressured compared to the outside yellow air, to keep inevitable leaks pushing out instead of in.] The partial pressure of oxygen should always be kept at the Earth sea-level value, to avoid both mountain sickness and oxygen toxicity. In practice this would mean adjusting only the nitrogen component of blue air to account for pressure changes; this could be achieved using separation devices, for example pressure swing adsorbers, to separate and capture nitrogen out of mixed blue air.
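
The mixing rule can be sketched in code, assuming the Earth sea-level O2 partial pressure of about 21.2 kPa; the cabin pressures below are illustrative:

```python
O2_PARTIAL_KPA = 21.2  # Earth sea-level oxygen partial pressure

def blue_air_mix(total_pressure_kpa):
    """O2 and N2 mole fractions keeping the O2 partial pressure constant."""
    o2 = O2_PARTIAL_KPA / total_pressure_kpa
    return o2, 1.0 - o2

for p in (101.3, 80.0, 60.0):  # cabin pressure tracking the outside air
    o2, n2 = blue_air_mix(p)
    print(f"{p:5.1f} kPa cabin: O2 {o2:.1%}, N2 {n2:.1%}")
```

Note that at lower cabin pressures the oxygen fraction rises, so flammability would be one more thing to keep an eye on in a real design.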

Buoyancy and altitude control

Drifters are designed to be as passive as possible, but they still need to be able to maintain and control their altitude, perpetually and under all conditions, if permanent habitation is the goal. If the ship ascends too high, or descends too low, the integrity of the whole ship is endangered, with the ambient pressure either tearing it apart or crushing it. In practice, automatic altitude adjustments would be based primarily on readings from barometers and thermometers connected to the outside air through static ports, so they would work even without accurate altitude measurements.

Any habitat that can support human life will have to be thousands of kilograms in total mass, most of it denser than yellow air. To stay at an altitude by buoyancy alone, the lift from the buoyant elements must exactly counteract the gravitational pull. There is no force holding an airship in place, just different forces pulling it in different directions that need to be equalized. An air “station” should be able to stay afloat even during power shortages; this means defaulting to either neutral or positive buoyancy.

A station should also be able to take on new cargo or send out drone ships. These scenarios imply that the ship’s total mass can change suddenly, and buoyancy must be adjusted simultaneously if altitude is to be kept. Existing lighter-than-air ships on Earth do not have the ability to quickly change the amount of lift provided by their gas bags, so new designs would need to be engineered. Dropping ballast or venting lifting gas are not long-term solutions for adjusting lift on Venus.

Adjustable lift elements could be designed with multiple compartments, ballonets, or as one big compressible envelope with adjustable volume. In each case, reserve tanks of pressurized lifting gas are needed, with pumps to inflate and deflate the gas bags.
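
A sketch of the bookkeeping involved when cargo arrives: the densities below are rough figures assumed for ~50 km on Venus (CO2 outside, blue air as the lifting gas), used here only for illustration:

```python
def volume_change_m3(added_mass_kg, rho_outside=1.76, rho_gas=1.16):
    """Extra lifting-gas volume needed to stay neutrally buoyant
    after taking added_mass_kg of cargo on board."""
    return added_mass_kg / (rho_outside - rho_gas)

# Receiving a 500 kg drone ship:
print(f"expand gas volume by ~{volume_change_m3(500):.0f} m^3")
```

The same formula run in reverse (negative mass) gives the volume the gas bags must shed when cargo departs, which is why the reserve tanks and pumps need to work in both directions.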

The actual transfer of mass onto or off the ships should of course be done carefully. For example, drone ships landing on board should turn off their engines gradually, so that the receiving ship has time to adjust its buoyancy to the added mass.

Launch altitudes: 70-75 km

Since Venus colonies will be airborne, any launches of space vehicles will also need to happen from airborne platforms. To avoid wasting rocket fuel fighting air resistance, it makes sense to launch rockets from the highest altitude achievable with buoyancy. If colonization is successful, it should be possible to manufacture rocket fuel from material harvested from yellow air at colony altitudes, then lift the rocket, along with its fuel and payload, up to thinner atmosphere using special high-altitude balloons for launching.

At an altitude of 70-75 km, the air is thin and cold, much like 20-25 km above Earth [or surface levels on Mars], and this is about as high as balloons can be made to carry heavy cargo. Compared to colony altitude, visibility is also better, and the distance makes the colony airships below safer from launch accidents. The pressure is below the Armstrong limit, so humans must wear space suits or stay inside a pressurized vehicle.

Coordinating a launch is delicate and requires guidance and observation from multiple ships in the air and in orbit (ground observation will not be practical on Venus, for several reasons). At high altitude, a “rockoon” does not need to be fired straight vertically; it can be aimed at an angle so it does not hit the balloon from which it hangs. But it is best to launch with a targeted heading, which requires some maneuvering capability in the launch platform. Once the rocket has successfully ignited its engines, it can be carefully released from the balloon platform to begin its burn.

Recovery of the high-altitude balloon platform is perhaps desirable, but not easy. Without the weight of the multi-ton rocket pulling it down, the balloon will quickly rise too high to maintain its integrity, and will burst if not vented. In theory it might be possible to compress hydrogen from the balloon quickly with electrochemical hydrogen compressors after the release, and start a more controlled descent. Such a system could potentially save hundreds of kilograms of hydrogen from being vented per launch, so it might be worth pursuing at some point. [Helium is nearly as good a lifting gas as hydrogen, but as a noble gas it cannot be compressed electrochemically.]

In the beginning, there will be only a few airships on Venus. A ship entering from space can end up far away from the drifter airships at colony altitudes. Incoming vehicles must be equipped with “ballutes” or other means to turn themselves into airships after entry, and be capable of flying some distance to meet a colony-altitude ship for refueling. [More about navigation later.]

Trimming and stability

Since gravity on Venus is almost the same as on Earth, ships should be designed so that their decks stay level with little or no power. Marine ships floating on a surface can have a metacenter, but passively floating airships and submarines are stable only when their center of buoyancy stays directly above their center of gravity. Distance between the two centers provides torque for righting the ship, if the frame of the ship is rigid enough for leverage.

Just about all cargo needed on a self-supporting colony ship is going to be denser than air, which means that the buoyant elements of the ship will take up most of the ship’s total volume. Since the center of buoyancy must be above the center of gravity, all passively floating airship designs for normal pressures are going to be big and puffy on top, with smaller, denser parts hanging down. [I call this the basic “lollipop” shape.]

Diagram of airship relative density

To have wider and more spacious decks, self-trimming or stabilizing designs should be investigated. Any shifts in the center of gravity will tilt the ship, unless the center of buoyancy is also shifted at the same time. Increasing distance between the two force centers decreases the angle of tilt, but does not completely eliminate it. Some new designs are needed for zero-airspeed stabilization, for example using some of the payload as a movable counterweight. Large gyro flywheels would add angular inertia [at the cost of mass and power]: they will only slow the tilting caused by shifts in the center of gravity, not eliminate it completely.

Shifting the center of buoyancy is possible to some extent. There is an existing technique for this purpose: many airships have fore and aft ballonets that can be filled asymmetrically. [Submarines use an analogous system of “trim tanks” placed at extremes of the ship.]

If the ship is moving in air, either by its own thrust or pulled with cables, a third center is added to the equation, the aerodynamic center [more generally, the “center of pressure”, applying also to hydrodynamics in submarines]. Things get more complicated with aerodynamics [for example, the choice of where to place thrusters or towing cable attachments in relation to the force centers], but with predictable airspeed it becomes possible to use relatively small control surfaces, like trim tabs or ailerons, to control attitude and achieve trimming.

Active stabilization at rest is of course possible, for example using compensating thrusters to force the ship level even when the center of gravity is not aligned, but, like propulsion thrusters, these consume power continuously while active. For a large “station” style drifter ship, with tons of cargo on board, stability should always be sustainable, even when not moving, or during a power outage.

Structural analysis

The “lollipop” shape naturally divides the ship into two main parts: the balloon part at the top, which is lighter than (yellow) air, and the dense “gondola” part where people and cargo are situated. The two parts pull the ship in opposite directions with forces equal to the total weight of the ship, so their connecting seam is structurally important; it is the foundation of the ship, its “tensile-load-bearing” wall.

The balloon does not need to carry the gondola just by the lower rim of the envelope. Many existing non-rigid and semi-rigid airships use internal suspension cabling, attached to the inside ceiling of the balloon with e.g. catenary curtains and leading down to the roof of the gondola. In addition to distributing the weight of the gondola more evenly to the balloon envelope, the internal cabling allows a non-rigid balloon to hold a more vertically flattened shape.

In a hanging structure that never lands, tensile strength is more important than compressive strength, even in the rigid parts of the ship. The rigidity of the gondola frame is also a trade-off: on the one hand, it helps keep the decks stable and level; on the other hand, if the frame is too rigid, mechanical vibrations propagate throughout the ship, from machinery or just from passengers walking about the decks. A combination of rigid and damping elements is probably needed, designed from lightweight and durable materials to keep the total weight down.

Since the structure is not intended to land or stand on its own, and is hanging down, many of the conventions of construction are turned upside down. For example, structural elements must be strongest at the top of the multi-level gondola tower, but less so at the bottom, where they carry less weight.

Bioreactors: not just for food

At its simplest, a photobioreactor is just a transparent plastic bag half filled with water, seeded with some live cyanobacteria, minerals, and trace elements. Yellow air is bubbled slowly from the bottom through the greenish water, where daylight turns it into blue air, collected at the top. The water turns greener and thicker through the day as the bacteria multiply. At the end of the day, the containers can be drained and the excess green mass concentrated for further processing. The plastic bags can then be refilled with water and minerals, in preparation for the next 50-hour day.

The resulting green biomass is an important source of a variety of hydrocarbons, and can be further processed by e.g. fermenting. Various methods of dehydrogenation can be used to produce unsaturated hydrocarbons for making polymers. Even on Earth, biomass grown in bioreactors could become a worthwhile “green” replacement for crude oil.

bioreactor

The green biomass can also be eaten (it’s called Spirulina), if it is grown from acceptable ingredients (not too much sulfite or deuterium) and handled properly. It is not a complete food source, so it should be complemented with other forms of bacterial farming, such as yeast for vitamin B12. [Why bacterial farming? Human senses have evolved to see only the macroscale of biology: plants and animals. But a completely artificial ecosystem needs to be built “from the ground up”, starting at the microbiological level, where the real work of biology happens, before advancing on to some carefully chosen angiosperms and invertebrates.]

In an earlier post I implied that “trees are made from just sunlight and CO2, both more abundant on Venus than on Earth”, but that is incorrect. In actuality, for each CO2 molecule that photosynthesis breaks down, it must also break down an H2O molecule. Water and hydrogen are rare on Venus, and will be the main bottleneck for cultivating biomass.
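
The water bottleneck can be made concrete with simple stoichiometry, taking glucose as a stand-in for biomass (real biomass is a more varied mix, so this is only a rough figure):

```python
# One H2O is consumed per CO2 fixed: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.

M_H2O, M_GLUCOSE = 18.02, 180.16  # g/mol

def water_per_kg_biomass():
    """kg of water consumed per kg of glucose-like biomass produced."""
    return 6 * M_H2O / M_GLUCOSE

print(f"~{water_per_kg_biomass():.2f} kg H2O per kg biomass")  # ~0.60
```

So every tonne of biomass grown locks up hundreds of kilograms of scarce hydrogen-bearing water, which is why cloud harvesting and recycling matter so much.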

Chemistry: the power of Hydrogen

Hydrogen is the most common element in the visible Universe, and out of all the atoms in the human body, hydrogen atoms are the most common. Hydrogen is so ever-present and chemically active that it is hard to imagine chemistry without it. It is even common practice to leave out hydrogen atoms when drawing chemical structures; there are so many to draw.

In the cloud layer, at colony altitudes for drifters, most of the hydrogen is bound in the clouds themselves, as aerosol droplets of concentrated acid. According to current understanding, the droplets making up the clouds over Venus are not formed by a simple phase transition, but also by a chemical process fueled by sunlight, called photolysis. In other words, part of the sunlight falling on Venus gets stored as chemical energy in the atmosphere. Could the acidity simply be discharged as electrical power, like draining a fully charged car battery? [Probably not, at least without grounding the electrical potential somehow.]

The problem with this chemical energy in the form of low pH is handling it safely. All ship materials that may face raw yellow air must be able to withstand the corrosive cloud stuff without degrading too fast. [Structural integrity is not the only concern; other important material properties could also deteriorate: optical, adhesive, lubricant etc.] Hydrogen and other elements collected and chemically separated from the cloud stuff must also be stored safely, away from raw yellow air.

Cloud harvesting will probably occur in two steps: droplets are collected together into a liquid [drops may even spontaneously condense at the outer surface of the ship, like water condensation on Earth], which can then be electrolytically separated in an airless chamber to collect the hydrogen. This may be possible to do efficiently using nanomembranes similar to fuel cells, but the technology needs to be tailored to Venus. Since sulfuric acid reactions are mostly exothermic, excess heat might become an issue if the released energy cannot be stored or utilized. [Ionic separation may even make it possible to enrich some D from H at the same time.]
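As a rough sanity check on the power budget, the energy needed to electrolytically split harvested water can be estimated from the standard enthalpy of water splitting, about 286 kJ per mol of H2. The 70% cell efficiency below is an illustrative assumption:

```python
# Rough energy cost of electrolysing harvested water into hydrogen.
# 286 kJ/mol is the standard enthalpy of splitting H2O (higher heating
# value); the 70% cell efficiency is an illustrative assumption.

DELTA_H_KJ_PER_MOL = 286.0
MOLAR_MASS_H2_G = 2.016

def kwh_per_kg_h2(efficiency=0.7):
    mols = 1000.0 / MOLAR_MASS_H2_G              # mols of H2 in one kg
    kilojoules = mols * DELTA_H_KJ_PER_MOL / efficiency
    return kilojoules / 3600.0                   # kJ -> kWh

print(f"~{kwh_per_kg_h2():.0f} kWh of electricity per kg of H2")  # ~56 kWh
```

Tens of kilowatt-hours per kilogram of hydrogen is a serious draw on a solar-powered ship, which is why recovering the exothermic heat of the acid reactions matters.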

Almost-normal gravity can be exploited in industrial separation processes. For example, fractionating columns should work almost as well as on Earth, if they can be kept upright. It is even likely that some separation processes occur naturally in the atmosphere of Venus: for example, the ratio of D to H seems to vary somewhat with altitude. It may even become possible to use knowledge of weather patterns to direct harvesting to places where enrichment of D is easier, or to keep too much D from contaminating the biological ecosystem (including humans).

Both the chemical and biological ecosystems on board should aim to become fully closed and recycling, but some waste might still be produced in the long run. One situation where waste may be beneficial is using chaff to study the weather: thousands of ping-pong-ball-sized objects could be released into the atmosphere at the same time, and their movement in the winds followed via radar from a distance. Chaff material could be, for example, items rejected by QA that do not contain too many of the rarer yellow air elements.

Local manufacture

As far as we know, yellow air is not made of very diverse elements. Only O, C and N can be considered abundant. H, S, Cl, F and some others have been detected in trace amounts, but any other chemical element needed must be imported to the airships, either dropped from orbit or lifted from below. The chemical factories on drifter airships should specialize in producing materials that are made up of only yellow air elements.

Fortunately this includes many forms of polymers and elastomers, carbon fiber precursors, synthetic resins and hardeners. Even photoactive, light-emitting, and piezoelectric compounds are possible. Most polymers are insulators, but some can be made conductive, both thermally and electrically. Their conductivity is still far below that of Cu or Al, and they are certainly more difficult to form into electrical wiring.

Many interesting 2-D lattice materials can in theory also be formed out of yellow air elements, but the processes to manufacture and apply graphene-like materials are not mature yet. Carbon nanotube wires are in theory better conductors than Cu, which would make it possible to create very lightweight inductors and windings for electromagnets, if they could be manufactured at scale. [Special ferromagnetic metals for magnetic cores would still be needed to build efficient electromagnetic motors or turbines.] Using ordinary carbon fiber for electrical wiring and electromagnetic windings is not an optimal solution; it may work, but poor conductivity wastes part of the electricity as heat, and there is already too much heat at colony altitudes on Venus. [Perhaps more suitable for Mars colonies?]
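The heat penalty of poor conductors is easy to estimate from P = I²R, with R = ρL/A. The copper resistivity below is the standard handbook value; the carbon fiber figure is a rough mid-range assumption, since real fibers vary widely between grades:

```python
# Resistive loss in a wire: P = I^2 * rho * L / A.
# Copper resistivity is the standard handbook value; the carbon fiber
# figure is a rough mid-range assumption (real fibers vary widely).

RHO_OHM_M = {"copper": 1.7e-8, "carbon fiber": 1.0e-5}

def heat_watts(material, current_a=10.0, length_m=10.0, area_mm2=1.0):
    resistance = RHO_OHM_M[material] * length_m / (area_mm2 * 1e-6)
    return current_a ** 2 * resistance

for material in RHO_OHM_M:
    print(f"{material}: {heat_watts(material):.0f} W lost as heat")
# copper: 17 W, carbon fiber: 10000 W for the same wire and current
```

Hundreds of times more waste heat for the same wire is exactly the wrong trade on a planet where cooling is already the hard problem.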

In a self-sufficient colony there should exist the capability to create replacement parts for any of the structural parts of the ship itself, if not on every ship, then at least distributed among a fleet of ships. Biomass produced by the bioreactors can be used as a raw ingredient for many kinds of materials. Especially useful would be strong fiber filaments that could be robotically woven into flexible fabrics for sails, parachutes and balloon envelopes, or combined with resins and hardeners to form rigid composite structures, pressure tanks, fractionating columns, or any rigid parts for the frame.

It is unlikely that sophisticated nanoscale items such as high-end computer chips or nanomembranes can be produced locally, so spares need to be imported and kept in store for emergencies. Essential sensor equipment, such as pressure gauges, barometers and radar antennae, may be possible to build locally, but the reliability of such “home-made” instruments must be well tested before relying solely on them.
For many reasons it is good to separate the manufacturing areas from the main blue areas, and let the solvents and hardeners evaporate fully before taking locally created polymers into use. [There is not much benefit in having a free shield from cosmic rays, if you end up getting cancer anyway due to chemical exposure.]

Power sources and storage

It should be possible to use solar cells even in the middle cloud layer: although less light is available than at launch altitude or in orbit, daylight is so diffuse that panels would work oriented in any direction, even downwards. The drifter day cycle means that collected solar power must either be stored for use during the 50-hour night, or an alternative power source must be found that works without daylight.
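A quick sizing sketch shows why night storage dominates the power design. The 10 kW average load and the 200 Wh/kg specific energy (roughly lithium-ion class) are illustrative assumptions:

```python
# Sizing storage for the ~50-hour drifter night. The 10 kW average load
# and 200 Wh/kg specific energy (roughly lithium-ion class) are
# illustrative assumptions.

def battery_mass_kg(load_kw, night_hours, wh_per_kg):
    needed_wh = load_kw * 1000 * night_hours
    return needed_wh / wh_per_kg

print(f"{battery_mass_kg(10, 50, 200):.0f} kg of batteries")  # 2500 kg
```

Several tonnes of batteries for a modest habitat load: every kilogram of which must also be kept buoyant, which is why alternative stores and night-capable sources are worth considering.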

Electrical batteries will definitely be used; they are convenient, well-known technology that works well with electronics, radio and lights. But there are other means of storing power than batteries. One alternative could be compressed air (of suitable color), something that may be necessary anyway for storing reserve lifting gases. Pressure tanks may also be easier to manufacture locally than efficient electrical batteries, and can be made without rare metals.
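For comparison, the ideal isothermal energy recoverable from a compressed-air tank is W = P1·V1·ln(P1/P0). The tank volume and pressures below are illustrative:

```python
# Ideal isothermal energy recoverable from a compressed-air tank:
# W = P1 * V1 * ln(P1 / P0). Tank volume and pressures are illustrative.

import math

def stored_kwh(tank_m3, tank_pa, ambient_pa):
    joules = tank_pa * tank_m3 * math.log(tank_pa / ambient_pa)
    return joules / 3.6e6          # J -> kWh

print(f"{stored_kwh(1.0, 10e5, 1e5):.2f} kWh per m^3 at 10 bar")  # ~0.64 kWh
```

Well under one kWh per cubic metre at 10 bar: far less energy-dense than batteries, so compressed air makes more sense as a dual-purpose store (lifting gas reserve plus power and cooling) than as the primary battery replacement.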

Stored compressed air does not always need to be turned into electricity: compressed air can for example be vented via a Coandă thruster to propel or turn the ship. Pneumatic motors are possible to make without metals or magnets or electric hazards, and their operation is based on air-tight seals and gas pressure; familiar concepts when living inside an airship. But some way to create compressed air is needed to run pneumatic motors. Outside of manually powered pumps [which should be considered as a backup system in case electrical power fails] or phase-change engines, this means electrical pumps running on solar power.

An added bonus of compressed air as a power source is that venting it actually removes heat. A pressure tank surrounded by a heat exchanger and a heat pump could be used to cool blue areas directly, even if the tanks themselves are kept outside in the yellow areas for safety.

Other than solar panels and cloud harvesting, energy collection from the environment may require long-winded equipment that exploits the natural differentials between altitudes. A kite sail could be floated a few kilometers above the ship, or a turbine dragged a few kilometers below it, to collect power from the difference in wind speed. A more complex “cable” might be able to use “aerothermal energy”, in the same way that geothermal energy is pumped from below ground on Earth. Any system with long cables or pipes is also vulnerable to the buildup of static electrical charges. [If those can be safely utilized, why not just harvest lightning directly?]
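The available power scales with the cube of the relative wind speed, P = ½ρACpv³. All the numbers below (air density at colony altitude, rotor area, power coefficient, and the wind differential between altitudes) are illustrative assumptions:

```python
# Power from a turbine dragged through the wind-speed differential:
# P = 1/2 * rho * A * Cp * v^3. Air density, rotor area, power
# coefficient and relative wind are all illustrative assumptions.

def turbine_kw(rho_kg_m3=1.0, area_m2=10.0, cp=0.4, v_rel_m_s=15.0):
    return 0.5 * rho_kg_m3 * area_m2 * cp * v_rel_m_s ** 3 / 1000.0

print(f"{turbine_kw():.2f} kW")  # 6.75 kW
```

The cubic dependence means the scheme lives or dies on how large a wind differential the cable can actually reach: halve the relative wind and the power drops by a factor of eight.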

It is unlikely that the pressure differential between altitudes can be siphoned, even with a long capillary tube. But if pressure tanks are easy to manufacture, it might be possible to let nature fill them one by one: drop them down with a mechanism that closes their valve when a predetermined ambient temperature is reached, and inflates an accordion bellows balloon that slowly lifts the tank back up. There are disadvantages to this scheme that may hurt overall efficiency, but the method can also be justified with collecting air samples from different altitudes for scientific purposes.

Navigation and communication

Visibility in the cloud layers is probably not good enough to navigate accurately by sight. Even if a mythical Viking sunstone could show the Sun behind the clouds, placing the horizon would still be guesswork. Flying at colony altitudes will depend on instruments even during the day.

This is not that dangerous for a fleet of drifters within a few kilometers of each other, all passively sailing along the same winds. Visibility should extend that far, and light beacons should be required on all ships, even during the day [of course radio beacons will also be required on all ships, with ship identification]. Flyer-type ships, however, are much faster, and at superrotation speeds can travel to the edge of visibility in seconds. A supersonic rocket is effectively blind in the cloud layers, and must rely on other wavelengths.

Knowing exactly where you are and what direction you are facing is also important if you need to send data to an object in orbit, or use any kind of tight-beam communication inside the cloud layer. On Earth, geopositioning systems work by broadcasting a simple time signal from multiple satellites. The same scheme is possible on Venus, but needs to be set up in advance. The intelligence is at the receiving end, where the periodic time signals from the satellites are analyzed to arrive at an estimated coordinate. Translation from satellite time signals to Venus surface coordinates will require calibration with another positioning method, preferably triangulated from multiple ships.
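The receiving-end intelligence boils down to solving distance equations. Here is a minimal 2-D trilateration sketch: given distances to three beacons of known position (in practice derived from the time-of-flight of the time signals), subtracting the circle equations pairwise leaves a linear system for the position. The beacon coordinates are made up for illustration:

```python
# Minimal 2-D trilateration: recover a position from distances to three
# beacons of known position. Beacon coordinates are made up; in practice
# the distances would come from time-of-flight of the time signals.

import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise removes the quadratic
    # terms, leaving a 2x2 linear system for (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

beacons = [(0, 0), (10, 0), (0, 10)]
true_pos = (3, 4)
dists = [math.dist(true_pos, b) for b in beacons]
x, y = trilaterate(*beacons, *dists)
print(f"estimated position: ({x:.1f}, {y:.1f})")  # (3.0, 4.0)
```

A real receiver solves the same kind of system in 3-D plus an unknown clock offset, which is why four satellites, not three, are the practical minimum.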

A viable positioning method that works without satellites, surface beacons or other ships is radar surface feature recognition. A machine learning system trained on maps from previous missions should be able to correlate radar data with surface coordinates to good accuracy. With a high-resolution radar, it may become the standard against which other positioning systems are calibrated.

A fleet of ships drifting within line of sight distance to each other is a fairly safe place to live in terms of navigation. [Not all of them need to be manned ships.] Constantly broadcasting your ship identity and positioning coordinates to surrounding ships makes it easier to model not just the position of all ships, but also the pitch, roll, and yaw of your own ship in relation to multiple lines of sight.

Flying in air makes sound-based communication between ships possible. This is mainly a curiosity, but does bring a nice human element to life on Venus. A passive drifter is itself fairly silent, compared to flyers and multicopter drones. Silent ships changing altitude are a potential hazard to nearby ships, and could be accompanied by alarm beeps, like a truck reversing on Earth. And of course there is the possibility of noisy neighbor airships, with their uninsulated envelopes vibrating to the music playing inside. [There may even exist natural noises on Venus. Even though lightning has not yet been positively confirmed on Venus, there have been indications of thunder-like noises propagating in the atmosphere. Maybe someone alive today will become the first human to hear thunder on another planet?]

Venus will have its own data communication network, of course. Venus ships will produce a lot of data themselves, which is best stored in a replicated, distributed way among the fleet of ships. The distributed cloud [yes, this will be the first cloud system with a literal cloud layer] could also host data caching or mirroring from other planets, with priority-coordinated access to the high-latency interplanetary data links. To enable efficient data distribution, the antenna systems on board each ship must be capable of detecting and tracking the beacon signals of nearby ships, and directing their higher frequencies at each other for maximum bandwidth [somewhat like 3D beamforming (3DBF) in 5G]. Most of the equipment for the computer network must come from Earth for the foreseeable future; offline bulk data storage media [optical or chemical rather than magnetic, due to the scarcity of magnetic raw materials] is a possible first candidate for local manufacture.

Some assembly required

Dropping into the atmosphere from space limits the possible size and shape of individual airships sent to Venus. Much larger structures become possible if assembly can take place at colony altitude. And even if a colony consists of multiple smaller ships floating close to each other, they will need to transfer materials and people between ships from time to time. A modular design, standardized across the fleet [like ISO containers] is highly desirable.

Construction or assembly that hangs downwards is the complete opposite of most construction done on Earth; nests built by birds and flying insects are among the few natural examples of hanging-in-air construction. Is it even possible to combine modularity, lightweight construction, and airtight seals between different colored airs, while hanging down from balloons?

This sketch concept uses a rectangular rigid module constructed of straight lengths of rods or pipes, arranged in an interlaced pattern, like wicker or a bird’s nest, that distributes mechanical forces in all directions. The design is rotationally symmetric, which means that two modules can interlock with any of their six faces towards each other, easing assembly and design.

In the interior of each cubic module is an open space, roughly spherical [unofficially called “egg space”, continuing the bird’s nest analogy], where different payloads can be attached and carried. The exposed rods of the frame provide both support for the inner payload and anchoring for hauling the module from the outside, or for attaching various equipment. The open frame can also be reinforced where needed; in some modules of the assembly, the payload can even be replaced completely with structural elements.

Blue quarters for habitation can be built inside module frames, and be connected with flexible or inflatable corridors. Rooms can also be erected using only inflatable elements attached to the module frame. Slightly overpressured wall elements could be useful in keeping blue and yellow air separate, or in detecting leaks. For more permanent construction, instead of gas the wall elements could be injected with an aerosolized resin that hardens into a foam. This gives a less “bouncy castle” feel, adds insulation from outside heat, and avoids having to adjust inside-wall pressure when the altitude changes.

What next?

So far the colonization of Venus has been a thought experiment in what current technology might allow, given what we know about conditions on Venus. But a lot needs to happen before we can send the first humans to live there.

Local weather is crucial to flyers and drifters in the cloud layer. We have studied what we can from looking at the cloud patterns, but it would be prudent to send more unmanned missions to study aspects of weather (including electrical aspects, thunder and/or lightning, and radio weather) first-hand. Detailed chemical composition of the more hidden cloud layers could be studied at the same time, not just for their interaction with manmade ships, but also to ensure that there are no naturally occurring complex macromolecules or processes that human colonization might contaminate.

Even if volunteers might be available, it would not be ethical to send humans to Venus if there is no feasible way for them to return. There have been designs for air-based launches from Earth, but so far all missions with humans on board have been launched from the ground. Launching from a high-altitude balloon would save rocket fuel on Earth too, so it makes sense to mature the technology here first. [There are some entrepreneurs working in this area, for example zero2infinity with bloostar.]

Despite the obvious differences, many of the challenges of Venus airborne colonization are the same as those of Moon or Mars ground-based colonization. All are behind gravity wells, so equipment shipped over must be both lightweight and built to last. All will need reliable systems for blue air management, as well as medical diagnosis and treatment. Photovoltaics and battery technology are needed for all destinations, as are advanced automation and robotics. Developing these space technologies will also help Venus colonization.

Different airship models can be tested and developed in Earth’s atmosphere first, before modifying them for Venus. Unfortunately there doesn’t seem to be much interest in long-duration unaided flight on Earth to drive the needed technical development on its own. Currently the record duration for untethered airship flight is 20 days, from 1999. For successful Venus colonization, flight durations should be counted in months and years, not days. [The 1999 record was not even the primary goal of the mission; it was to circumnavigate the globe without landing.]

And even if it suddenly became fashionable to make floating cities on Earth, they would be much easier to assemble on the ground than in the air. Even if it is technically possible to build floating cities on Earth, there is no real economic incentive to play “floor is lava” during their construction. But such games need to be played here at least part of the time, to gain the practical knowledge and skills needed for sustainable Venus airborne colonization.

Terminology

airborne: English word meaning “carried by air”
ballonet: adjustable non-lifting gas bag inside the outer envelope of some airships
ballute: fusion of “balloon” and “parachute”
catenary curtain: a load-distributing internal cable attachment in some airships
envelope: airship jargon for “gas bag”
kytoon: fusion of “kite” and “balloon”
LTA: contraction of “lighter-than-air”
metastable: loading a surface ship with its center of buoyancy below the center of gravity
pH: “power of Hydrogen”, a logarithmic measure of hydrogen ion concentration in a solution
rockoon: fusion of “rocket” and “balloon”
self-trimming: a mechanism that helps keep cargo evenly loaded on a ship
static port: external air sensor fitting on an aircraft
superrotation: the rotation of Venus’s atmosphere, faster than surface rotation
trimming: in this context, keeping a ship level

History

The earliest mention of using buoyant airships on Venus I have found is in The Exploration of The Solar System by Felix Godwin (New York, Plenum Press, 1960). [This charmingly detailed but outdated book is otherwise an excellent example of smart and imaginative extrapolation from insufficient data.]

“(21) The non-rigid airship is for some purposes the ideal form of transportation on Venus. Owing to the dense air, it can carry considerable loads. Furthermore, it is completely unrestricted by the terrain and can hover anywhere, either for observation or for discharging cargo.” [pg 86.]

Once data about the harsh surface conditions started to come in from the early missions, the idea of sending buoyant vehicles into the atmosphere of Venus gained more traction. Many countries had plans for putting scientific aerostats on Venus in the 1960s. For example in 1967 Martin Marietta Corporation made a feasibility study for NASA of a Buoyant Venus Station (BVS), considering payload masses of 90 kg, 907 kg, and 2268 kg.

Two aerostats (21 kg each) were eventually launched into the middle cloud layer in 1985, as part of VEGA. The multinational mission was a success, and radio telemetry from the helium-filled balloons was tracked for 46 hours by 20 radio telescopes around the Earth. French scientist J. E. Blamont is credited with the original proposal.

Manned missions on dirigible airships were also discussed. In issue 9/1969 of Tekhnika Molodezhi (“Technique – Youth”), pg 14-16, V. Ivanov writes

“In fact, above the inhospitable surface of Venus, it is very convenient to drift in a dense atmosphere. In addition to devices such as bathyscaphe, it is advisable to launch balloon-probes or even airships to our heavenly neighbor. For example, a small balloon probe, drifting at a height of fifty kilometers, is capable of transmitting data on its way, about the downstream terrain for many days in a row. Perhaps relatively quickly people will create in the upper layers of the atmosphere of Venus a drifting laboratory that will prove to be more effective than a manned artificial satellite of the planet.” [translated from Russian by Google Translate]

The idea of dredging the surface of Venus from a buoyant ship with a long cable was also floated. In Aviatsiya i Kosmonavtika (“Aviation and Astronautics”) 10/1973, pg 34-35, G. Moskalenko writes

“The aerostatic type device can be equipped with a cable hanging downwards with research equipment suspended for it for vertical sounding of the atmosphere, as well as mechanisms for taking ground from the surface. The length of the rope is not difficult to increase due to the attachment of intermediate lifting balls, which compensate for the load on the rope. It is interesting to note that by picking up the appropriate lifting balls, the cable can easily be lifted above the bearing balloon.” [translated from Russian by Google Translate]

The futuristic idea of living permanently on Venus in large floating habitats also emerged early on. In issue 9/1971 of Tekhnika Molodezhi, pg 55, S. Zhitomirsky writes:

“[…]the composition of the Venusian atmosphere suggests a more tempting solution – the station can be inside the balloon. Indeed, carbon dioxide is one and a half times heavier than air, and a light shell containing air will float in a carbon dioxide atmosphere. If the inhabitants of Venus prefer not a nitrogen-oxygen but a helium-oxygen mixture for breathing, the lifting force of their “air” will sharply increase. […] To the edges of the platform is attached a huge spherical shell, which limits the airspace of the island. It is transparent, and through it you can see the whitish sky of Venus, eternally covered with multilayered luminous clouds. The shell is made of several layers of synthetic film. Between them, gas formulations containing indicator substances are circulating.” [translated from Russian by Google Translate]

[As the zonal wind speeds were apparently unknown at the time, Zhitomirsky assumed that the flying islands could move at about 13 km/h to stay constantly in daylight. The airspeed actually required to do that outside the polar regions is 20-30 times higher, infeasible for a ship that big.]

Links

Venus colonization has its fans [“Friends of Fria”, as Peter Kokh called them], but finding relevant discussion about the topic can be frustrating. [For example, the domain name venussociety.org is reserved, but has no content at this time.] I can recommend two links which both have pointers to deeper sources:

This 2011 article by Robert Walker presents a friendly introduction to the topic, and is also a source of links to further discussions on various internet forums: “Will We Build Colonies That Float Over Venus Like Buckminster Fuller’s Cloud Nine?”

Venus Labs has published a highly detailed “Handbook For the Development Of Venus”, Rethinking Our Sister Planet, written by Karen R. Pease. The book is a seriously detailed imagining of how a manned mission might be accomplished with existing technology. It has lots of links and sources of information. [There are a lot of original ideas in the book as well, but I can’t say that I completely agree with all the proposals. One thing that strikes me particularly is the insistence on housing people high inside the balloon envelope, even doing bungee jumps while hanging from the ceiling. To me it sounds a bit like the wild stunts of wing walkers playing tennis on the wings of a biplane in the 1920s: it is perhaps possible, but very risky and uncomfortable, and ultimately has nothing to do with the primary engineering purpose of wings or balloons.]

On the Nature of Asymmetry

So I entered the FQXi essay contest this year. You can read my essay “On the Nature of Asymmetry” on the FQXi website, in its full PDF glory.

I only found out about the website, and the contest, about a month ago, so my entry feels a bit rushed and unfinished. But I think having an external deadline was a good motivator nonetheless. I can still develop the ideas further some other time.

If you like, you can rate the contest entries at the FQXi website, by registering an email address. The rating period ends in about a month, after which the best rated entries advance to an anonymous expert panel.

The Simulation Narrative

Most of the millions of people lining up to see the latest blockbuster film know that the mind-boggling effects they are about to see on the big screen are made “with computers”. Big movies can cost hundreds of millions to make, and typically less than half of the budget is spent on marketing the premiere, or paying the actors upfront. Plenty of money left over to buy a big computer and press the ‘make effects’ button, right? Except that these movies close with 5-10 minutes of rolling credits, and about half of them are names of people working in visual effects, not computers. (Seems like a tentpole movie crew these days has more TDs (Technical Directors) than any other kind of directors combined.) [If you think making cool computer effects sounds easy, just download the open source tool blender, and create whatever your mind can imagine …]

Computer simulations are no longer just for engineering and science; they can be used as extensions of our imagination. A simplified set of rules and initial conditions is input, then a few quick low-resolution test renders are made. You twiddle the knobs (how many particles, viscosity, damping, scattering), hunt for the right lighting and camera angles, and iterate until you are happy or (more likely) forget what you were trying to accomplish.

Selene Endymion MAN Napoli Inv9240

Before computers, before visual effects and film, people had to use their own imagination to make entertaining simulations. The most lightweight technology for that was storytelling: guided narrative with characters and settings. The rules of the simulation were the rules of plausibility within the world of the story. The storyteller created the events, but the listeners enacted them in their imaginations. The storyteller received immediate feedback from the audience if the story became too implausible.

But once the audience “buys in” to the characters and their narratives, they become emotionally invested in them as if they were real people. Fictional characters, today protected by copyrights and corporate trademarks, can still suffer unexpected fates, and newcomers to a fictional world often demand to be protected from “spoilers” that would make it difficult to simulate the events in their own imaginations. Real people do not know their future, and to ‘play’ the role convincingly and without foreshadowing, it is best to live with incomplete information.

If I start to read a book of fiction, written many decades ago, when is the simulation of the characters happening?  Does every reader simulate the main characters each time they read the book, or did the author execute the simulation, and the readers are only verifying that the story is plausible? Certainly I feel like I am imagining the phenomenal ‘qualia’ that the characters in the book are experiencing, but at the same time I know that I am just reading a story that was finished a long time ago. Am I lending parts of my consciousness to these paper zombies?

In a well-known book of genre-defining fantasy, after hundreds of pages of detailed world-building, two characters are beyond weary, in the darkest trenches of seemingly unending war, when one of them starts to wonder if they shall

“[…] ever be put into songs or tales. We’re in one, of course; but I mean: put into words, you know, told by the fireside, or read out of a great big book with red and black letters, years and years afterwards.”

It’s not a bad way to put it, but even for me at age 12, characters in a book discussing the possibility of being characters in a book was just too self-referential to be plausible, and pushed me ‘out’ of the story for a moment. (A bit like characters in a Tex Avery cartoon running so fast they run outside the film. We get the joke, but let’s keep the “fourth wall” where it is for now.)

Since the book was written long ago, and has not been edited since, it can be argued that none of its characters have free will. The reluctant hero makes friends, sacrifices comforts, has unexpected encounters and adventures, all while trying to get rid of the “MacGuffin” that has fallen into his hands. When at last he arrives at the portal of Chaos where the artifact was forged, does his determination to destroy it falter, or will something totally unpredictable happen? To have any enjoyment in the unfolding of the story, the readers must believe that the actions of the characters have significance, and play their roles in our minds as if they had free will.

There are also professional actors, people who take to the stage night after night, repeating familiar lines and reacting to the events of the screenplay as if they were happening for the first time:

“For Hecuba! What’s Hecuba to him, or he to Hecuba, That he should weep for her?”

A good performance can evoke both the immediacy and intimacy of a real emotional reaction, but the audience still needs to participate in the act of imagining the events as actual, to understand at some emotional level “what it is like” for the characters in the play to have their prescribed experiences.

What to me really sells a scene is the interplay of the actors, not so much how photorealistic the visual effects happen to be. A painted canopy plays the part of a majestic sky, or a sterile promontory becomes earth for the gravedigger, if all the actors act as though it were so.

As convincing as our simulations can be, the point of fiction is that we enter it knowing that it is fiction, that we can always put the book down, or step outside of the theater. Fiction is not realtime, and it always requires audiences to imagine some parts of it (for example, what happens between scenes?). We choose not to pay attention to the man behind the curtain, or analyze the plot too much, when we want to immerse ourselves for a moment.

[Having said that, I don’t mean to imply that it is impossible to become lost inside made-up stories, and confuse them with reality in a quixotic manner, but that is usually not the intention of the storyteller (though it could be useful to the intentions of a shrewd marketer, politician or cult leader).]

Time inside the simulation is independent from time in the real world. In addition to pausing the simulation, monolithic or pre-computed simulations can be executed in a different sequential order from the assumed order inside the simulated world. This is used to great effect in some books, which describe the same event multiple times in different chapters, but from the point of view of different characters. Each perspective usually gives the reader some extra information, something that no character in the simulation can have. Viewing from outside the simulation, the audience gets an almost god-like view of the situation, sometimes even enhanced with indexes and bookmarks so they can page back and review the events in a previous chapter (but not forward, since that would “spoil” the freshness of the simulated experience).

Pre-written narrative simulations, movies and plays, edit out the parts that are thought to be uninteresting. This is a careful balancing act, because editing out too much leaves the characters and their actions too distant, and harder to relate to. Leaving in too many unnecessary details, on the other hand, can appear gratuitous and put off many viewers and readers, who will surely find better things to occupy their time.

Computer simulations today almost always consist of time-steps. A numerical approximation of some evolution equation uses the results of the previous steps to compute the next step of the simulation. The smaller the time interval used, the closer the approximation is to the real solution [in the mathematical sense, for example a piecewise linear line approximating a smooth curve], and the longer it takes to compute. If the simulation is pre-computed, the audience need not view every individual step to make use of the simulation. [Note: In the Blender software, for example, the physics timestep is independent of the framerate of the animation, and changing either will affect the needed baking and/or rendering time.]
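The trade-off between step size and accuracy can be sketched with a minimal example (illustrative only: the equation dy/dt = y and the step sizes below are my own choices, not taken from any particular simulator):

```python
import math

def simulate(dt, t_end=1.0):
    """Integrate dy/dt = y from y(0) = 1 with explicit Euler time-steps.

    The exact solution is y(t) = e^t, so simulate(dt) should approach
    math.e as dt shrinks -- at the cost of more steps to compute.
    """
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        y += y * dt  # next state computed from the previous state
        t += dt
    return y

coarse_error = abs(simulate(0.1) - math.e)    # 10 steps
fine_error = abs(simulate(0.001) - math.e)    # 1000 steps
```

Shrinking the step shrinks the error for this first-order method, at a proportional cost in compute, which is exactly why pre-computing (and then skipping frames on playback) is attractive.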

When played at a sufficient number of frames per second, our mediocre senses are fooled into interpreting a sequence of still images as moving pictures [interestingly, while higher framerates increase the realism, and are technically possible today, moviegoers still prefer the “cinematic” 24 fps for big screen cinema, or even less for dream-like sequences]. But it would be naive to think that the timeline of the physical world also consists of individual static states, progressing in infinitesimally short step transitions across the whole Universe. Such ideas of motion were already being debated hundreds of years BC by the Eleatic school, most famously in the paradoxes of Zeno. [Note: Unfortunately, even logical and analytical philosophy usually implicitly assumes that there is always a “world at time t” snapshot, at any chosen time t, with defined entities and properties. But that is a topic for another post.]

Stories can be used to simulate, or theorize about, the minds of others. By vicariously identifying with characters, we can sometimes glimpse across the inter-subjective gap. Even a child can understand that a character in the story does not have all the information, that Red Riding Hood is asking questions because she doesn’t know who she is speaking with. But even voluntary participation in a simulative narrative can reveal hidden agendas in the audience, through transference: For example, did Shakespeare put baits in his Scottish play to “catch the conscience” of King James, or just harmless references to the new king’s known interests?

Some stories contain such powerful or virulent motivations for their characters that the audience starts to doubt their own volition as participants in the simulation. [Note: This could be related to hypnosis, which can also be induced using only words and signs, and which makes subjects doubt their own volition to some degree.] Being part of something larger, even if it is just a simulation, is a recognizable desire in the human psyche. Experience, of real life as well as of other kinds of stories, can also recontextualize previous narratives, and help reframe them from a different viewpoint. [An example of this could be a new parent realizing: “maybe my parents were just as clueless as I am now?”; a kind of subject-object relationship reversal, in the psychoanalytical sense.]

In this transitional state between the simulator and the simulated, we might also strive to theorize on the motivations of a possible future ‘superintelligence’. Why would it spend so much effort to compute realistic ‘ancestor simulations’, extrapolating scenarios from its vast collections of historical data, as in the simulation argument by Nick Bostrom? Perhaps the motivations are the same as when we try to understand our ancestors from the knowledge that we have: If you don’t understand history, you are doomed to repeat it, over and over. Just as intelligence does not imply wisdom, superintelligence certainly does not imply ‘superwisdom’.

Collection and storage of ever more detailed data today gives another perspective on simulation as a way of stepping outside of time. If we store as much information about the state of the world as we can, and build a simulated snapshot of it at a point in time, the transhumanist proposition is that with enough data (how much?) this would be indistinguishable from the real thing, at least for the simulated subjects inside the simulated snapshot. The idea is identical to [and identically depressing as] the afterlife scenarios of many religions and cults. No release for Sisyphus, or for holodeck Moriarty from his “Ship in a Bottle”.

Using statistics or probability to determine the ontological status of your consciousness is problematic for many reasons, among them the transitory nature of conscious experience. For the total tally of fictional consciousnesses, do you count the number of different characters in all the scripts, all the actors, or all the audience members? Does it matter whether all scripted characters are actually “played” to an audience with phenomenal consciousness? Does a simulated character need to have phenomenal consciousness all the time, or just during some key scenes, zoning out as a sleepwalking zombie the rest of the time? Since there can be no definite multiplicity for such an ill-defined entity as a consciousness separate from its substrate, counting probabilities from a statistical point of view is as meaningful as arguing about the number of angels that can dance on the point of a needle.

I don’t consider myself to be any technological “luddite”; on the contrary, I believe that technological progress has the potential to create many more breakthroughs in the future, or even an unprecedented ‘flowering’ of the kind that rarely happens in the evolution of life on Earth (for example, the appearance of flowering, fruit-bearing plants during the Early Cretaceous). I do dislike the word “singularity” in this context though, especially since it originally denotes a point where the current working model breaks down, and extrapolation beyond it fails. (For example, the financial “singularity” of 2008, when loan derivatives cast off the surly bonds of fundamentals, and left the global markets in free fall.) All flowers have their roots in the earth, and in the past, which they must grow from. No ‘sky-hook’, or ‘future-hook’, can seed the present.

The Pilot of Consciousness

We do not know in detail how human consciousness arose, and we only have direct evidence of our own consciousness. But it is common sense to assume that most normally behaving people are conscious to some degree, and that consciousness is a result of the biological processes of the body and its organs, even though we cannot see it directly, the way we appear to see our own consciousness.

Assumptions about the level of consciousness of other people affect modern society at a deep level. A conscious mind is thought to have free will, to a greater degree than simpler animals or machines do. In criminal law, a person can only be judged for the actions they take consciously, not for those taken while, for example, sleepwalking or hallucinating.

The old metaphor is that consciousness is to the body as a pilot is to his ship. The pilot of a ship needs information and feedback to do his job, but he does not have direct access to them. Instead, he gets his information secondhand from the lookouts, and from the various instruments: the log for measuring speed in knots, the compass, and so on. The pilot also does not row the oars himself, or stoke the engines; he just sends instructions below and assumes they will be carried out. The pilot does nothing directly, but all vital information must flow through him in a timely manner. Neither is the steersman’s role the same as the captain’s; piloting work means reacting to the currents and the winds as they happen, not long-term goal-setting or strategic planning.

Ship procession fresco, part 4, Akrotiri, Greece

[Note: The old Greek word for pilot is kubernetes, which is the etymological root of both the ‘cyber-’ and ‘govern-’ families of words.]

Piloting a ship is not always hectic; at times the ship can be safely moored at harbour, or the sailing can be so smooth that the pilot can take a nap. But when the ship is in strange seas, at greatest risk from outside forces, pilot-consciousness kicks in fully, alerting all lookouts and bringing the engines to full reserve power, ready to react to whatever happens. When the outside forces show their full might, the pilot is more worried about the ship surviving the next wave than about getting to the destination on time.

The state or level of consciousness is often associated with some feelings of anticipation, alertness, even worry or anxiety; such feelings can even prevent dialing down the level of consciousness to restful sleep, and thereby cause more stress the next day. Pain can only be felt when conscious, hence the cliché of pinching yourself to check if you are dreaming or not. Pathos [the root word for ‘-pathy’ words, like empathy or psychopathy], in all its meanings, is a strong catalyst to rouse consciousness. Only humans are thought to be capable of becoming truly conscious of their own mortality, the conscious mind thus becoming aware of the limits of its own existence.

When the pilot takes over and commandeers a vehicle, the flexibility of consciousness allows him to extend his notion of self to include the vessel. For example, an experienced driver can experience his car sliding on a slick patch of road as a tactile sensation, as if a part of himself were touching the road, and not the tires. In the same way, human consciousness naturally tends to identify itself as the whole individual. Sigmund Freud named the normal conscious part of the mind the ‘ego’, which is Latin for ‘I’. His key observation was that the mind is much more than the ego, and that true self-knowledge requires careful study, which he called psycho-analysis.

Introspection is an imperfect tool for studying one’s own mind, due to the many literal and metaphorical blind spots involved. The ego is very capable of fooling itself. This is why it is not considered safe to attempt psycho-analysis by yourself; you should have guidance from someone who has gone through the process. The same applies to some methods of controlling consciousness through meditation.

There are methods of self-discovery that are less dangerous, such as the various personality tests. To extend the metaphor, different pilots have their own favorite places on the ‘bridge’, their habitual ways of operating the ship, or specific feelings associated with its operations. Your ‘center’ may not be in the same place as someone else’s. For example, a procrastinator waits until the last possible moment to make a decision; it could be that only the imminence and finality of a deadline makes their choices feel ‘right’ or ‘real’ enough to commit to. Another example is risk-seeking/aversion: some people only feel alive when in some amount of danger; others do their utmost to pass risks and responsibilities to other people.

Most pilots become habituated to a specific level of stress when operating the self-ship, and cannot function well without it; the types and levels of preferred stress can vary greatly between individuals. Too much stress, however, can break the pilot and damage the ship. This threshold is also variable between individuals. Hans Eysenck theorized that an individual’s sensitivity to being easily traumatized is correlated with introversion, or even that extraversion could be redefined in terms of tough-mindedness; but there are other models as well, such as psychological ‘resilience’, which supposedly can be trained as a ‘life skill’.

Habits are also something that can be consciously trained, and paying attention to our own habits is very healthy in the long run. Consciousness is tuned to a fairly limited range of timescales; changes that happen too fast or too slowly do not enter consciousness. Daily habits creep slowly, and without photographs it would be hard to believe how much we change over time. Almost all of the atoms and molecules in our bodies are swapped for new ones every few years, yet our sense of identity remains continuous.

Heraclitus says that “a man’s character is his destiny”, and to know thyself means knowing your weaknesses as well as your strengths. Multitasking is a typical weakness that the pilot often mistakes for a strength. Consciousness appears to be the stage where all experience terminates, but the real multitasking happens at the edges; the decision of which of the competing stimuli enter consciousness is never a completely conscious one. The same applies to outgoing commands, unfortunately. Completeness of control can be an illusion, a form of magical thinking.

Many philosophers have also been fascinated with the true nature of the biggest ‘blind spot’ of consciousness: consciousness itself. There have been various efforts to formalize the ‘contents’ of consciousness, or to model consciousness in terms of ‘properties’ that some entity may or may not ‘have’. There are inherent limitations with these approaches; they should be taken in the original context of phaneroscopy, without drawing any metaphysical conclusions from them.

Not many deny that life, and consciousness, is a process, and that the human viewpoint is one of moving inexorably forward through Time. The ‘contents’ of consciousness form an unstoppable stream, moving in relation to our self-identity. It seems to us that our mind is anchored to something unmoving and unchanging, with the world changing around it. Yet we can identify no specific ‘qualia’ for change or motion, no atomic perceptions of time passing. [There are some thresholds to when we begin recognizing a rhythm, though.]

The true nature of subjective experience may be a ‘hard problem’, but no harder than explaining the true nature of Time. The human condition is to flow from an unchangeable past, inexorably and continuously forward, towards an unknown future, and to only ever be able to act in the present. The pilot role is necessary exactly because the flow that powers all flows cannot be stopped, it can only be navigated.

A Likely Story

Is cosmology a science? Is scientific cosmology even possible, given that it is about events so unique and fundamental that no test in any laboratory can truly repeat them? Questions like these pop up often enough, and you can find many good answers to them through e.g. Quora, which I will not repeat here.

For the layman thinker, the difference between truth and lies is simple and clear, and it would be natural to expect the difference between science and non-science to be simple and clear as well. The human brain is a categorizing machine that wants to put everything in its proper place. Unfortunately, the demarcation between science and non-science is not so clear.

Tischbein - Oldenburg

Karl Popper modeled his philosophy of science on the remarkable history of general relativity. In 1916, Albert Einstein published his long-awaited theory, and made sensational predictions, reported in newspapers around the world, that would not be possible to verify until the next total eclipse of the Sun. It was almost like a step in classical aristeia, where the hero loudly and publicly claims what preposterous thing he will do, before going on to achieve exactly that. Popper’s ideas about falsification are based on this rare and dramatic triumph of armchair theory-making, not so much on everyday practical science work.

If we want a philosophy of science that really covers most of what gets published as science these days, what we really need is a philosophy of statistics and probability. Unfortunately, statistics does not have the same appeal as a good story, and more often gets blamed for being misleading than lauded as a necessary method towards more certain truths. There is a non-zero probability that some day popularizations of science could be as enthusiastic about P-values, null hypotheses and Bayesian inference as they are today about black holes, dark energy and exotic matter.

Under the broadest umbrella of scientific endeavors, there are roughly two kinds of approaches. One, like general relativity, looks for things that never change, universal rules that apply in all places and times. These include the ‘laws’ of physics, and the logical-mathematical framework necessary for expressing them (whether that should include the axioms of statistics and probability, if any, is the question).

The other approach is the application of such frameworks to make observations about how some particular system evolves. For example: how mountains form and erode, how birds migrate, how plagues are transmitted, what the future of a solar system or galaxy is, how the climate changes over time, what the relationships are between different phyla in the great tree of life, and so on. Many such fields study uniquely evolved things, such as a particular language or a form of life. In many cases it is not possible or practical to “repeat an experiment” starting from the initial state, which is why it is so important to record and share the raw data, so that it can be analyzed by others.

From the point of view of theoretical physicists, it is often considered serendipitous that the fundamental laws of physics are discoverable, and even understandable by humans. But it could also be that the laws we have discovered so far are just approximations that are “good enough” to be usable with the imperfect instruments available to us.

The “luck” of the theorist has been that so many physical systems are dominated by one kind of force, with the other forces weaker by many orders of magnitude. For example, the orbit of the Earth around the Sun is dominated by gravitational forces, while the electromagnetic interactions are insignificant. In another kind of system, for example the semiconducting circuits of a microprocessor, electromagnetism dominates and gravity is insignificant. The dominant physics model depends on the scale and granularity of the system under study (the physical world is not truly scale invariant).
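The sheer size of these gaps is easy to check. As a back-of-the-envelope sketch (the two-proton scenario is my own illustration, using standard published constants), the electrostatic repulsion between two protons exceeds their gravitational attraction by roughly 36 orders of magnitude, at any separation:

```python
# Compare the two inverse-square forces between a pair of protons.
# Because both laws fall off as 1/r^2, the ratio is independent of distance.
e   = 1.602e-19    # elementary charge, coulombs
k   = 8.988e9      # Coulomb constant, N m^2 / C^2
G   = 6.674e-11    # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)  # ~1e36: electromagnetism dominates
```

Swap in the masses of the Earth and the Sun, which are electrically neutral in bulk, and the dominance flips the other way; each regime gets by with a single-force model.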

As the experimental side of physics has developed, our measurements have become more precise. When we achieve more reliable decimals in physical measurements, we sometimes need to add new theories to account for things like the unexpected fine structure in spectral lines. The more precision we want from our theories, the more terms we need to add to our equations, making them less simple, further away from a pythagorean ideal.

The nature of measurement makes statistical methods applicable regardless of whether measurement errors originate from a fundamental randomness, or from a determinism we don’t understand yet. The most eager theorists, keen to unify the different forces, have proposed entire new dimensions, hidden in the decimal dust. But for such theories to be practically useful, they must make predictions that differ, at least statistically, from the assumed distribution of measurement errors.

Many theorists and philosophers abhor the uncertainty associated with probability and statistics. (Part of this is probably due to the personality of each individual, some innate unwillingness to accept uncertainty or risk.) To some extent this can be a good thing, as it drives them to search for patterns behind what at first seems random.

But even for philosophers, statistics could be more than just a convenient box labeled ‘miscellaneous’. Like in the Parmenides dialogue, even dirt can have ideal qualities.

Even though statistics is the study of variables and variability, its name comes from the same root as “static”. When statistics talks about what is changeable, it always makes implicit assumptions about what does not change, some ‘other’ that we compare the changes against.

It is often said that statistical correlation does not imply causation, but does cosmic causation even make sense where cosmic time does not exist? Can we really make any statistical assumptions about the distribution of matter and energy in the ‘initial’ state of all that exists, if that includes all of space and time?

One of the things that Einstein was trying to correct when working on general relativity was causality, which was considered broken in the 1905 version of relativity, since causes did not always precede their effects, depending on the movement of the observer. General relativity fixed this so that physical events always obey the timeline of any physical observer, but only by introducing the possibility of macroscopic event horizons, and strange geometries of observable spacetime. But the nature of event horizons prevents us from observing any event that could be the primal cause of all existence, since it would be outside of the timeline from our point of view. We can make estimates of the ‘age’ of the Universe, but this is a statistical concept; no physical observer experiences time by the clock that measures this age.

Before Einstein, cosmology did not exist as a science. At most, it was thought that the laws of physics would be enough to account for all the motion in the world, starting from some ‘first mover’ who once pushed everything in the cosmos to action. This kind of mechanistic view of the Universe as a process, entity or event, separate from but subservient to a universal time, is no longer compatible with modern physics. In the current models, continuity of time is broken not only at event horizons, but also at the Planck scales of time and distance. (Continuing the example in Powers of Two, Planck length would be reached in the sixth chessboard down, if gold were not atomic.)

Why is causality so important to us that we would rather turn the universe into Swiss cheese than part with it? The way we experience time, as a flow, and how we maintain identity in that flow, has a lot to do with it. Stories, using language to form sequences of words, or just remembered sequences of images, dreams and songs, are deeply embedded in the human psyche. Our very identities as individuals are stories, stories are what make us human, and plausible causes make plausible stories.

Knowledge, Fast and Slow

Ars longa, vita brevis

Due to the shortness of human life, it is impossible for one person to know everything. In modern science, there can be no “renaissance men” who have a deep understanding of all the current fields of scientific knowledge. Where it was possible for Henri Poincaré to master all the mathematics of his time, a hundred years later no one in their right mind would attempt a similar mastery, due to the sheer amount of published research.

A large portion of the hubris of the so-called renaissance men, like Leonardo da Vinci, can be traced to a single source: the books on architecture written by Vitruvius more than a thousand years earlier, rediscovered in 1414 and widely circulated by a new innovation, the printing press. In these books, dedicated to Emperor Augustus, Vitruvius describes what kind of education is needed to become an architect: nothing less than enkuklios paideia, universal knowledge of all the arts and crafts.

Of course an architect should understand how a building is going to be used, and how light and sound interact with different building materials. But some of the things that Vitruvius writes are probably meant as indirect flattery to his audience and employer, the first emperor. Augustus would likely have fancied himself “the architect” of the whole Roman Empire, in both the literal and the figurative sense.

Paideia was a core Hellenic tradition; it was how knowledge and skills were kept alive and passed on to future generations. General studies were attended until the age of about 12, after which it was normal to choose your future profession and start an apprenticeship in it. But it was also not uncommon for an aristocrat to send their offspring on an enkuklios paideia, a roving apprenticeship. They would spend months, maybe a year at a time, learning from the masters of one profession, then move to another place to learn something completely different for a time. A born ruler would not need any single profession as such, but some knowledge of all professions would help him rule (or alternatively, human nature being what it is, the burden of tolerating the privileged brats of the idle class must be shared by all (“it takes a village”)).

Chiron instructs young Achilles - Ancient Roman fresco

Over the centuries, enkuklios paideia transformed into the word encyclopedia, which today means a written collection of current knowledge in all disciplines. As human knowledge is being created and corrected at accelerating rates, printed versions are becoming outdated faster than they can be printed and read. Online encyclopedias, something only envisioned by people like Douglas Engelbart half a century ago, have now become a daily feature of life, and most written human knowledge is in principle available anywhere, anytime, as near as the nearest smartphone.

Does that mean that we are all now vitruvian architects, renaissance geniuses with working knowledge of all professions? Well, no: human life is still too short to read, let alone understand, all of wikipedia, or to keep up with its constant changes. And not everything can be learned by reading or even watching a video; some things can only be learned by doing.

For the purposes of this essay, I am stating that there are roughly two types of knowledge that a human can learn. The first one, let’s call it epistemic knowledge, consists of answers to “what” questions. This is the kind of knowledge that can be looked up or written down fast; for example, the names of people and places, numeric quantities, articles of law. Once discovered, like the end result of a sports match, such facts can be easily distributed all around the world. But if they are lost or forgotten, they are lost forever, like all the writings in languages we no longer understand.

The other type of knowledge I will call technical knowledge, consisting of answers to “how” questions. In a sense technical knowledge is any acquired skill that is learned through training, that eventually becomes second nature, something we know how to do without consciously thinking about it. Examples are the skills that all children must learn through trial and error, like walking or speaking. Even something as complex as driving a car can become so automatic that we do it as naturally as walking.

[Sidenote: the naming of the two types here as “epistemic” and “technical” is not arbitrary; they are based on the two ancient Greek words for knowledge, episteme and techne.]

The division into epistemic and technical knowledge is not a fundamental divide, and many contexts have both epistemic and technical aspects. Sometimes the two even depend on each other, as names depend on language, or writing depends on the alphabet.

Both kinds of knowledge are stored in the brain, and can be lost if the brain is damaged somehow. But whereas an amnesiac can be just told what their name and birthday is, learning to ride a bicycle again cannot be done by just reading a wikipedia article on the subject. The hardest part of recovering from a brain injury can be having to relearn skills that an adult takes for granted, like walking, eating or speaking.

In contrast to epistemic knowledge, technical knowledge can sometimes be reconstructed after being lost. Even though no documents readable to us have survived from the stone age, we can still rediscover what it may have been like to work with stone tools, through experimental archaeology.

Technical knowledge also exists in many wild animals. Younger members of the pack follow the older ones around, observe what they do and try to imitate them, in a kind of natural apprenticeship. Much has been said about the so-called mirror neurons that are thought to be behind this phenomenon, in both humans and animals.

New techniques are not just learned by repetitive training and imitation; entirely new techniques can be discovered in practice. Usually some competitive drive is present, as in sports. For example, the high jump sets its goal in the simplest of terms: jump over this bar without knocking it off. But it took years before someone tried to use something other than the “scissors” technique. Once the superiority of a new jumping technique became evident, everyone started learning it, and improving on it, thus raising the bar for everyone.

New techniques offer significant competitive advantages not only in sports, but also in the struggles between nations and corporations. Since we are so good at imitating and adapting, the strategic advantage of a new technique will eventually be lost if the adversary is able to observe how it is performed. The high jump takes place in front of all, competitors and judges alike, and everything the athlete does is potentially analyzed by the opposing side. (This does not rule out subterfuge, and the preparatory training can also be kept secret.)

Around the time of the industrial revolution, it became apparent that tools and machines can embody useful technical knowledge in a way that is intrinsically hidden from view. Secret techniques that observers cannot imitate even in their imaginations are, to them, indistinguishable from magic. To encourage inventors to disclose new techniques, while still gaining a temporary competitive advantage in the marketplace, the patent system was established. Since a patent would only be granted if the technique was disclosed, everyone would benefit, and no inventor need take their discoveries to the grave for fear of them being “stolen”. Today international patent agreements cover many countries, and corporations sometimes decide to share patent portfolios, but nations have also been known to classify some technologies as secret for strategic military purposes.

Even though technical knowledge is the slow type of knowledge, it is still much easier to learn an existing technique from someone than it was for that someone to invent, discover or develop it in the first place. This fact allows societies to progress, as the fruits of knowledge are shared, kept alive and even developed further. One area where this may not apply so well is the arena of pure thought, since it mostly happens hidden from view, inside the skull. This could be one reason why philosophy and mathematics have always been associated with steep learning curves. Socrates never believed that philosophy could be passed on by writing books; only dialogue and discussion could be truly instructive, since they make the progress of thought more explicit. This is also why rhetoric and debate are often considered prerequisites for studying philosophy (though Socrates had not much love for the rhetors of his time either).

From all the tools that we have developed, digital computers seem the most promising candidates for managing knowledge outside of a living brain. Words, numbers and other data can be encoded as digital information, stored and transported reliably from one medium to another, at faster rates than with any other tool available to us. Most of it can be classified as the first type of knowledge, the kind that can be looked up in a database management system. Are there also analogues of the second type of knowledge in computers?

In traditional computer programming, a program is written, tested and debugged by human programmers, using their technical skills and knowledge and all the tools available to them. These kinds of computer programs are not written just for the compiler; the source code needs to be understood by humans as well, so that they know that, and how, it works, and can fix it or develop it further if needed. The “blueprint” (i.e. the software parts) of a machine can be finalized even after the hardware has been built and delivered to the customer, but it is still essentially a blueprint designed by a human.

Nowadays it is also possible for some pieces of software to be trained into performing a task, such as recognizing patterns in big data. The development of such software involves a lot of testing, of the trial-and-error kind, but not algorithmic programming in the traditional sense. Some kind of adaptive system, for example an artificial neural network, is trained with a set of example input data, guided to imitate the choices that a human (or other entity with the knowledge) made on the same data. The resulting, fully trained state of the adaptive system is not understandable in the same way that a program written by a human is, but since it is all digital structures, it can be copied and distributed just as easily as human-written software.

This kind of machine learning has obvious similarities to the slow type of knowledge in animals. The principles are the same as teaching a dog to do a trick, except in machine learning we can just turn the learning mode off when we are done training. And of course, machines are not actively improving their skills, or making new discoveries as competing individuals. (Not yet, at least.)
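As a toy illustration of this train-then-freeze idea (a hypothetical sketch, not from any particular library): a single artificial neuron can be trained to imitate a "teacher" rule, after which its learned state is just digital data that can be copied anywhere.

```python
import random

random.seed(0)

def predict(w, x):
    # a single artificial neuron with weights w = [w0, w1, bias]
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

# the teacher rule the neuron should learn to imitate: x0 + x1 > 1
examples = [(random.random() * 2, random.random() * 2) for _ in range(200)]

w = [0.0, 0.0, 0.0]
for _ in range(20):  # training passes over the example data
    for x in examples:
        target = 1 if x[0] + x[1] > 1 else 0
        err = target - predict(w, x)  # perceptron learning rule
        w = [w[0] + 0.1 * err * x[0], w[1] + 0.1 * err * x[1], w[2] + 0.1 * err]

# "learning mode off": the trained state is plain data, trivially copied
trained_copy = list(w)
print(predict(trained_copy, (1.5, 1.5)), predict(trained_copy, (0.1, 0.1)))
```

The trained weights encode the rule without any human-readable program ever being written; copying `trained_copy` copies the skill.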

Life on Other Planets

Much of the public interest in space has always revolved around the idea of life on other planets. We know that there are other planets, even around other stars, some of them even “Earth-like”, in some optimistic definitions of the word. But the current lack of detailed information about such places makes fertile ground for imaginative speculation. I will try to refrain from speculation here, and stick to the factual.

We have many inborn instincts that deceive our minds about the matter. For one, we think that recognizing life and intelligence is easy. Just bring it before us and we will categorize it as either:

  1. alive and intelligent (example: elephants, some whales, great apes)
  2. alive and non-intelligent (example: bacteria, lichen, viruses)
  3. non-living and intelligent (example: ?)
  4. non-living and non-intelligent (example: vacuum, atoms, radiation).

It is of course our generalist diet and dependence on social behavior that make recognizing life and intelligence so important and instinctive to us. It takes a lot of effort not to anthropomorphize everything that we encounter, not to react to what our brain thinks is a face, for example.

Our ancestors walked for thousands of kilometers, encountered new environments and ecosystems, and survived. We are all descendants of surviving colonists. Our species has traveled just about everywhere on the surface of this planet, and marked all the habitable places. That is, habitable for humans. As we have expanded the reach of our scientific equipment, we have found life in places where we never thought there would be life: at the bottom of the ocean at a thousand atmospheres of pressure, in scalding heat or acidity, living with no direct access to sunlight. We call these kinds of beings “extremophiles”, because they live in environments that are extreme compared to our human views of habitability.

What makes life, and indeed intelligence, so interesting is that it breaks molds, defies definition, and jumps from one host to another, forever in between destinations. Life, as we understand it, exists in the interstitials, in the sweet spots between states and phases, conversing in an endless dialogue between solid and fluid. As much as we humans like categorizing things, life itself cannot be contained in any box (or if it can, it will die inside it).

As a species, we have traveled thousands of kilometers horizontally, but move just ten kilometers straight up or down and the Earth becomes hostile, with either too much or too little pressure for human habitation. It is not the planet as a whole that is habitable, just certain zones within a thin layer of it. Our senses evolved for this thin layer between the earth and the sky, and until modern science we were not even aware of the many invisible layers and processes above and below it that our lives depend on: the ozone layer, the magnetosphere, groundwater, deep ocean currents.

The circulatory systems of the Earth, the water cycles, the carbon cycles, and various mineral cycles, all must align at sweet spots for life to flourish. Such fertile locations include of course the alluvial plains, where human city-cultures arose some ten thousand years ago. But there are also two especially active locations on Earth: the Amazon, and the Great Barrier Reef.

Gravity presses tons of bioavailable minerals to the bottom of the oceans, where the lack of sunlight prevents plants from making use of them. Only deep ocean currents or continental drift can bring these nutrients back to the surface. Driven by the rotation of the Earth, nutrient dust is regularly carried in the air from Africa to South America, where it meets water vapor coming off the Atlantic (both winds driven by the very same rotation of the planet). Where they meet: the Amazon.

The Great Barrier Reef, the largest structure created by life on Earth, was born when a large slice of coastal plain became flooded about ten thousand years ago. The edge of the continental shelf stays close enough to the surface to receive plenty of sunlight, creating large lagoon-like areas between it and the current coastline. As they do with ships that sink, corals began to colonize the trees and other matter from the moment they became submerged, slowly covering them in an accumulation of rock-like deposits. In addition to material flowing in from land, nearby ocean currents such as the Capricorn Eddy help sustain the ecosystem by bringing up nutrient-rich waters from the seabed.

When we now look to the Solar system, our colonist intuition tells us to look for solid surface, terra firma, somewhere to raise a flag and stake a claim. But to truly make humans a multiplanetary species, we need to build, grow, or transport an entire ecosystem capable of sustaining both itself and us humans at the top of the food chain. Quite a few species will have to become multiplanetary in order for that to happen.

As it happens, one of the most promising locations for humans discovered outside of Earth lies about 50 km above the surface of Venus, where the levels of temperature, pressure, gravity and radiation are all comparable to the thin layer of Earth we call home. There is of course nothing solid or liquid at that altitude, nowhere to plant anything, not even the intrepid explorer’s flag. A conceptual project (HAVOC) has been proposed to study the conditions at that altitude on Venus (and possibly to invent a flagpole adapted for clouds). But to make that layer into a permanent second home for humans requires designing the ecosystem from scratch. The idea is daunting, but also liberating. I for one am excited to imagine the steps needed to grow our own “Great Reef” floating in the sky of another planet. Most of the building materials should already be present; if you think about it, trees on Earth create solid wood almost entirely out of air, out of thin air. On Venus, CO2 and sunlight are in abundance.

In the same way as when life arose from the seas and moved onto land, moving life into space will have to be at least as much adaptation as conquest. The formidable gravity wells are climbed most economically when travelers are packed into the smallest mass possible. Ideally, just the instructions for growing life could be packed into small “seeds” that could then adapt to the local conditions on arrival (or it is conceivable that this has already happened long ago, and we are the result of panspermia).

All currently known life forms, even extremophiles, have evolved and adapted into the wonderful thin layer of our planet, this “region of interaction” first named biosphere by Eduard Suess. What are the necessary characteristics of such a layer of interaction, and how do they contribute to life as we know it?

To a first approximation, the surface of the planet is where the different phases of matter separate: the solid earth, the liquid water, and the gaseous air, like the concentric spheres of classical cosmology. But the solidness and fluidness of a substance is relative; our senses experience them as such because we have evolved into this layer. It is conceivable that a lifeform adapted to a different layer, or with a different mass and strength, would see things differently. For example, a bird might sense air currents like a fish senses water, or like an elephant senses vibrations in the ground.

The point of view also changes with timescale and density. The smallest flying insects experience air viscosity differently: their flight is more like swimming than gliding. If you are made of gossamer, you experience more things as hard and solid than if you were made of diamond. The slower your perception, the more foggy movement appears, and so on.

f_211_193_171_1024_crop_rot

Courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.

The spherical shape of a planet’s surface is the result of the opposing forces that act on its mass: gravity pulls everything together, while pressure pushes outward in all directions. The total sum of countless trillions of small collisions eventually separates the mass of the forming planet into layers, of which the separation between “surface” and “atmosphere” is just one.

The end result of the separation process could be just a lifeless set of perfect concentric spheres, like the rings of Saturn. But on Earth the separation is not complete: there are continuous cycles of matter interacting across the layer boundaries. An example is the water cycle, continuously evaporating, condensing, raining, sublimating, diluting and conveying all over the biosphere.

This spontaneous layering into spheres follows density, so it does not necessarily result in ordering by phases of matter. In addition, increasing pressure towards the center can melt material that would otherwise solidify. The current theory of the internal structure of our planet is that it has a solid inner core, surrounded by a liquid outer core, surrounded by a viscous mantle, with a mostly solid crust on top.

And of course, at planetary scales solidity is relative. Even gases, when they are dense and viscous enough, can behave more like liquids.

pia18337_rot_crop

Gored Clump in Saturn’s F Ring (Image Credit: NASA/JPL-Caltech/Space Science Institute)

Chain reactions must be triggered by something. Just like a snowflake cannot form without a seeding speck of dust, the complex biochemistry of life cannot appear if all the base materials are cleanly separated. Some flaws must be present in the interface; it cannot be a perfect mirror. Crystallization cannot start without a seed, and crystals do not naturally grow into perfect spheres. Exterior solid crusts inevitably erode into uneven rocks and sands, due to tidal forces, winds and waves, even meteor collisions if nothing else.

The concentric spheres of matter must interact, even interpenetrate to some extent, to become fertile places for life to evolve. Reservoirs, niches, potentials and flows should be present, with local variations in temperature, flow speed, density and such. Growth can then slowly adapt to different situations, as long as the overall conditions are stable enough.

Such interactive layering does not happen only on planets. The surface of the Sun is also wildly active and complex, but due to its heat it cannot sustain the kind of complex biochemistry that we associate with life. The energies of the chemical bonds in an organism need to be compatible with the ambient energy levels (including radiation) of the environment, so that macromolecules can be both synthesized and broken down near each other.

By a big stretch of the imagination, we might theorize a system of life not based on macromolecule synthesis, or even molecules. For example, we don’t really know what happens in the layers of quark soup in the pressures of a rotating neutron star. But for now, such things are beyond what is known, clearly in the realm of speculation which I said I would try to avoid here.

Even if we stay within the realm of chemistry, we should be looking for life in the shapes and forms that are expressed through it, rather than reducing it to any kind of quantitative process. Otherwise there would be no other purpose to life than “to hydrogenate carbon dioxide”.

Closeup of the two sides of a gold coin

Chrisos design

I wanted a gold coin design for Powers of Two, something that looked timeless and, unlike real ancient coins, could be stacked. For the design, I was greatly inspired by the description of the chrisos given to Severian at the beginning of The Book of The New Sun: the autarch’s face in profile on the front, and a flying ship on the back.

As I am not much of an artist, I still needed more inspiration. The profile on my chrisos is actually based on statues of Antinoos, the Greek favorite of the Roman emperor Hadrian, shown in an Egyptian headdress: suitably cosmopolitan, and somewhat androgynous in appearance. This choice of subject also gave me the opportunity to include hieroglyphs, which look suitably alien to modern readers. The ones on the right are copied from an artifact dedicated to Antinoos, the Barberini obelisk that now stands in Rome. In English they mean roughly “Lord Osiris-Antinoos”, according to Erman, Grimm and Grenier. With this vertical text on the right side of the profile, I needed some symbols on the left as well, for balance. I chose the ankh, the djed, and the was, three symbols often associated with Osiris.

The reverse side, at first glance, could be mistaken for an alien spaceship against a field of stars. But if you look closely, you can see that it could also be an Egyptian ship, reflected on a mirror-calm water surface. The stars in the background are (of course) from the obsolete constellation Antinous, as it might look from the ground in Northern Africa, looking towards the west.

coins_gold

For the renders, in Blender Cycles, I used the interference OSL shader written by prutser. As input it takes actual measured values of the complex index of refraction of a physical material, acquired from thin foils similar to the ones Faraday used. For the render above and in Powers of Two, I used pure gold with no interference layer, with just some procedurally generated scratches. As a bonus, here are two renders of the same coin with other IOR values and a varying interference layer on top, representing patina or dirt. They are meant to look like tarnished silver and copper (perhaps asimi and aes, in Gene Wolfe’s terminology?).

coins_silvercoins_copper

gold coins, in stacks on the lowest row of a wooden chessboard

Powers of Two

Place one gold piece in the first square, double the number in the second square, double again in the next square, and so on, filling all 64 squares of the board. How many gold pieces is that in total?
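The total is easy to compute exactly. A quick sketch (assuming, as the text does further below, a coin mass of 8 grams):

```python
# coins on square n (1..64) is 2**(n-1); the total is a geometric sum
total_coins = sum(2 ** (n - 1) for n in range(1, 65))
assert total_coins == 2 ** 64 - 1
print(total_coins)  # 18,446,744,073,709,551,615 coins

# at 8 grams per coin (the coin mass used later in the text):
print(total_coins * 8 / 1e6, "tonnes")  # roughly 1.5e14 tonnes of gold
```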

In real life, you should not try this indoors, since the stacks of coins will soon increase in height beyond any roof made by man. The board should also be sturdy, to support the tons of weight placed on it. Due to shearing winds at higher altitudes, the coins would need to be fused together pretty soon, turning from stacks into solid rods of gold.

Barely halfway through the board, at square 35, all the gold ever refined by humankind so far, about 180 thousand metric tons, would have been cast into coin-rods and placed on the chessboard. Around the same point, the rods would no longer press all their weight against the board: their tops would reach past geostationary altitude, where co-rotating with the Earth means moving faster than orbital speed. Eventually the rods would become tethers, their rotational inertia pulling away from the board with more force than Earth’s gravity presses them against it.
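That square can be found directly. A small check, assuming 8-gram coins and the 180,000-tonne figure for all refined gold:

```python
COIN_GRAMS = 8.0
ALL_REFINED_TONNES = 180_000  # all gold ever refined, as cited above

square = 1
while (2 ** square - 1) * COIN_GRAMS / 1e6 <= ALL_REFINED_TONNES:
    square += 1  # cumulative coins through square n is 2**n - 1
print(square)  # the square where cumulative gold first exceeds all refined gold
```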

In the end, assuming enough gold is refined, the stack of coins in the final square would reach far beyond the Kuiper belt and the orbit of Neptune; with coins about two millimeters thick, the final stack would be nearly two light-years tall. Even if the gold were welded into a solid rod, it would not stay straight, or even in one piece, for very long. Light would take years to travel from one end to the other, and local pulling forces would not be able to balance out along the whole rod.

The whole exercise is of course just an educational story, meant to reveal the poor grasp of numbers and magnitudes that human intuition is cursed with. It is not known where or by whom it was first told, but most versions use grains of wheat or rice, or salt, all things that come in small sizes.

But what if, instead of doubling the amount in each square, you were to halve the amount of gold in each square? Start with a single gold coin, cut it in half, move the half into the next square, then a quarter, and so on. Would it be possible to have a piece of gold in each of the 64 squares of the chessboard?

Gold is soft and malleable enough to cut, at least when pure (this is why real gold coins are usually not pure gold; but let’s assume purity here, for the sake of the argument). In fact, with stone-age tools it is possible to beat gold to such thinness that its edge becomes invisible: it is thinner than the shortest wavelength of visible light. Already on the second row of the chessboard it becomes necessary to use gold leaf instead of nuggets. This is good for the visibility of the remaining gold; it also helps that gold foil naturally attaches itself to the underlying surface, making it less likely that a light breeze blows away the invisible gold dust in the lower squares.

Gold was considered the noblest substance by the ancients, embodying the idea of material permanence. If gold can be beaten so thin that sunlight is visible through it, why not beat it even thinner, until it becomes as thin and light as the Emperor’s new clothes? What is the internal force that makes thinning the foil ever harder, the thinner it gets? And could we, in theory, continue dividing the gold forever, if we had the means to see it and the power to thin it?

In the middle of the 19th century, the great experimentalist Michael Faraday attached gold leaf to glass plates and studied it under the most powerful microscopes available. He was looking for, among other things, hints of any fine structure, such as the existence of atoms or molecules, strongly suggested by the works of Dalton, Avogadro, Berzelius and others. In his Bakerian Lecture of 1856 he writes:

“Yet in the best microscope, and with the highest power, the leaf seemed to be continuous, the occurrence of the smallest sensible hole making that continuity at other parts apparent, and every part possessing its proper green colour. How such a film can act as a plate on polarized light in the manner it does, is one of the queries suggested by the phenomena which requires solution.”

Faraday had a knack, and was already famous, for making unusual experiments and finding strange natural phenomena for other, more theoretically-minded scientists to explain. It was only a few years later, in 1865, that Josef Loschmidt referred to Faraday’s experiment in a paper that, for the first time in history, made a reasonable estimate of the mass of a single atom: a trillionth of a milligram [he used trillion in the long scale, meaning 10¹⁸].

Applying Loschmidt’s estimate to the chessboard, and assuming the coin is a solid 8 grams of Au, it could easily be divided 63 times, with hundreds of gold atoms still left even in the last square (8×10²¹ / 2⁶³ ≈ 870). There is indeed plenty of room at the bottom, as Dr. Feynman said. Since then, we have of course made more accurate measurements of atomic masses, but Loschmidt’s estimate was very close to the mark [the actual number of atoms in 8 grams of Au is about 2.4×10²², just three times higher].
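The arithmetic can be checked in a few lines, using Loschmidt’s long-scale trillionth of a milligram (10⁻²¹ g per atom) against the modern molar-mass figure:

```python
COIN_GRAMS = 8.0
loschmidt_atoms = COIN_GRAMS / 1e-21           # one atom ~ 1e-21 g (1865 estimate)
modern_atoms = COIN_GRAMS / 196.97 * 6.022e23  # molar mass of Au, Avogadro's number

print(round(loschmidt_atoms / 2 ** 63))  # hundreds of atoms left in the last square
print(round(modern_atoms / 2 ** 63))     # a few thousand with the modern value
print(modern_atoms / loschmidt_atoms)    # Loschmidt was only ~3x off
```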

We have known the approximate size of atoms for more than 150 years, and have made steady progress in the precision and accuracy of our instruments. But the sight of the apparently empty lower half of the chessboard really demonstrates how much of the physical world is hidden from the senses we were born with.

The gold leaves in squares 38-50 have the diameters of typical living cells, and are visible with a microscope. Optical microscopes become useless soon after square 50, because the diameters become smaller than the wavelengths of visible light. All the complex biochemistry of life, with practically infinite variations of form, happens at a scale too small for us to see. But there is plenty of room for the variations; every view in a microscope is like choosing one asteroid in a galaxy cluster of star systems to look at.
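Those diameters can be estimated with a simple model (my assumption, not from the text above: each piece is beaten into a circular leaf of typical gold-leaf thickness, about 100 nm):

```python
import math

GOLD_DENSITY = 19.3e6    # grams per cubic meter
LEAF_THICKNESS = 100e-9  # meters; assumed typical gold-leaf thickness

def leaf_diameter(square):
    grams = 8.0 / 2 ** (square - 1)               # gold remaining in this square
    area = grams / GOLD_DENSITY / LEAF_THICKNESS  # leaf area in square meters
    return 2 * math.sqrt(area / math.pi)          # diameter of a circular leaf

print(leaf_diameter(38) * 1e6, "micrometers")  # cell-sized, a few microns
print(leaf_diameter(51) * 1e9, "nanometers")   # below visible-light wavelengths
```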

Taking the hint from Feynman, the worldwide electronics industry has proceeded down the ladder, doubling the transistor count of production chips about every two years since the mid-1960s, by miniaturizing components. After fifty years, the future of this so-called Moore’s Law is uncertain, but the incredible impact on human society of affordable computing has been achieved by traversing just halfway down the chessboard of magnitude.

The problem of accuracy is not just in the instruments used; it is also in the amount of raw data needed to represent the information. Every new bit of information doubles the number of possible combinations. To represent measurements with 22 significant digits, for example in the recent detection of gravitational waves, more than 64 bits of precision are needed. For comparison, all the pictures in this post were created with Blender, which internally uses single-precision floats, with just 24 bits of precision. This is enough for human vision, and requires less hardware.
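The bit counts follow from a simple logarithm; a quick sketch:

```python
import math

def bits_for_digits(digits):
    # each decimal digit carries log2(10) ~ 3.32 bits of information
    return math.ceil(digits * math.log2(10))

print(bits_for_digits(22))  # 74 bits: more than a standard 64-bit word holds
print(bits_for_digits(7))   # 24 bits: roughly single precision, as in Blender
```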

Even with trillions of pixels in quadruple-precision accuracy, the human brain would not have the internal bandwidth to grasp both ends of the chessboard of magnitude at the same time. The most common way to display physical magnitudes is the zoom effect, as famously done in the Powers of Ten film in 1977. The zoom is a compromise: the apparent motion feels like traveling to other worlds, and there is no intuitive way to gauge the logarithmic speed of movement; but it seems no better way to convey differences in magnitude has been invented so far.

NIL EX NIHILO - NIL AD NIHILUM - IUNGIT AMOR - DISSILIUNT ODIIS - CORPORIBUS CAECIS IGITUR NATURA GERIT RES [nothing from nothing, nothing to nothing; love joins together, hatreds fly apart; thus nature carries out her works by means of unseen bodies]