
# “Mostly Empty Space”

We hear it all the time: “Atoms are mostly empty space.” Sometimes, that factoid is even quantified: “Atoms are X % empty space.” And as part of my ongoing mission to exorcise nagging questions spawned by third-grade science books, I want to find out what X actually is. How much of an atom is actually empty space?

Well, atoms are just a nucleus of protons (and sometimes neutrons), with electrons sort of vaguely existing in their vicinity. They don’t really orbit. Because quantum mechanics is weird and terrifying, you can think of an electron as being smeared out in a haze of existence around the nucleus. There’s a probability cloud surrounding the nucleus, and at each point within that cloud, the density of the cloud determines how likely the electron is to be hanging out there. Depending on the electron’s energy level in the atom, the probability cloud might be spherical or it might be dumbbell-shaped or it might be a bit like an onion. But the probability cloud is always blurry.

Which means that, in record time, we’ve hit a damn stumbling block. Here’s an irrelevant-looking picture:

That’s the graph of the function exp(-x^2), which doesn’t have a lot to do with electrons or atoms, but is a simple analogy for the probability density in a spherical electron cloud. How wide would you say that bump is? 2 units? 4 units? Just like with swear words, no matter where the fuck you draw the line, somebody’s going to disagree with you. But sooner or later, you’ve gotta buckle down and decide that A is over the line and B isn’t. Luckily, science is built to be sensible and rigorous, so as long as we pick a defined point where the bump (or electron cloud) ends, and as long as we all work from the same definition (or tell each other if our definitions are different), we can at least have concrete numbers to work from.

So, to answer the question “How much of an atom is empty space?” I’m going to use the covalent radius for the atoms in question. This is the radius of the atom as deduced from how far it sits from other atoms when it forms covalent molecular bonds. There are other definitions that come closer to our intuitive idea of radius (van der Waals radius, for instance) but covalent radii are easier to measure, and are often known with higher precision.

So now we have a way to look up one parameter: the radius of an atom, and therefore, its volume. The smallest and least massive atom is hydrogen, with a radius of about 25 picometers (0.025 nanometers, or 20,000 times smaller than a bacterium). Hydrogen is a nice atom. It has one proton and one electron. That’s it. And the probability cloud for its single electron is a pleasant spherical shape (at least in the ground state). The largest atom is cesium, with a radius of 260 picometers (0.26 nanometers, about 2,000 times smaller than a bacterium). And the most massive naturally-occurring atom is (arguably) uranium, with a radius of 175 picometers. It’ll make sense why I included two different “largest” atoms in a moment.

To figure out what fraction of an atom is empty space, we need to know how much of it is not empty space. (The missile knows where it is because the missile knows where it isn’t…) Since I spent the start of this post talking about electrons (and since the answer is nice and simple), let’s ask the question: what’s the volume of an electron?

Well, as far as physics can tell (as of January 2021), the answer is zero. The electron has no substructure that we know of—it has no internal parts. There’s just this infinitesimal speck that has all the properties of an electron, and that’s as much as we know. Quantum physics and experimental evidence suggest an electron cannot be larger than 10^-18 meters—if it were larger, that would cause observable effects. So, for our purposes, electrons are so small they’re not worth including.

That only leaves the nucleus. And hoo boy, if you thought the weird fuzziness of the electron cloud was frustrating, you ain’t seen nothin’ yet.

Let’s start with hydrogen, since it’s nice and simple. One zero-volume electron just sort of weirdly hanging out, in an unpleasant blurry (but spherically-symmetric) fashion in the vicinity of a single proton. Unlike the electron, the proton does have a measurable radius. It’s still a fuzzy, blurry, jittery thing that you can never quite pin down, but if you shoot, say, electrons at it and see how they bounce off, you can get an idea, and from that data, decide that the most sensible radius for a proton is 0.877 femtometers. That’s 0.000877 picometers, or 0.877 millionths of a nanometer. If a proton were the size of a 100-micron-diameter dust speck (right on the limit of naked-eye visibility; roughly the diameter of a hair), then a hair would be almost the diameter of the Earth. Did I mention that protons are really small? ‘Cause they are.

So a hydrogen atom is about 25 picometers in radius, and the proton, which is the only thing in it that takes up any space, has a radius of about 0.877 femtometers. The formula for volume of a sphere gives us a simple answer for “How much of a hydrogen atom is empty space?” 99.99999999999568%.
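Since sphere volumes scale as the cube of the radius, the filled fraction is just (proton radius / atomic radius) cubed. A quick sketch of that arithmetic, using the same figures as above:

```python
# Hydrogen: how much is NOT empty space?
# Sphere volumes scale as radius cubed, so the filled fraction is
# just the cube of the ratio of the two radii.
R_ATOM = 25e-12       # covalent radius of hydrogen, meters
R_PROTON = 0.877e-15  # proton charge radius, meters

filled_fraction = (R_PROTON / R_ATOM) ** 3
empty_percent = 100.0 * (1.0 - filled_fraction)

print(f"{empty_percent:.14f}% empty")  # ≈ 99.99999999999568% empty
```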

You guys know me, though—that’s too abstract a number. Too many digits. Let’s take, say, the United States. The USA is a big country. If it were a hydrogen atom, the whole thing would be empty space, except for a single patch about 28 inches (72 centimeters) across. Just big enough for an adult human to stand in. You probably wouldn’t be able to see it with the naked eye from an airplane. (I know I switched from volume to area here, but I used the same percentage to get the “area” of the proton, so the comparison is still valid, mathematically.)

For heavier elements, life gets more complicated. As I said, electrons are impossible to pin down for certain. They just exist in the nucleus’s general vicinity. Their existence is smeared out in a particular way around the nucleus. (That’s not exactly an accurate description of how it works, but I don’t know enough quantum mechanics to take you any deeper without the risk of misleading you.) The same is true for the nucleus, but because the protons and neutrons in a nucleus are much more massive, and because they’re so close together, and because they experience an additional very strong force (the strong nuclear force) that the electron doesn’t, their jittering is even more intense.

As a result, we know about the radii of atomic nuclei in the same vague way we know about the radius of a proton: we shoot particles at the nuclei and see how they bounce off. Most will just graze and barely deflect at all. Some will hit the nucleus closer to head-on, and some will hit it square enough to come back at you. By plotting how often electrons bounce off and at what angles, for a given electron speed, we can build up a pretty convincing picture of where all the matter in the nucleus is.

The radius of an atomic nucleus is roughly 1.5 femtometers times the cube root of the element’s atomic number (for elements with atomic numbers above 20; the more standard empirical formula is about 1.2 femtometers times the cube root of the mass number, but the two give similar answers for heavy elements). For cesium, the largest atom by covalent radius, the nucleus has a radius around 5.7 femtometers. A cesium atom has a covalent radius of about 260 picometers, and therefore, is 99.99999999999895% empty space. If the United States were a cesium atom, the nucleus would be barely the size of one and a half sheets of standard printer paper. And uranium, the largest atom we’re concerned with (by mass), has a covalent radius of 175 picometers with a nuclear radius of 6.8 femtometers. 99.9999999999941% empty space. Compared to the United States, that’d be a circle about 34 inches (86 cm) across. Big enough to sit in, but not lie in comfortably.
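To tie the three examples together, here’s a sketch that redoes all three atoms at once and scales each atom’s non-empty fraction to the area of the United States. The nuclear and covalent radii are the ones quoted above; the roughly 9.8 million km² US area is my assumption.

```python
import math

# (covalent radius in meters, nuclear radius in meters) -- from the text
ATOMS = {
    "hydrogen": (25e-12, 0.877e-15),
    "cesium":   (260e-12, 5.7e-15),
    "uranium":  (175e-12, 6.8e-15),
}
US_AREA_M2 = 9.8e12  # rough area of the United States (assumed)

results = {}
for name, (r_atom, r_nucleus) in ATOMS.items():
    filled = (r_nucleus / r_atom) ** 3        # volume fraction occupied
    patch_area = filled * US_AREA_M2          # same fraction, applied to area
    patch_diameter_cm = 200 * math.sqrt(patch_area / math.pi)
    results[name] = (100 * (1 - filled), patch_diameter_cm)
    print(f"{name:8s}: {results[name][0]:.13f}% empty, "
          f"US-sized patch ≈ {patch_diameter_cm:.0f} cm across")
```

The hydrogen and uranium patches come out around 73 cm and 86 cm across, in line with the figures above.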

You guys know me. I usually like to finish my posts with some clever coda. Some moral for the story. But there’s really not one this time. This time, the question was “How much of an atom is empty space,” and the answer is…well, it’s right up above.


# Pixel Solar System

(Click for full view.)

(Don’t worry. I’ve got one more bit of pixel art on the back burner, and after that, I’ll give it a break for a while.)

This is our solar system. Each pixel represents one astronomical unit, which is the average distance between Earth and Sun: 1 AU, 150 million kilometers, 93.0 million miles, 8 light-minutes and 19 light-seconds, 35,661 United States diameters, 389 times the Earth-Moon distance, or a 326-year road trip, if you drive 12 hours a day every day at roughly highway speed. Each row is 1000 pixels (1000 AU) across, and the slices are stacked so they fit in a reasonably-shaped image.
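All of those equivalences are a single division away from the length of 1 AU. A quick sketch of the arithmetic; the 105 km/h figure for “highway speed” (about 65 mph) is my assumption:

```python
AU_KM = 1.496e8         # one astronomical unit, in km
C_KM_PER_S = 299_792.458  # speed of light
MOON_DIST_KM = 384_400  # mean Earth-Moon distance

light_seconds = AU_KM / C_KM_PER_S     # ≈ 499 s, i.e. 8 min 19 s
moon_distances = AU_KM / MOON_DIST_KM  # ≈ 389

# Road trip: 12 hours a day, every day, at ~105 km/h
km_per_year = 105 * 12 * 365.25
road_trip_years = AU_KM / km_per_year  # ≈ 325 years

print(f"{light_seconds:.0f} light-seconds, {moon_distances:.0f} lunar "
      f"distances, {road_trip_years:.0f} years of driving")
```

The road trip lands at about 325 years; the exact figure depends on what you call highway speed.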

At the top-left of the image is a yellow dot representing the Sun. Mercury and Venus aren’t visible in this image. The next major body is the blue dot representing the Earth. Next comes a red dot representing Mars. Then Jupiter (peachy orange), Saturn (a salmon-pink color, which is two pixels wide because the difference between Saturn’s closest and furthest distance from the Sun is just about 1 AU), Uranus (cyan, elongated for the same reason), Neptune (deep-blue), Pluto (brick-red, extending slightly within the orbit of Neptune and extending significantly farther out), Sedna (a slightly unpleasant brownish), the Voyager 2 probe (yellow, inside the stripe for Sedna), Planet Nine (purple, if it exists; the orbits are quite approximate and overlap a fair bit with Sedna’s orbit). Then comes the Oort Cloud (light-blue), which extends ridiculously far and may be where some of our comets come from. After a large gap comes Proxima Centauri, the nearest (known) star, in orange. Alpha Centauri (the nearest star system known to host a planet) comes surprisingly far down, in yellow. All told, the image covers just over 5 light-years.


# A Toyota on Mars (Cars, Part 1)

I’ve said this before: I drive a 2007 Toyota Yaris. It’s a tiny economy car that looks like this:

(Image from RageGarage.net)

The 2007 Yaris has a standard Toyota 4-cylinder engine that can produce about 100 horsepower (74.570 kW) and 100 foot-pounds of torque (135.6 newton-metres). A little leprechaun told me that my particular Yaris can reach 110 mph (177 km/h) for short periods, although the leprechaun was shitting his pants the entire time.

A long time ago, [I computed how fast my Yaris could theoretically go]. But that was before I discovered Motor Trend’s awesome Roadkill YouTube series. Binge-watching that show led to a brief obsession with cars, engines, and drivetrains. There’s something very compelling about watching two men with the skills of veteran mechanics but maturity somewhere around the six-year-old level (they’re half a notch above me). And because of that brief obsession, I learned enough to re-do some of the calculations from my previous post, and say with more authority just how fast my Yaris can go.

Let’s start out with the boring case of an ordinary Yaris with an ordinary Yaris engine driving on an ordinary road in an ordinary Earth atmosphere. As I said, the Yaris can produce 100 HP and 100 ft-lbs of torque. But that’s not what reaches the wheels. What reaches the wheels depends on the drivetrain.

I spent an unholy amount of time trying to figure out just what was in a Yaris drivetrain. I saw some diagrams that made me whimper. But here’s the basics: the Yaris, like most front-wheel drive automatic-transmission cars, transmits power from the engine to the transaxle, which is a weird and complicated hybrid of transmission, differential, and axle. Being a four-speed, my transmission has the following four gear ratios: 1st = 2.847, 2nd = 1.552, 3rd = 1.000, 4th = 0.700. (If you don’t know: a gear ratio is [radius of the gear receiving the power] / [radius of the gear sending the power]. Gear ratio determines how fast the driven gear (that is, gear 2, the one being pushed around) turns relative to the drive gear. It also determines how much torque the driven gear can exert, for a given torque exerted by the drive gear. It sounds more complicated than it is. For simplicity’s sake: If a gear train has a gear ratio greater than 1, its output speed will be lower than its input speed, and its output torque will be higher than its input torque. For a gear ratio of 1, they remain unchanged. For a gear ratio less than one, its output speed will be higher than its input speed, but its output torque will be lower than its input torque.)

But as it turns out, there’s a scarily large number of gears in a modern drivetrain. And there’s other weird shit in there, too. On its way to the wheels, the engine’s power also has to pass through a torque converter. The torque converter transmits power from the engine to the transmission and also allows the transmission to change gears without physically disconnecting from the engine (which is how shifting works in a manual transmission). A torque converter is a bizarre-looking piece of machinery. It’s sort of an oil turbine with a clutch attached, and its operating principles confuse and frighten me. Here’s what it looks like:

(Image from dieselperformance.com)

Because of principles I don’t understand (It has something to do with the design of that impeller in the middle), a torque converter also has what amounts to a gear ratio. In my engine, the ratio is 1.950.

But there’s one last complication: the differential. A differential (for people who don’t know, like my two-months-ago self) takes power from one input shaft and sends it to two output shafts. It’s a beautifully elegant device, and probably one of the coolest mechanical devices ever invented. You see, most cars send power to their wheels via a single driveshaft. Trouble is, there are two wheels. You could just set up a few simple gears to make the driveshaft turn the wheels directly, but there’s a problem with that: cars need to turn once in a while. If they don’t, they rapidly stop being cars and start being scrap metal. But when a car turns, the inside wheel is closer to the center of the turning circle than the outside one. Because of how circular motion works, that means the outside wheel has to spin faster than the inside one to move around the circle. Without a differential, they have to spin at the same speed, meaning turning is going to be hard and you’re going to wear out your tires and your gears in a hurry. A differential allows the inside wheel to slow down and the outside wheel to spin up, all while transmitting the same amount of power. It’s really cool. And it looks cool, too:

(Image from topgear.uk.net)

(Am I the only one who finds metal gears really satisfying to look at?)

Anyway, differentials usually have a gear ratio different than 1.000. In the case of my Yaris, the ratio is 4.237.

So let’s say I’m in first gear. The engine produces 100 ft-lbs of torque. Passing through the torque converter converts that (so that’s why they call it that) into 195 ft-lbs, simultaneously reducing the rotation speed by a factor of 1.950. For reference, 195 ft-lbs of torque is what a bolt would feel if Clancy Brown was sitting on the end of a horizontal wrench 1 foot (30 cm) long. There’s an image for you. Passing through the transmission’s first gear multiplies that torque by 2.847, for 555 ft-lbs of torque. (Equivalent to Clancy Brown, Keith David, and a small child all standing on the end of a foot-long wrench.) The differential multiplies the torque by 4.237 (and further reduces the rotation speed), for a final torque at the wheel-hubs of 2,352 ft-lbs (equivalent to hanging two of my car from the end of that one-foot wrench, or sitting Clancy Brown and Peter Dinklage at the end of a 10-foot wrench. This is a weird party…)
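The chain above is just cumulative multiplication of the engine’s torque by each ratio in turn (with rotation speed divided by the same factors). A minimal sketch:

```python
ENGINE_TORQUE = 100.0  # ft-lbs

# Ratios from the text: torque converter, 1st gear, differential.
RATIOS = (1.950, 2.847, 4.237)

torque = ENGINE_TORQUE
for ratio in RATIOS:
    torque *= ratio  # torque goes up by `ratio`; rotation speed goes down

print(f"{torque:.0f} ft-lbs at the wheel hubs")  # ≈ 2352 ft-lbs
```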

By this point, you’d be well within your rights to say “Why the hell are you babbling about gear ratios?” Believe it or not, there’s a reason. I need to know how much torque reaches the wheels to know how much drag force my car can resist when it’s in its highest gear (4th). That tells you, to much higher certainty, how fast my car can go.

In 4th gear, my car produces (100 * 1.950 * 0.700 * 4.237), or 578 ft-lbs of torque. I know from previous research that my car has a drag coefficient of about 0.29 and a cross-sectional area of 1.96 square meters. My wheels have a radius of 14 inches (36 cm), so, from the torque equation (which is beautifully simple), the force they exert on the road in 4th gear is: 495 pounds, or 2,204 Newtons. Now, unfortunately, I have to do some algebra with the drag-force equation:

2,204 Newtons = (1/2) * [density of air] * [speed]^2 * [drag coefficient] * [cross-sectional area]

Which gives my car’s maximum speed (at sea level on Earth) as 174 mph (281 km/h). As I made sure to point out in the previous post, my tires are only rated for 115 mph, so it would be unwise to test this.
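Solving the drag equation for speed gives v = sqrt(2F / (ρ · Cd · A)). Here’s that algebra with the numbers above; the sea-level air density of 1.29 kg/m³ is my assumption, chosen because it reproduces the quoted top speed to within rounding:

```python
import math

FORCE_N = 2204        # driving force in 4th gear, from the torque figures
DRAG_COEFF = 0.29
FRONTAL_AREA = 1.96   # m^2
RHO_SEA_LEVEL = 1.29  # air density, kg/m^3 (assumed)

def top_speed_m_s(force_n: float, rho: float) -> float:
    """Speed at which aerodynamic drag exactly balances the driving force."""
    return math.sqrt(2 * force_n / (rho * DRAG_COEFF * FRONTAL_AREA))

v = top_speed_m_s(FORCE_N, RHO_SEA_LEVEL)
print(f"{v * 3.6:.0f} km/h ({v * 2.23694:.0f} mph)")
```

This lands at about 279 km/h (173 mph), a rounding error away from the 174 mph above.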

I live in Charlotte, North Carolina, United States. Charlotte’s pretty close to sea level. What if I lived in Denver, Colorado, the famous mile-high city? The lower density of air at that altitude would allow me to reach 197 mph (317 km/h). Of course, the thinner air would also mean my engine would produce less power and less torque, but I’m ignoring those extra complications for the moment.

And what about on Mars? The atmosphere there is fifty times less dense than Earth’s (although it varies a lot). On Mars, I could break Mach 1 (well, I could break the speed equivalent to Mach 1 at sea level on Earth; sorry, people will yell at me if I don’t specify that). I could theoretically reach 1,393 mph (2,242 km/h). That’s almost Mach 2. I made sure to specify theoretically, because at that speed, I’m pretty sure my tires would fling themselves apart, the oil in my transmission and differential would flash-boil, and the gears would chew themselves into a very fine metal paste. And I would die.

Now, we’ve already established that a submarine car, while possible, isn’t terribly useful for most applications. But it’s Sublime Curiosity tradition now, so how fast could I drive on the seafloor? Well, if we provide compressed air for my engine, oxygen tanks for me, dive weights to keep the car from floating, reinforcement to keep the car from imploding, and paddle-wheel tires to let the car bite into the silty bottom, I could reach a whole 6.22 mph (10.01 km/h). On land, I can run faster than that, even as out-of-shape as I am. So I guess the submarine car is still dead.
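Since only the fluid density changes between these scenarios, one loop covers all of them. The density values below are my rough assumptions, picked to be consistent with the figures in the text:

```python
import math

FORCE_N = 2204           # driving force in 4th gear
DRAG_AREA = 0.29 * 1.96  # drag coefficient times frontal area, m^2

DENSITIES = {            # kg/m^3, rough assumed values
    "Charlotte (sea level)": 1.29,
    "Denver":                1.00,
    "Mars":                  0.020,
    "seafloor (water)":      1000.0,
}

speeds_mph = {}
for place, rho in DENSITIES.items():
    v = math.sqrt(2 * FORCE_N / (rho * DRAG_AREA))  # drag = driving force
    speeds_mph[place] = v * 2.23694
    print(f"{place:21s}: {speeds_mph[place]:8.2f} mph")
```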

But wait! What if I wasn’t cursed with this low-power (and pleasantly fuel-efficient) economy engine? How fast could I go then? For that, tune in to Part 2. That’s where the fun begins, and where I start slapping crazy shit like V12 Bugatti engines into my hatchback.


# Nightmare Tongue: Minor Update

Feel free to ignore this one. This is just a condensed reference for the phonemes in The Nightmare Tongue. I promise I’ll get back to the ridiculous thought experiments next post!

Romanization: IPA/X-SAMPA

N: n / n (voiced alveolar nasal)

U: u / u (rounded close-back vowel)

Z: z / z  (voiced alveolar fricative)

A: ɑ / A (unrounded open-back vowel)

E: e / e (unrounded close-mid-front vowel)

S: s / s (voiceless alveolar fricative)

4: ð / D (voiced dental fricative)

8: θ / T (voiceless dental fricative)

R: ʀ / R\ or r / r (uvular trill or alveolar trill)

T: s’ / s_> (ejective voiceless alveolar fricative)

F: f / f (voiceless labiodental fricative)

V: v / v (voiced labiodental fricative)

K: k / k (voiceless velar plosive)

W: w / w (voiced labial-velar approximant)

7: t / t (voiceless alveolar plosive)

P: p / p (voiceless bilabial plosive)

X: ʃ / S (voiceless postalveolar fricative)

3: ʒ / Z (voiced postalveolar fricative)

?: ʔ / ? (voiceless glottal plosive)

D: d / d (voiced alveolar plosive)

G: g / g (voiced velar plosive)

O: oʊ / oU (diphthong)

B: b / b (voiced bilabial plosive)

I: ɪ / I (near-close near-front unrounded vowel)


# Nightmare Tongue 2: What Does it Sound Like?

In my (limited) experience, there are three ways you can start creating a language. 1) Focus on the grammar. This is what the creators of lojban and its predecessor loglan (mostly) did, basing their unambiguous language on mathematical predicate logic. 2) Focus on the sounds. I imagine this is what Tolkien did in creating Elvish, but I have no hard proof other than it’s a very good- and real-sounding language, which means at the very least that he paid a lot of attention to the phonology. 3) Focus on the alphabet. This is the route I usually took, because when I was young and impatient (well, more impatient), that was the only part fun enough to hold my interest. (Now that I’ve grown into the obsessive freak of nature that I am, I can focus on anything.)

For the Nightmare Tongue project, I’m taking Option 2: Start with the phonemes. Since I want this language to sound bizarre and creepy and evil, we need bizarre and creepy and evil phonemes. (Not that it’ll necessarily be evil and creepy; as I learned in German class, screaming Ich muss meine Hose finden! makes you sound like an irate and psychotic drill sergeant. Only in German (as far as I know) can “I need to find my pants!” sound threatening.)

When you learn another language, you find out very quickly that speakers of different languages attach very different meanings to sounds and even to equivalent words. For instance, in German, saying “I ate…” translates (very roughly; for some reason, they didn’t think learning the past or future tenses was terribly important) to “Ich aß…” which, when you say it, sounds like Alfred Hitchcock, in his haughtiest possible British English saying “Eek! Ass!” Even as a supposedly mature adult college student, I had to force myself not to smile at that. And considering how wild and diverse languages are, it would seem like each would have its own independent set of grammars and meanings. For instance, I learned from the incomparable Dr. Ralf Thiede that there’s an Aboriginal Australian language in which you add meaning to sentences by adding prefixes and suffixes to words, which means most sentences are all of one word long.

That said, there do seem to be some common principles underlying most or all human languages. For one thing, the “deep structure” of the grammar (including things like the existence of nouns and verbs, et cetera) is almost invariant across languages. Paraphrasing the late Sir Terry Pratchett (I’m sad that I have to add “late”…), you can’t have a language that has “No nouns and only one adjective, which is obscene.” That’s not how human languages work. This seems to be tied to the structure of the human brain and mind, and the way we recognize objects and people.

But on a deeper level, it’s possible that human languages don’t assign their sounds to meanings (and vice versa) completely arbitrarily. I’m going to put up a famous picture of two objects. One is called bouba and the other kiki. Or, if you prefer, keki and booba, or booboo and keekee, or boubou and keek. Decide which word names which thing:

(Source.)

Which one did you decide to call kiki? If you picked the spiky one on the left, you’re in the majority (the above-ninety-percent majority, according to one study). The study in question found that American college students and native speakers of Tamil in India called the spiky shape kiki over 90% of the time. (Fun fact: Tamil is among the longest-lived languages in everyday usage, its history going back to at least 100 BCE. Sanskrit, which today is almost exclusively used for religious studies and ceremonies among Buddhists, Hindus, and Jains, probably existed in a recognizable form before 1000 BCE. India’s cool.) Anyway–the bouba/kiki effect seems to hold across language barriers, and can even be identified among those who can’t read. Some say it might be related to synesthesia, a bizarre and awesome perceptual effect in which some people unconsciously and automatically experience certain stimuli (often numbers, particular letters of the alphabet, tastes, or days of the week) as having qualities belonging to a different sense entirely. Famously, the mathematical savant Daniel Tammet (whom I’ve mentioned before) reportedly experiences colors, images, shapes, and movements, a specific one associated with every integer from 1 to 10,000. More frequently in synesthesia, the digits from 0 to 9 will each have their own color. This effect might be more common than we think, too: I’m not a synaesthete, but I find it difficult not to associate zero with black, one with white, two with blue, three with a red triangle, and four with a green square. And it’s been suggested that the bouba/kiki effect is a more universal example of the same phenomenon: a particular shape is automatically associated more strongly with one sound than another. I don’t know Mr. Tammet personally, but I imagine if you tried to ask him to imagine a beautiful white number 6 (a number he dislikes and whose image he finds hard to grasp), he’d get a little upset. It just doesn’t make sense to him, the way he sees numbers. And maybe that’s why so many people called the pointy thing kiki.

As a reader pointed out not too long ago, I ramble like an absentminded professor who’s had too much coffee. That’s because, apart from the Ph.D. and the status (and the coherence, and the chance to teach the next generation of scholars…) that’s pretty much what I am. But my rambling is never without purpose: my point is that there are some sounds that are going to fit better in a Lovecraftian nightmare language than others.

Speaking of Lovecraft, consider the famous incantation: Ph’nglui mglw’nafh Cthulhu R’lyeh wgah’nagl fhtagn. This is, of course, a poor mimicry of the language of the Elder Things, which human tongues cannot speak. But consider fhtagn. If you pronounce it “FFT-AGH-NNN,” it sounds scary. Like a wolf growling, almost. If you pronounce it “FT-AY-NNN,” it loses most of its teeth. And if you pronounce it “FFT-AG-EN,” you just make me think of this

(Source.)

which doesn’t exactly scream “cosmic horror whose mere presence brings reality-splitting madness.”

Or, returning to Tolkien, consider the name of the nine Ringwraiths, the fearsome Black Riders: Nazgûl. That is fucking scary. I feel like I accidentally put a hex on my neighbors just by typing it. Something about that Z sound. You find it in a lot of scary names,

Beelzebub, for instance:

or Azazel, whose referent is uncertain, but whose name is often used to refer to demons.

Speaking of demons, I think demon names are going to be my main source for phonemes. I’m not religious enough to be a Satanist, so don’t worry, I’m not tumbling into madness (or at least not that particular flavor of madness), but I am, after all, creating a Nightmare Tongue. Why not take its sounds from the names of the most horrible things in folklore and mythology? What follows is a reference more for my sake than anything else, so don’t feel obligated to read the whole thing. These are just some of the places I’ll be drawing my phonemes from. Incidentally, although I hate to do it since it might alienate the non-linguists out there, I’m going to have no choice but to start bringing International Phonetic Alphabet symbols (or rather, the X-SAMPA versions, which will always be the first item in the parentheses) into this. I’ll try to sound them out wherever necessary.

From Nazgûl: N (X-SAMPA: n), A (X-SAMPA: A, American and British English: father), U (u, American English: food)

From Azazel: Z (z), AZ (Az, A as in father), ZAZ (zez, in American and most British English, e = fate or crate)

From English slither, which is both my favorite word and my pick for English’s creepiest word: S (s), L (l or l`, think “love” as pronounced by a creepy villain in a horror movie), TH (D, English: then. I will, of course, be using the awesome Old English/Icelandic character eth (ð) for this sound, and thorn (þ) for the un-voiced th sound at the beginning of words like thorn and throw.)

From Spanish and French: R (R\, the rolled one; this is funny because this is one sound I can barely make even on a good day, despite being able to pronounce almost all of the IPA chart).

From my crazy-ass head: TS (this is the first of the “really weird” phonemes I’m adding; to pronounce it, press the tip of your tongue to the back of your upper teeth and make a quick “S” or “TS” sound, like you’re trying to warn a cat off clawing at the curtains; the X-SAMPA symbol for this one is s_>. Fun fact: Learning the International Phonetic Alphabet will give you spells of what look like Tourette’s Syndrome. I’d like you to imagine me, sitting at my computer, reading Wikipedia articles on consonant articulation, and every few seconds going “TS!” as I try to figure out where in the mouth the sound is articulated. This is why you should never do linguistics in public.)

From everywhere: F (f), V (v)

From English liquid: QW (kW, k is the standard English voiceless velar plosive as in kick and kill and kettle, and W is a breathy, voiceless approximant a little like a cross between hwa and fwa).

From everywhere and my crazy-ass head: T (t_>, a bit like the English t in tea and touch, but pronounced with an audible pop by curling back the tongue and pressing the tip against the hard palate, building up air pressure in the throat, and releasing).

From some dialects of British English and a few cool Eastern European languages like Armenian and Georgian: > (k_>, a velar ejective, like the K in kite and kick, a sort of cross between a regular K and a click).

From Xibalba, the awesome Mayan word for the underworld, the X which is really more like English SH (S). Fun fact, with spoilers if you haven’t read the Popol Vuh, which you totally should: In Xibalba, there’s a Mayan handball court where the ball is somehow both spherical and razor-sharp. There’s a river of blood and a river of pus. There’s a demon dedicated to making people vomit blood. There’s a house that’s constantly full of flying daggers, a house full of decapitating screeching bats, and a house where you have to smoke cigars without burning them up, or else you die. One of the Mayan hero twins Hunahpu and Xbalanqe (Xbalanqe is pronounced very roughly “ZH-BALL-AN-KAY”) plays death-basketball with his brother’s severed head. And the skull of Hunahpu’s father One-Hunahpu sits in a tree and gets a girl pregnant by spitting in her hand. (Yes, I know there’s more to Mayan mythology than blood and death; the rest of the Popol Vuh has stuff like giant malevolent crocodiles, a group of two hundred boys that might be some sort of hive mind, and a fairly friendly creator deity called Q’uq’umatz whose name translates to the no less awesome “Sovereign Feathered Serpent.”) Also, the Mayan gods took three tries to create humanity. I may have the order wrong, but I think the first time, they tried making humans out of mud, and the results were horrible and deformed and most died before the gods mercy-killed the survivors. The second batch were made of wood and were terrifying fucking soulless automatons. That’s right: soulless wooden Mayan robots. Now there’s a sentence to make you sound like a delirious homeless dude on the bus. The third batch were made of clay (I think) and came out okay.

From everywhere: P (p)

From English words mix, ax, ox, and hex: K (ks)

From everywhere: W ( w )

From English words like noodle and super and (roughly) from German words like über: U (u)

From a lot of places, including the sound between “u” and “oh” in “uh-oh”, the end of the Cockney pronunciation of “cat”, and the British and sometimes American button (the buh-un form): ? (?, the glottal stop)

From everywhere: D (d)

From everywhere: G (g)

From everywhere: O (o, American English gross, American and British English: boat)

From everywhere: B (b)

From English leisure: ZH (Z)

From English pin: I (I)

From English keen: E (i)

I think I’ll make a master list that sits in its own post. For now, though, I need to go rest my brain and my tongue. I’ve pronounced more weird consonants in the last hour than a Polish man and his Welsh wife reading Larry Niven’s Man-Kzin Wars series to each other.

(I don’t know Welsh or Polish. I do know that there’s a Welsh town named Cwmbran, which I would pronounce “KOOM-BRAN.” There’s another Welsh town called Pwllheli (pronounced (very roughly) POO-KHELL-EE). And there’s the Czech city of Brno, which always looks odd to me when I write it.)

Standard
Uncategorized

# The Nightmare Tongue, Part 1

This series is going to be a little different. A sort of ongoing project. Don’t worry, I won’t let it derail my other bizarre ramblings.

Anyway, here’s the project: I’m going to construct a language. There’s a whole community dedicated to that, but it wasn’t for nothing that my grade-school teachers kept writing “Doesn’t play well with others” and “Is not very good at taking turns” and “Shithead” on my papers. I’m going to go at this more or less alone. (Unless any of my readers are compelled to hop on this loony rollercoaster with me.)

The premises and requirements of the Nightmare Tongue are simple. Not like lojban, a constructed language based on freakin’ mathematical logic which is so sprawling and complex that the language itself has its own Creative Commons License (I think). I want the premises to be simple, but there’s a reason I’m calling the language The Nightmare Tongue. I want it to be the kind of language demons or evil aliens or sentient hyenas would speak. I want a language that sounds scary. I want a language in which you can use a phrase to express weird thoughts James Joyce couldn’t express in English. (Re-reading that last sentence makes me realize I really need to get more sleep…) Why? Fun, mostly. I’ve dabbled in creating languages in the past, but this time I want to take a serious shot at it. This is something I’ve wanted to do ever since I learned just how much effort and love J.R.R. Tolkien put into Elvish. Tolkien is a famed and respected writer, and Elvish is a beautiful and nuanced language. I remember watching The Fellowship of the Ring on freakin’ VHS when it first came out, and how the actress playing Arwen said she loved speaking Elvish.

Tolkien is a famed and respected writer and scholar (if I remember correctly, he did his own translation of Beowulf). I’m a madman on the Internet with too much time on his hands. The Nightmare Tongue isn’t going to be nearly as pretty as Elvish. But here’s a list of the things I do want it to be:

• Pronounceable. I don’t want to turn this into some jackass art project where I deliberately try to be as dense as possible. Despite my sentient-hyena example from earlier, I want the Nightmare Tongue to be pronounceable by the human vocal tract. I do intend to stuff as many weird clicks and other bizarre consonants in there as I can, but I want it to be the kind of thing that a person can, with practice, speak fluently and with a nice rhythm.
• Weird-sounding. Icelandic is an infamously complicated language. Years ago, everybody panicked because an Icelandic volcano erupted and pretty much blocked the flyways through Europe for a week. The name of that volcano is, of course, Eyjafjallajökull. (It probably says something about me that I spelled that right on the first try, but that I still get the I and the E the wrong way around in “receive”…) Eyjafjallajökull is roughly pronounced (forgive me, Icelanders–even in text I’m going to mess it up) “EY-aff-yaht-lah-YO-kut-th.” Those double-Ls are a weird-sounding phoneme we don’t have in English: a voiceless alveolar lateral fricative. It’s (roughly) the kind of sound you make when you try to say the English letters “K” and “L” at the same time. You’ve probably seen this consonant before without realizing it. The name of the feathered serpent, the badass Aztec god Quetzalcoatl, has one at the end, so if you want to pronounce it authentically, unless you speak Nahuatl (there it is again), you’re going to end up spitting on the person in front of you. Random fact: I used to work with a guy from Mexico who spoke Nahuatl fluently. It sounded awesome.
• Complex. Once again, I don’t want to descend too far into navel-gazing (for one thing, navels are kinda gross). By which I mean I don’t want an impenetrable mess of a language that’s purposely too difficult for anybody to learn. It wouldn’t be hard to make a language like that. After all, as Lewis Carroll once pointed out (I’m paraphrasing), you and I are imperfect speakers of English and imperfect doers of arithmetic because it takes us a lot of effort to decipher the perfectly grammatical sentence “What is the sum of one plus one plus one plus one plus one plus one plus one plus one and the largest prime factor of one plus one plus one plus one all multiplied by one plus one plus one plus one.” I’m sorry you had to see that. My point is, I want the grammar to be bizarre, complex, and alien, but I don’t want some abstract-art nonsense that’s impossible and pretentious.
• Writeable. The Nightmare Tongue will have a written alphabet. When I first got interested in created languages (thanks to Tolkien), the invented writing was one of my favorite parts. Plus, one of my cousins gave me some sweet calligraphy pens for my birthday, so I’ll be able to write that alphabet in BLOOD RED. (I really need to get more sleep…)
• Complete. Or as close as I can get. This site is all about thought experiments, but it’s also about fleshing things out. I don’t want this to be an unfinished concept-art project like all the other languages I’ve tried to create. My goal, by the end of this, is to have a weird-sounding, twisted, evil language that you could write a competent dictionary for, and maybe a grammar reader for children. (I’ve seen The Exorcist. I know children can learn demon tongues.) Perhaps someday I’ll find a way to crowbar it into a novel or something.

That said, work will begin, with updates as developments warrant. If you’re not interested in this kind of thing, you won’t hurt my feelings by skipping these posts. Don’t worry, I’ll be getting back to my bread and butter–ludicrous thought experiments–as soon as my brain gets unstuck.

Be safe out there.

Standard
Uncategorized

# Sundiving, Part 2

(NOTE: After re-reading this post in 2021, I’m starting to doubt the validity of the math and physics here. I’m keeping the post up for posterity, but I’m warning you: read this with a very critical eye.)

In the previous post, I figured out how to get a spacecraft to an altitude of one solar radius (meaning one solar radius above the Sun’s surface, and two solar radii from its center). That’s nice and all, but unless we figure out how to get it the rest of the way down intact, then we’ve essentially done the same thing as a flight engineer who sends an astronaut into orbit in a fully-functional space capsule, but forgets to put a parachute on it. (Not that I know anything about that. Cough cough Kerbal Space Program cough…)

The sun is vicious. Anyone who’s ever had a good peeling sunburn knows this, and cringes at the thought. And anyone who, in spite of their parents’ warnings, has looked directly at the sun, also knows this. But I’ve got a better demonstration. I have a big 8.5 x 11-inch Fresnel lens made to magnify small print. I also have a lovely blowtorch that burns MAP-Pro, a gas that’s mostly propylene, which is as close as a clumsy idiot like me should ever come to acetylene. Propylene burns hot. About 2,200 Kelvin. I turned it on a piece of gravel and a piece of terra cotta. It got them both orange-hot, but that was the best it could do. The Fresnel lens, a cheap-ass plastic thing I bought at a drugstore, melted both in seconds (albeit in very small patches), using nothing more than half a square foot of sunlight.

Actually, the area of that magnifier is handy to have around. It’s 0.0603 square meters. On the surface of the Earth, we get (very roughly) 1,300 watts per square meter of sunlight (that’s called the solar constant). To melt terra cotta, I have to get the spot down to about a centimeter across. The lens intercepts about 80 Watts. If those 80 Watts are focused on a circle a centimeter across, then the target is getting irradiated with 770 solar constants, which, if it were a perfect absorber, would raise its temperature to 2,000 Kelvin. If I can get the spot down to half a centimeter across, then we’re talking 3,070 solar constants and temperatures approaching 3,000 Kelvin.
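As a sanity check, here’s that arithmetic in a few lines of Python, using the same rough numbers as above (1,300 W/m² for a solar constant, and treating the lit spot as a perfect absorber that re-radiates everything):

```python
import math

SOLAR_CONSTANT = 1300.0     # W/m^2, the rough surface figure used above
STEFAN_BOLTZMANN = 5.67e-8  # W/(m^2 K^4)

lens_area = 0.0603          # m^2: the 8.5 x 11-inch Fresnel lens
power = SOLAR_CONSTANT * lens_area  # watts the lens intercepts (~80 W)

def concentration(spot_diameter_m):
    """How many 'solar constants' land on a focused spot of this diameter."""
    spot_area = math.pi * (spot_diameter_m / 2.0) ** 2
    return power / spot_area / SOLAR_CONSTANT

def blackbody_temp(solar_constants):
    """Equilibrium temperature of a perfect absorber under this irradiance."""
    flux = solar_constants * SOLAR_CONSTANT
    return (flux / STEFAN_BOLTZMANN) ** 0.25

c_1cm = concentration(0.01)     # ~770 solar constants
c_5mm = concentration(0.005)    # ~3,070 solar constants
```

Feeding those concentrations into the blackbody formula gives roughly 2,000 K and 2,900 K, matching the figures above.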

And while I was playing around with my giant magnifier, I made a stupid mistake. Holding the lens with one hand, I reached down to re-position my next target. The light spot, about the size of a credit card, fell on the back of my hand. I said words I usually reserve for when I’ve hit my finger with a hammer. This is why you should always be careful with magnifying lenses. Even small ones can burn you and start fires.

The area of a standard credit card is about 13 times smaller than the area of my lens, so my hand was getting 13 solar constants. And even a measly 13 solar constants was more than enough to sting my skin like I was being attacked by a thousand wasps. Even at the limit of my crappy Fresnel lens, somewhere between 770 and 3,070 solar constants, we’re already in stone-melting territory.

At an altitude of 1 solar radius, our Sundiver will be getting to 11,537 solar constants. Enough to raise a perfect absorber to 4,000 Kelvin, which can melt every material we can make in bulk. Our poor Sundiver hasn’t even reached the surface and already it’s a ball of white-hot slag.
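The 11,537-solar-constant figure is just the inverse-square law run from the Sun’s luminosity. A quick sketch (the luminosity and radius values are standard reference numbers, not from the post, so the result lands near rather than exactly on 11,537):

```python
import math

L_SUN = 3.828e26         # W, solar luminosity
R_SUN = 6.957e8          # m, solar radius
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth
SIGMA = 5.67e-8          # Stefan-Boltzmann constant

def irradiance(altitude_in_solar_radii):
    """Sunlight flux at a given altitude above the Sun's surface, in W/m^2."""
    r = R_SUN * (1.0 + altitude_in_solar_radii)  # distance from the Sun's center
    return L_SUN / (4.0 * math.pi * r ** 2)

flux = irradiance(1.0)                # ~15.7 MW/m^2 at 1 solar radius altitude
constants = flux / SOLAR_CONSTANT     # ~11,500 solar constants
t_absorber = (flux / SIGMA) ** 0.25   # ~4,000 K for a perfect absorber
```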

Except that I’ve conveniently neglected one thing: reflectivity. If the Sundiver was blacker than asphalt, sure, it would reach 4,000 Kelvin and melt. But why on Earth would we paint an object black if we’re planning to send it to the place where all that heat-producing sunlight comes from? That’s even sillier than those guys you see wearing black hoodies in high summer.

My first choice for a reflective coating would be silver. But there’s a massive problem with silver. Here are two graphs to explain that problem:

(Source.)

(Source is obvious.)

The top spectrum shows the reflectivities of aluminum (Al), silver (Ag, because Latin), and gold (Au) at wavelengths between 200 nanometers (ultraviolet light; UV-C, to be specific: the kind produced by germicidal lamps) and 5,000 nanometers (mid-infrared, the wavelength heat-seeking missiles use).

The bottom spectrum is the blackbody spectrum for an object at a temperature of 5,778 Kelvin, which is a very good approximation for the solar spectrum. See silver’s massive dip in reflectivity around 350 nanometers? See how it happens, rather inconveniently, right where the solar spectrum is ramping up toward its peak? Sure, a silver shield would be good at reflecting most of the infra-red light, but what the hell good is that if it’s still soaking up all that violet and UV?

Gold does a little better (and you can see from that spectrum why they use gold in infrared mirrors), but it still bottoms out right where we don’t want it to. (Interesting note: see how gold is fairly reflective between 500 nanometers and 1,000 nanometers, but not nearly as reflective between 350 nanometers and 500 nanometers? And see how silver stays above 80% reflectivity between 350 and 1,000? That’s the reason gold is gold-colored and silver is silver-colored. Gold absorbs more green, blue, indigo, and violet than it does red, orange, and yellow. Silver is almost-but-not quite constant across this range, which covers the visible spectrum, so it reflects all visible light pretty much equally. Spectra are awesome.)

Much to my surprise, our best bet for a one-material reflector is aluminum. My personal experiences with aluminum are almost all foil-related. My blowtorch will melt aluminum, so it might seem like a bad choice, but in space, there’s so little gas that almost all heat transfer is by radiation, so it might still work. And besides, if you electropolish it, aluminum is ridiculously shiny.

(Image from the Finish Line Materials & Processes, Ltd. website.)

That’s shiny. And it’s not just smooth to the human eye–it’s smooth on scales so small you’d need an electron microscope to see them. They electro-polish things like medical implants, to get rid of the microscopic jagged bits that would otherwise really annoy the immune system. So get those images of crinkly foil out of your head. We’re talking a mirror better than you’ve ever seen.

Still, aluminum’s not perfect. Notice how its reflectivity spectrum has an annoying dip at about 800 nanometers. The sun’s pretty bright at that wavelength. Still, it manages 90% or better across almost all of the spectrum we’re concerned about. (Take note, though: in the far ultraviolet, somewhere around 150 nanometers, even aluminum bottoms out, and the sun is still pretty bright even at these short wavelengths. We’ll have to deal with that some other way.)

So our aluminum Sun-shield is reflecting 90% of the 15.7 million watts falling on every square meter. That means it’s absorbing the other 10%, or 1.57 million watts per square meter.

Bad news: even at an altitude of 1 solar radius, and even with a 90% reflective electropolished aluminum shield, the bastard’s still going to melt. It’s going to reach over 2,000 Kelvin, and aluminum melts at 933.
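That melting claim is straightforward Stefan-Boltzmann arithmetic, treating the flat shield as a perfect emitter that re-radiates the 10% it absorbs:

```python
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
FLUX = 15.7e6           # W/m^2 of sunlight at an altitude of 1 solar radius
ALUMINUM_MELT = 933.0   # K, aluminum's melting point

absorbed = (1.0 - 0.90) * FLUX           # the 10% the mirror soaks up
t_shield = (absorbed / SIGMA) ** 0.25    # equilibrium temperature, ~2,300 K
```

Well over twice aluminum’s melting point, as advertised.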

We might be able to improve the situation by using a dielectric mirror. Metal mirrors reflect incoming photons because metal atoms’ outer electrons wander freely from one atom to another, forming a conductive “sea”. Those electrons are easy to set oscillating, and that oscillation releases a photon of similar wavelength, releasing almost all the energy the first photon deposited. Dielectric mirrors, on the other hand, consist of a stack of very thin (tens of nanometers) layers with different refractive indices. For reference, water has a refractive index of 1.333. Those cool, shiny bulletproof Lexan windows that protect bank tellers have a refractive index of about 1.5. High-grade crystal glassware is about the same. Diamonds are so pretty and shiny and sparkly because their refractive index is 2.42, which makes for a lot of refraction and internal reflection.

This kind of reflection is what makes dielectric mirrors work. The refractive index measures how fast light travels through a particular medium. It travels at 299,792 km/s through vacuum. It travels at about 225,000 km/s through water and about 124,000 km/s through diamond. This means, effectively, that light has farther to go through the high-index stuff, and if you arrange the layers right, you can set it up so that the partial reflections from successive layer boundaries come out shifted by a whole number of wavelengths, which means the waves add up rather than canceling out, which means the light leaves and takes its energy with it, rather than being absorbed and leaving its energy in your mirror.
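The classic recipe for those layer thicknesses is a quarter of the wavelength as measured inside each material. A sketch, using silica and titania as the low- and high-index materials (that pairing is my example, not the post’s, though it’s a common real-world choice):

```python
def quarter_wave_thickness(wavelength_nm, refractive_index):
    """Layer thickness that puts successive partial reflections in phase:
    a quarter of the wavelength inside the material, i.e. lambda / (4 n)."""
    return wavelength_nm / (4.0 * refractive_index)

# To reflect green light (550 nm) with alternating layers of
# silica (n ~ 1.45) and titania (n ~ 2.4):
t_low = quarter_wave_thickness(550, 1.45)   # ~95 nm
t_high = quarter_wave_thickness(550, 2.4)   # ~57 nm
```

Both come out in the tens of nanometers, matching the layer thicknesses mentioned above.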

This, of course, only works for a wavelength that matches up with the thickness of your layers. Still, close to the target frequency, a dielectric mirror can do better than 99.9% reflectivity. And if you use some scary algorithms to optimize the thicknesses of the different layers, you can set it up so that it reflects over a much broader spectrum, by making the upper layers very thin to reflect short-wavelength light (UV, et cetera) and the deeper layers reflect red and infra-red. The result is a “chirped mirror,” which is yet another scientific name that pleases me in ways I don’t understand. Here’s the reflection spectrum of a good-quality chirped mirror:

(Source.)

Was inserting that spectrum just an excuse to say “chirped mirror” again? Possibly. Chirped mirror.

Point is, the chirped mirror does better than aluminum for light between 300 and 900 nanometers (which covers most or all of the visible spectrum). But it drops below 90% for long enough that it’s probably going to overheat and melt. And there’s another problem: even at an altitude of 1 solar radius, the Sundiver’s going to be going upwards of 400 kilometers per second. If the Sundiver crosses paths with the smallest of asteroids (thumbnail-sized or smaller), or even a particularly bulky dust grain, there’s going to be trouble. To explain why, here’s a video of a peanut-sized aluminum cylinder hitting a metal gas canister at 7 kilometers per second, 57 times slower than the Sundiver will be moving:

We have a really, really hard time accelerating objects anywhere near this speed. We can’t do too much better than 10 to 20 km/s on the ground, and in space, we can at best double or triple that, and only if we use gravity assists and clever trajectories. On the ground, there are hypersonic dust accelerators, which can accelerate bacterium-sized particles to around 100 km/s, which is a little better.

But no matter the velocity, the news is not good. A 5-micron solid particle will penetrate at least 5 microns into the sunshade (according to Newton’s impact depth approximation). Not only will that rip straight through dozens of layers of our carefully-constructed chirped mirror, but it’s also going to deposit almost all of its kinetic energy inside the shield. A particle that size only masses 21.5 picograms, so its kinetic energy (according to Wolfram Alpha) is about the same as it takes to depress a computer key. Not much, but when you consider that this is a bacterium-sized mote pressing a computer key, that’s a lot of energy. It’s also over 17,000 times as much kinetic energy as you’d get from 21.5 picograms of TNT.

As for a rock visible to the naked eye (100 microns in diameter, as thick as a hair), the news just gets worse. A particle that size delivers 110.3 Joules, twenty times as much as a regular camera’s flash, and one-tenth as much as one of those blinding studio flashbulbs. All concentrated on a volume too small to squeeze a dust mite into.

And if the Sundiver should collide with a decent-sized rock (1 centimeter diameter, about the size of a thumbnail), well, you might as well just go ahead and press the self-destruct button yourself, because that pebble would deliver as much energy as 26 kilos (over 50 pounds) of TNT. We’re talking a bomb bigger than a softball. You know that delicately-layered dielectric mirror we built, with its precisely-tuned structure chemically deposited to sub-nanometer precision? Yeah. So much for that. It’s now a trillion interestingly-structured fragments falling to their death in the Sun.
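Those impact energies fall out of nothing but ½mv². A sketch, assuming spherical grains of ordinary silicate rock (about 2.65 g/cc, my assumption) and the 400 km/s figure from earlier:

```python
import math

TNT_J_PER_KG = 4.184e6   # standard energy density of TNT
ROCK_DENSITY = 2650.0    # kg/m^3: ordinary silicate rock (an assumption)

def impact_energy_joules(diameter_m, speed_m_s, density=ROCK_DENSITY):
    """Kinetic energy of a spherical grain hitting the shield."""
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    mass = volume * density
    return 0.5 * mass * speed_m_s ** 2

e_dust = impact_energy_joules(100e-6, 400e3)   # the hair-width rock: ~110 J
e_pebble = impact_energy_joules(0.01, 400e3)   # the 1 cm pebble: ~1.1e8 J
tnt_kg = e_pebble / TNT_J_PER_KG               # ~26 kg of TNT equivalent
```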

My point is that a dielectric mirror, although it’s much more reflective than a metal one, won’t cut it. Not where we’re going. We have to figure out another way to get rid of that extra heat. And here’s how we’re going to do it: heat pipes.

The temperature of the shield will only reach 2,000 Kelvin if its only pathway for getting rid of absorbed heat is re-radiating it. And it just so happens that our ideal shield material, aluminum, is a wimp and can’t even handle 1,000 Kelvin. But aluminum is a good conductor of heat, so we can just thread the sunshade with copper pipes, sweep the heat away with a coolant, and transfer it to a radiator.

But how much heat are we going to have to move? And has anybody invented a way to move it without me having to do a ridiculous handwave? To find that out, we’re going to need to know the area of our sunshade. Here’s a diagram of that sunshade.

I wanted to make a puerile joke about that, but the more I look at it, the less I think “sex toy” and the more I think “lava lamp.” In this diagram, the sunshade is the long cone. The weird eggplant-shaped dotted line is the hermetically-sealed module containing the payload. That payload will more than likely be scientific instruments, and not a nuclear bomb with the mass of Manhattan island, because that was probably the most ridiculous thing about Sunshine. Although (spoiler alert), Captain Pinbacker was pretty out there, too.

The shield is cone-shaped for many reasons. One is that, for any given cross-sectional radius, you’re going to be absorbing the same amount of heat no matter the shield’s area, but the amount you can radiate depends on total, not cross-sectional, area. Let’s say the cone is 5 meters long and 2 meters in diameter at the base. If it’s made of 90% reflective electropolished aluminum, it’s going to absorb 4.93 megawatts of solar radiation at an altitude of 1 solar radius. Its cross-section is 3.142 square meters, but its total surface area is 16.02 square meters. That means that, to lose all its heat by radiation alone, the shield would have to reach a blackbody temperature of 1,500 Kelvin. Still almost twice aluminum’s melting point, but already a lot more bearable. If we weren’t going to get any closer than an altitude of 1 solar radius, we could swap the aluminum mirror out for aluminum-coated graphite and we could just let the shield cool itself. I imagine this is why the original solar probe designs used conical or angled bowl-shaped shields: small cross-sectional area, but a large area to radiate heat. But where we’re going, I suspect passive cooling is going to be insufficient sooner or later, so we might as well install our active cooling system now.
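The cone geometry and that 1,500 Kelvin figure check out in a few lines:

```python
import math

SIGMA = 5.67e-8    # Stefan-Boltzmann constant
FLUX = 15.7e6      # W/m^2 at an altitude of 1 solar radius
REFLECTIVITY = 0.90

length = 5.0       # m: cone height
radius = 1.0       # m: base radius (2 m diameter)

cross_section = math.pi * radius ** 2        # ~3.142 m^2: what the Sun "sees"
slant = math.sqrt(length ** 2 + radius ** 2)
lateral_area = math.pi * radius * slant      # ~16.02 m^2: what can radiate

absorbed = (1.0 - REFLECTIVITY) * FLUX * cross_section   # ~4.93 MW
t_cone = (absorbed / (SIGMA * lateral_area)) ** 0.25     # ~1,500 K
```

The same absorbed power spread over five times the radiating area is what knocks the equilibrium temperature down from ~2,300 K (flat shield) to ~1,500 K.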

Heat pipes are awesome things. You can find them in most laptops. They’re the bewildering little copper tubes that don’t seem to serve any purpose. But they do serve a purpose. They’re hollow. Inside them is a working fluid (which, at laptop temperatures, is usually water or ammonia). The tube is evacuated to a fairly low pressure, so that, even near its freezing point, water will start to boil. The inner walls of the heat pipe are covered with either a metallic sponge or with a series of thin inward-pointing fins. These let the coolant wick to the hot end, where it evaporates. Evaporation is excellent for removing heat. It deposits that heat at the cold end, where something (a passive or active radiator or, in the case of a laptop, a fan and heat sink) disposes of the heat.

Many spacecraft use heat pipes for two reasons. 1) The absence of an atmosphere means the only way to get rid of heat is to radiate it, either from the spacecraft itself, or, more often, by moving the heat to a radiator and letting it radiate from there; heat pipes do this kind of job beautifully; 2) most heat pipes contain no moving parts whatsoever, and will happily go on doing their jobs forever as long as there’s a temperature difference between the ends, and as long as they don’t spring a leak or get clogged.

On top of this, some heat pipes can conduct heat even better than solid copper. Copper’s thermal conductivity is 400 Watts per meter per Kelvin difference, which is surpassed only by diamond (and graphene, which we can’t yet produce in bulk). But heat pipes can do better than one-piece bulk materials: Wikipedia says 100,000 Watts per meter per Kelvin difference, which my research leads me to believe is entirely reasonable. (Fun fact: high-temperature heat pipes have been used to transport heat from experimental nuclear reactor cores to machinery that can turn that heat into electricity. These heat pipes use molten frickin’ metal and metal vapor as their working fluids.)

The temperature difference is going to be the difference between the temperature of the shield (in this case, around 1,500 Kelvin at the beginning) and outer space (which is full of cosmic background radiation at an effective temperature of 2.7 Kelvin, but let’s say 50 Kelvin to account for things like reflected light off zodiacal dust, light from the solar corona, and because it’s always better to over-build a spacecraft than to under-build it).

When you do the math, at an altitude of 1 solar radius, we need to transport 4.93 megawatts of heat over a distance of 5 meters across a temperature differential of 1,450 Kelvin. That comes out to 680 Watts per meter per Kelvin difference. Solid copper can’t quite manage it, but a suitable heat pipe could do it with no trouble.
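For the record, here’s the conduction arithmetic with the cross-sectional area of the heat path written out explicitly (the 1 m² of total pipe cross-section is my own assumption, since the post leaves the area implicit). The copper-versus-heat-pipe conclusion holds either way:

```python
Q = 4.93e6    # W of absorbed sunlight to haul away
L = 5.0       # m from the cone to the radiator
DT = 1450.0   # K between the ~1,500 K shield and the ~50 K cold end

def required_conductivity(heat_path_area_m2):
    """Conductivity needed to push Q watts through this area over length L,
    from Q = k * A * dT / L."""
    return Q * L / (heat_path_area_m2 * DT)

# With an assumed 1 m^2 of total pipe cross-section:
k_needed = required_conductivity(1.0)   # ~17,000 W/(m*K)

COPPER = 400.0        # W/(m*K): solid copper
HEAT_PIPE = 100_000   # W/(m*K) effective, the figure quoted above
```

Far beyond copper, comfortably within heat-pipe territory.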

But we still have to get rid of the heat. For reasons that will become clear when Sundiver gets closer to the Sun, the back of the spacecraft has to be very close to a flat disk. So we’ve got 3.142 square meters in which to fit our radiator. Let’s say 3 square meters, since we’re probably going to want to mount things like thruster ports and antennae on the protected back side. Since we’re dumping 4.93 megawatts through a radiator with an area of 3 square meters, that radiator’s going to have to be able to handle a temperature of at least 2,320 Kelvin. Luckily, that’s more than manageable. Tungsten would work, but graphite is probably our best choice, because it’s fairly tough, it’s unreactive, and it’s a hell of a lot lighter than tungsten, which is so dense they use it in eco-friendly bullets as a replacement for lead (yes, there’s such a thing as eco-friendly bullets). Let’s go with graphite for now, and see if it’s still a good choice closer to the Sun. (After graphite, our second-best choice would be niobium, which is only about as dense as iron, with a melting point of 2,750 Kelvin. I’m sticking with graphite, because things are going to get hot pretty fast, and the niobium probably won’t cut it. Plus, “graphite radiator” has a nicer ring to it than “niobium radiator.”)
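The 2,320 Kelvin figure is the same Stefan-Boltzmann arithmetic run backwards: given the power to dump and the radiator area, solve for the temperature:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant
Q = 4.93e6        # W of heat to dump
AREA = 3.0        # m^2 of radiator on the flat back

t_radiator = (Q / (SIGMA * AREA)) ** 0.25   # ~2,320 K
# Graphite doesn't sublime until roughly 3,900 K, and niobium melts at
# 2,750 K, so both candidate materials survive at this altitude.
```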

Our radiator’s going to be glowing orange-hot. We’ll need a lot of insulation to minimize thermal contact between the shield-and-radiator structure and the payload, but we can do that with more mirrors, more heat pipes, and insulating cladding made from stuff like calcium silicate or thermal tiles filled with silica aerogel.

Of course, all the computations so far have been done for an altitude of 1 solar radius. And I didn’t ask for a ship that could survive a trip to 1 solar radius. I want to reach the freakin’ surface! Life is already hard for our space probe, and it’s going to get worse very rapidly. So let’s re-set our clock, with T=0 seconds being the moment the Sundiver passes an altitude of 1 solar radius.

Altitude: 0.5 solar radii

T+50 minutes, 46 seconds

Speed: 504 km/s

Solar irradiance: 28 megawatts per square meter (20,600 solar constants)

Temperature of a perfect absorber: 4,700 Kelvin (hot enough to boil titanium and melt niobium)

Total heat flux: 8.79 megawatts

Temperature of a 90% reflective flat shield: 2,700 Kelvin (almost hot enough to boil aluminum)

Temperature of Sundiver’s conical shield (radiation only): 1,764 Kelvin (still too hot for aluminum)

Radiator temperature: 2,600 Kelvin (manageable)

Required heat conductivity: 1,000 Watts per meter per Kelvin difference (manageable)

Altitude: 0.25 solar radii

T+1 hour, 0 minutes, 40 seconds

Speed: 553 km/s

Solar irradiance: 40.3 megawatts per square meter (29,600 solar constants)

Temperature of a perfect absorber: 5,200 Kelvin (hot enough to boil almost all metals. Not tungsten, though. Niobium boils.)

Total heat flux: 12.66 megawatts

Temperature of a 90% reflective flat shield: 2,900 Kelvin (more than hot enough to boil aluminum)

Temperature of Sundiver’s conical shield (radiation only): 1,900 Kelvin (way too hot for aluminum)

Radiator temperature: 2,900 Kelvin (more than manageable, but the radiant heat would probably hurt your eyes)

Required heat conductivity: 1,400 Watts per meter per kelvin difference (manageable)

Altitude: 0.1 solar radii

T+1 hour, 5 minutes, 25 seconds

Speed: 589 km/s

Solar irradiance: 52 megawatts per square meter

Temperature of a perfect absorber: 5,500 Kelvin (tungsten melts, but still doesn’t boil; tungsten’s tough stuff; niobium is boiling)

Total heat flux: 16.34 megawatts

Temperature of a flat shield: 3,000 Kelvin (tungsten doesn’t melt, but it’s probably uncomfortable)

Temperature of our conical shield: 2,000 Kelvin (getting uncomfortably close to aluminum’s boiling point)

Radiator temperature: 3,100 Kelvin (tungsten and carbon are both giving each other worried looks; the shield can cause fatal radiant burns from several meters)

Required heat conductivity: 1,600 watts per meter per kelvin difference (still manageable, much to my surprise)

Altitude: 0.01 solar radii (1% of a solar radius)

T+1 hour, 8 minutes, 11 seconds

Speed: 615 km/s

Irradiance: 61 megawatts per square meter

Temperature of a perfect absorber: 5,700 Kelvin (graphite evaporates, but tungsten is just barely hanging on)

Total heat flux: 19.38 megawatts

Temperature of a flat shield: 3,200 Kelvin (most materials have melted; tungsten and graphite are still holding on)

Temperature of our conical shield: 2,100 Kelvin (titanium melts)

Radiator temperature: 3,200 Kelvin (tungsten and graphite are still stable, but at this point, the radiator itself is almost as much of a hazard as the Sun)

Required heat conductivity: 1,900 Watts per meter per Kelvin difference (we’re still okay, although we’re running into trouble)
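Every number in those altitude tables comes from the same handful of formulas: gravitational free-fall speed, inverse-square irradiance, and Stefan-Boltzmann for the temperatures. Here’s a sketch that regenerates them, assuming the probe falls from rest far away (which matches the speeds listed, to within rounding):

```python
import math

G = 6.674e-11     # gravitational constant
M_SUN = 1.989e30  # kg
R_SUN = 6.957e8   # m
L_SUN = 3.828e26  # W
SIGMA = 5.67e-8   # Stefan-Boltzmann constant

CROSS = math.pi * 1.0 ** 2                 # m^2: cone cross-section (1 m radius)
LATERAL = math.pi * 1.0 * math.sqrt(26.0)  # m^2: cone surface (5 m tall)

def stats(altitude):
    """Speed (km/s), irradiance (W/m^2), perfect-absorber temperature (K),
    and conical-shield temperature (K) at a given altitude in solar radii."""
    r = R_SUN * (1.0 + altitude)
    speed = math.sqrt(2.0 * G * M_SUN / r) / 1000.0   # fall from rest far away
    flux = L_SUN / (4.0 * math.pi * r ** 2)
    t_absorber = (flux / SIGMA) ** 0.25
    absorbed = 0.10 * flux * CROSS                    # 90% reflective shield
    t_cone = (absorbed / (SIGMA * LATERAL)) ** 0.25   # radiation-only cooling
    return speed, flux, t_absorber, t_cone
```

For example, `stats(0.5)` reproduces the 0.5-solar-radii row: about 504 km/s, 28 MW/m², 4,700 K, and 1,764 K, and `stats(0.0)` gives the 618 km/s impact speed.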

The Sundiver finally strikes the Sun’s surface traveling at 618 kilometers per second. Except “strike” is a little melodramatic. The Sundiver’s no more striking the Sun than I strike the air when I jump off a diving board. The Sun’s surface is (somewhat) arbitrarily defined as the depth at which the Sun’s plasma gets thin enough to transmit over half the light that hits it. At an altitude of 0 solar radii, the Sun’s density is a tenth of a microgram per cubic centimeter. For comparison, the Earth’s atmosphere doesn’t get that thin until you get 60 kilometers (about 37 miles) up, which is higher than even the best high-altitude balloons can go. Even a good laboratory vacuum is denser than this.

But even this thin plasma is a problem. The problem isn’t that the Sundiver is crashing into too much matter; it’s that the matter it does hit deposits a lot of kinetic energy. Falling at 618 kilometers per second, it encounters solar wind protons traveling the opposite direction at upwards of 700 kilometers per second, for a total closing speed of 1,300 kilometers per second. Even at photosphere densities, when the gas is hitting you at 1,300 kilometers per second, it transfers a lot of energy. We’re talking 17 gigawatts per square centimeter, enough to heat the shield to a quarter of a million Kelvin.
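That kinetic-energy flux scales as density times the cube of the closing speed. A rough sketch with the numbers above (the one-half is the usual kinetic-energy bookkeeping; the exact coefficient depends on how the gas actually interacts with the shield, so this lands in the same ballpark as the 17 GW/cm² figure rather than exactly on it):

```python
RHO = 1e-4        # kg/m^3: a tenth of a microgram per cubic centimeter
V = 1.3e6         # m/s: 618 km/s of fall plus ~700 km/s of oncoming wind
SIGMA = 5.67e-8   # Stefan-Boltzmann constant

flux = 0.5 * RHO * V ** 3          # W/m^2 of kinetic energy dumped on the shield
gw_per_cm2 = flux / 1e9 / 1e4      # order 10 GW per square centimeter
t_equiv = (flux / SIGMA) ** 0.25   # a couple hundred thousand Kelvin
```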

This spells the end for the Sundiver. It might survive a few seconds of this torture, but its heat shield is going to be evaporating very rapidly. It won’t get more than a few thousand kilometers into the photosphere before the whole spacecraft vaporizes.

In fact, even at much lower densities (a million hydrogen atoms per cubic centimeter), the energy flux due to the impacts of protons alone is greater than one solar constant. (XKCD’s What-If, the inspiration for this whole damn blog, pointed this out when talking about dropping tungsten countertops into the sun.) At 1.0011 solar radii, the proton flux is more than enough to heat the shield up hotter than a lightning bolt. As a matter of fact, when the solar wind density exceeds 0.001 picograms per cubic centimeter (1e-15 g/cc), the energy flux from protons alone is going to overheat the shield. It’s hard to work out at what altitude this will happen, since we still don’t know very much about the environment and the solar wind close to the sun (one of the questions Solar Probe+ will hopefully answer when (if) it makes its more pedestrian and sensible trip to 8 solar radii.) But we know for certain the shield will overheat by the time we hit zero altitude. The whole Sundiver will turn into a wisp of purplish-white vapor that’ll twist and whirl away on the Sun’s magnetic field.

But even if heating from the solar wind wasn’t a problem, the probe was never going to get much deeper than zero altitude. Here’s a list of all the problems that would kill it, even if the heat from the solar wind didn’t:

1) This close to the Sun, the sun’s disk fills half the sky, meaning anything that’s not inside the sunshade is going to be in direct sunlight and get burned off. That’s why I said earlier that the back of the Sundiver had to be very close to flat.

2) The radiator will reach its melting point. Besides, we would probably need high-power heat pumps rather than heat pipes to keep heat flowing from the 2,000 Kelvin shield to the 3,000-Kelvin radiator. And even that might not be enough.

3) Even if we ignore the energy added by the proton flux, those protons are going to erode the shield mechanically. According to SRIM, the conical part of the shield (which has a half-angle of 11 degrees) is going to lose one atom of aluminum for every three proton impacts. At this rate, the shield’s going to be losing 18.3 milligrams of aluminum per second to impacts alone. While that’s not enough to wear through the shield, even if it’s only a millimeter thick, my hunch is that all that sputtering is going to play hell with the aluminum’s structure, and probably make it a lot less reflective.

4) Moving at 618 kilometers per second through a magnetic field is a bad idea. Unless the field is perfectly uniform (the Sun’s is the exact opposite of uniform: it looks like what happens if you give a kitten amphetamines and set it loose on a ball of yarn), you’re going to be dealing with some major eddy currents induced by the field, and that means even more heating. And we can’t afford any extra heating.

5) This is related to 1): even if the Sun had a perfectly well-defined surface (it doesn’t), the moment Sundiver passed through that surface, its radiator would be less than useless. In practical terms, the vital temperature differential between the radiator and empty space would vanish, since even in the upper reaches of the photosphere, the temperature exceeds 4,000 Kelvin. There simply wouldn’t be anywhere for the heat to go. So if we handwaved away all the other problems, Sundiver would still burn up.

6) Ram pressure. Ram pressure is the pressure you feel just from plowing through a medium and sweeping up its mass, and it matters even when the gas is far too thin for ordinary aerodynamics to apply. The photosphere might be, as astronomers say, a red-hot vacuum, but the Sundiver is moving through it at six hundred times the speed of a rifle bullet, and ram pressure is proportional to gas density and the square of velocity. Sundiver is going to get blown to bits by the rushing gas, and even if it doesn’t, by the time it reaches altitude zero, it’s going to be experiencing the force of nine Space Shuttle solid rocket boosters across its tiny 3.142-square-meter shield. For a 1,000-kilogram spacecraft, that’s a deceleration of 1,200 gees and a pressure higher than the pressure at the bottom of the Mariana Trench. But at the bottom of the trench, at least that pressure would be coming equally from all directions. In this case, the pressure at the front of the shield would be a thousand atmospheres and the pressure at the back would be very close to zero. Atoms of spacecraft vapor and swept-up hydrogen are going to fly from front to back faster than the jet from a pressure washer, and they’re going to play hell with whatever’s left of the spacecraft.
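For a sense of scale, here’s a back-of-the-envelope sketch of that ram pressure. The density is an assumption of mine (something like the upper photosphere); the speed is the 618 km/s free-fall speed from earlier:

```python
# Rough ram pressure on the shield at zero altitude. The density is an
# assumption of mine (roughly upper-photosphere); the speed is the
# 618 km/s free-fall speed quoted earlier in the post.
def ram_pressure(density: float, speed: float) -> float:
    """Ram pressure: P = rho * v^2 (SI units in, pascals out)."""
    return density * speed ** 2

RHO_PHOTOSPHERE = 2e-4  # kg/m^3 -- assumed, not a measured value
SPEED = 618e3           # m/s

P = ram_pressure(RHO_PHOTOSPHERE, SPEED)
print(f"~{P / 101325:.0f} atmospheres on the front of the shield")
```

Nudge the assumed density and the answer moves proportionally, but anywhere in the photosphere the pressure is crushing.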

Here’s the closest I could come to a pretty picture of what would happen to Sundiver. Why do my thought experiments never have happy endings?


# Sundiving, Part 1

Ever since I saw the bizarre, quirky, and entertaining film Sunshine, I’ve been mildly obsessed with the idea of spacecraft flying very close to the Sun. SPOILER ALERT: the ship in Sunshine flew right into the Sun. We humans haven’t come anywhere near that close. There’s MESSENGER, the recently-deceased spacecraft that gave us our best-yet view of Mercury. And then there’s Helios 2, which came within 43 million kilometers of the Sun, closer than Mercury ever gets. (Helios 2 holds another awesome record: the fastest-moving human artifact. At perihelion, it was going over 70 kilometers per second. If you fired an M-16 at one end of a football field at the moment Helios 2 passed the start line (I’m borrowing XKCD’s awesome metaphor here), the bullet would barely have traveled four and a half feet by the time Helios 2 got to the other end. Also, Helios 2 would be far beyond the finish line by the time your brain even registered that it had crossed the start line.)

There are plans to probe the Sun much closer, though. NASA is currently working on Solar Probe+, which I’m really hoping doesn’t get canned next budget cycle. Solar Probe+ will, after half a dozen Venus gravity assists, pass eight times closer to the Sun than Mercury: 5.9 million kilometers, or 8.5 solar radii. I must point out that the original design, which never got a proper name, was much, much cooler. It looked like this:

(Source.)

and it was going to make one hell of a trip. It was going to fly out to Jupiter, get a reverse gravity assist to kill its angular momentum, and then plunge down within 4.5 solar radii. Here’s what a sunrise would look like if Earth orbited at 4.5 solar radii:

(Rendered, of course, with Celestia.)

See that tiny object in the top-right corner? That’s the Moon, for comparison. Hold me. I’m scared.

Both the original Solar Probe mission and Solar Probe+ had to solve all sorts of brand-new engineering problems. For instance: how do you design a solar panel that can operate hotter than boiling water? How do you pack instruments onto a spacecraft when the shadow of its shield is a cone not much bigger than the shield itself? What the hell do you even build a shield out of, when it has to operate well above 1,000 Kelvin, has to cope with sunlight 3,000 times as intense as what we get on Earth, and has to be mounted on a spacecraft that will, at closest approach, be traveling 291 times faster than a rifle bullet, and will therefore be crashing through solar wind protons and dust grains moving at least as fast as that?

But that’s nothing compared to what I have in mind (to nobody’s surprise). I don’t want to design a probe that can get within 4.5 radii of the Sun. I want a probe that can get closer than one solar radius. I want a space probe that can dive straight into the sun. Not only that, but I want it to be alive and intact when it hits the Sun’s “surface.” This is why NASA will never, ever hire me.

To my surprise, the first problem I have to solve has nothing to do with heat (which will be more than enough to boil a block of iron) or radiation (which will be more than enough to sterilize a cubic meter of that sludge that festers in un-flushed gas-station toilets). The first problem is: How the hell do we get there in the first place?

I’ll refer you to Konstantin Tsiolkovsky (whose proper Russian name is Константи́н Циолко́вский. Why am I telling you that? Because I really like the look of Cyrillic.) Tsiolkovsky is one of those guys who was so far ahead of his time he makes you half believe in time travel. He was imagining rockets and space elevators in the freakin’ late 1800s. Before there were even cars, he was thinking about flying to other planets. And he graced us frail mortals with one of the coolest equations in engineering:

To put it in less mathematical (and far uglier) terms: the mass ratio R of your rocket (that is, its mass when it’s full of fuel divided by its mass when the tanks are all empty) must be equal to the exponential of your desired change in velocity (delta-V) divided by your effective exhaust velocity (v_e, which is a measure of how efficient your rocket is).

Believe it or not, there’s a reason I’m taking this insane roundabout route to my point. In its orbit around the Sun, the Earth travels about 30 kilometers per second. A spacecraft that just barely manages to escape Earth’s gravity well will be traveling very close to zero speed, relative to the Earth, which means it will also be traveling at around 30 kilometers per second. In order to know how big a booster we’ll need to kill 30 kilometers per second (which will let our probe drop straight down into the Sun), we use 30 km/s as our delta-V. But what’s our exhaust velocity?

Consider the awesome Rocketdyne F-1 engine, five of which powered the Saturn V’s first stage

That’s Wernher von Braun standing by the tail end of a Saturn V first stage, with the five amazing and terrifying F-1 engines behind him. This image fills me with childish glee, because I’ve actually stood exactly where von Braun stood without even knowing it. I’ve seen that very booster. I’ve had my picture taken standing by those mighty engine bells. That’s because that first stage is on display (or at least it was last time I checked) at the U.S. Space and Rocket Center in Huntsville, Alabama, which was my favorite place to go on vacation when I was a kid. Suffice to say, those engines are every bit as impressive as they look.

The F-1 engines burned liquid oxygen and ultra-high-grade kerosene (which amuses me). They managed a specific impulse (another measure of efficiency which will be very familiar to my fellow Kerbal Space Program addicts) of 263 seconds, for an effective exhaust velocity of 2.579 km/s. Plugging that into our formula, we get a horrifying number: 112,700. That’s right: if we want to kill our orbital velocity relative to the Sun using Saturn V engines, our rocket is going to have to be over a hundred thousand times heavier when full than when empty. That means that, out of the total mass of our rocket when it reaches interplanetary space, only 0.0009% can be anything other than fuel. For comparison, the Saturn V itself had a mass ratio of somewhere around 25, and as far as rockets go, that’s ridiculously large. 112,700 is just dumb, like giving an RPG character a sword the size of an armor plate off a battleship (I’m looking at you, Final Fantasy…).
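That 112,700 comes straight out of the rocket equation, and it’s a one-liner to check:

```python
import math

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky's rocket equation, solved for the mass ratio:
    R = m_full / m_empty = exp(delta_v / v_e). Any units work,
    as long as delta_v and v_exhaust match."""
    return math.exp(delta_v / v_exhaust)

# Killing Earth's 30 km/s orbital velocity with the F-1's 2.579 km/s exhaust:
print(round(mass_ratio(30, 2.579)))  # ~112,700
```

The exponential is the whole story: double the exhaust velocity and the mass ratio doesn’t halve, it square-roots.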

The problem is that damnable exponent. As we learned in a recent post, as soon as you start putting decent-sized numbers in an exponent, you get ridiculous numbers out the other end.

Lucky for us, there are engines with much higher exhaust velocities. If you have an afternoon to spare, have a wander around Winchell Chung’s unbelievably awesome website Project Rho, which also fills me with childish glee. He’s compiled an amazing compendium of all the facts and equations a lover of science, fiction, and science fiction could ever want. Everything from the exhaust velocities of all the engines physics allows to the number of cubic meters of living space a crewmember needs to stay sane.

According to Project Rho and some of my own research, the NERVA engine (which quite literally produced thrust by passing hydrogen gas over the extremely hot core of a nuclear reactor) managed an exhaust velocity of about 8 km/s in vacuum. (Once again, Kerbal Space Program players will be no stranger to the nuclear rocket’s excellent efficiency and terrible, mosquito-fart thrust.) Putting 8 km/s into the rocket equation, we get a mass ratio of 43. Let’s say our sun-diving spacecraft weighs 1000 kilograms, the miscellaneous equipment weighs 100 kilograms, and for every kilogram of liquid hydrogen, we need a kilogram of fuel tank (that’s a pretty low-ball estimate, surprisingly. I did the math, and now I feel like my brain is frying in my skull. I’ve gotta lay off these side calculations…) Then our spacecraft will mass 46,200 kilograms. That’s surprisingly manageable. Wolfram Alpha tells me you could carry that mass in a 747’s cargo hold. Of course, you have to get that whole mass to Earth escape velocity somehow, which means at least another 92,000 kilograms. Not unmanageable, but pretty out-there.

Besides, there are better options. We could, for instance, use an ion engine. Ion engines are infamous for being absurdly efficient (the one on the Dawn spacecraft that’s currently orbiting Ceres manages 30 km/s exhaust velocity), but having thrusts that make a mosquito fart look like an atom bomb. The thrust of Dawn‘s NSTAR engine is equivalent to the weight of a coin resting on your palm. Thing is, ion engines can keep this thrust up for years at a time. And they have, which Dawn proves (it’s been firing its engine on and off for almost eight years straight). Using an ion engine, we’d need a rocket with a surprisingly sensible mass ratio of 2.72. The NSTAR engine uses xenon as propellant, so let’s say you need 10 kilograms of tank per kilogram of xenon. Even so, we’re only looking at a 1,300 kilogram spacecraft, which is only slightly larger than Dawn itself. So far, Dawn holds the record for the most delta-V expended by any spacecraft engine, at 10 km/s. It’s not too much of a stretch to imagine our sundiver canceling its 30 km/s orbital velocity.

There’s a catch. Remember that mosquito-fart thrust I was talking about? That’s going to give us an acceleration of 70 microns per second per second. My calculus is rusty, so I’ll do the naive thing and just divide 30 km/s by 70 um/s^2. That gives us 13 years. It’s gonna take 13 years for our sundiver to stop. And then it’s still got to fall all the way to the Sun. I’m not that patient.
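Here’s that naive stopping-time estimate in full (naive because the real craft gets lighter, and so accelerates harder, as it burns xenon, so this overestimates a bit):

```python
# Naive stopping time under constant ion-engine acceleration: just
# divide the delta-V by the acceleration. This ignores the spacecraft
# getting lighter as propellant drains, so it's an overestimate.
delta_v = 30e3   # m/s of orbital velocity to cancel
accel = 70e-6    # m/s^2, the mosquito-fart acceleration from above

seconds = delta_v / accel
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1f} years of continuous thrusting")  # ~13.6 years
```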

So why not use the most awesome propulsion system ever designed by human hands? I’m not joking, either. This is, in my opinion, the coolest practical space propulsion concept I’ve ever seen: Project Orion.

If you don’t know, Project Orion was a propulsion system studied in the ’50s and ’60s in the U.S. The propulsion would be provided by nuclear bombs: bombs dropped out the back of the ship and detonated once a second. The weirdest part of all this is that, if you ask me (and many other science nerds), Orion actually falls into the “so crazy it might actually work” category. As Scott Manley said, Project Orion is the only interplanetary propulsion system that meets three vital criteria: 1) It provides a decent amount of thrust. 2) It provides that thrust at a reasonable efficiency. And 3) It is based on technologies we already understand. That last one is very important. Maybe we’ll figure out how to build a fusion reactor someday. But, for better or for worse, we already know how to build a nuclear bomb. Not only that, but we know how to make a nuclear bomb direct its energy preferentially in one direction (since, according to Stanisław Ulam, as quoted by Scott Manley, you need to be able to do that in order to build a hydrogen bomb).

An Orion-powered spacecraft has an effective exhaust velocity of 40 km/s. That means we need a spacecraft with a mass ratio of 2.1. There’s a catch, though: the pusher plate in that diagram has to be at least 20 meters across. So, no matter how large or small our spacecraft, we’re going to have to tow that building-sized nuclear shock absorber with us. Let’s say it masses 2,000,000 kilograms (which was about the mass of a fully-loaded space shuttle). We’re looking at 4,200 metric tons of spacecraft, and we have to get all that to escape velocity first.

But this is the impatient way to kill 30 km/s. This is the way I solve problems in Kerbal Space Program, which is always a good sign that it’s not a practical solution.

Funnily enough, the practical solution is very similar to the trajectory in that comic… Instead of trying to kill 30 kilometers per second, we’re going to reach Earth escape velocity, boost ourselves into an elliptical orbit that makes us arrive slightly ahead of Jupiter in its orbit, and use Jupiter’s deep gravity well to sling us backwards along its orbit. A transfer from 1 AU to Jupiter’s distance (5.2 AU) means we’ll only be going 17 km/s when we get there, and a gravity slingshot like I’ve described allows you to change velocity by up to twice the planet’s orbital speed (and for Jupiter, orbital speed is 13 km/s, so we can have an effective delta-V of up to 26 km/s from Jupiter alone (give or take)). We don’t want that much delta-V, since we only want to cancel our 17 km/s velocity, but we can adjust how much of a kick we get simply by changing how close we come to Jupiter. The important thing is that the kick available is at least 17 km/s, which it is, with room to spare.

So we’re getting 17 km/s for free. (Not really: the energy change is always balanced perfectly between the change in velocity of the spacecraft and the (infinitesimal, but nonzero) change in velocity of the planet, as a result of their mutual gravitation.) To put it better: we’re getting 17 km/s without having to fire our engines. But we do have to fire our engines to get to Jupiter in the first place. If we do a standard Hohmann transfer,

we’ll need a delta-V of 16 km/s. If we use a NERVA engine (which I’m choosing because it’s a sensible middle-ground between the pathetic efficiency of the NSTAR and the a-little-too-much awesomeness of Orion), we can do that using a spacecraft with a mass ratio of 7. If we use Project Rho’s mass for a NERVA engine and assume 10 kilograms of tank per kilogram of hydrogen, we end up with a 17,100-kilogram interplanetary rocket. You could get that into low Earth orbit using either a Saturn V or the much cooler-looking (but, unfortunately, more deadly) Soviet N1. By the time you get to low Earth orbit, you’re already traveling at 7.67 kilometers per second, and to reach escape velocity only takes 3.18 km/s more. The rocket involved in launching 17,000 kilograms’ worth of interplanetary stage plus 3.18 km/s worth of Earth-escape engine is probably going to be among the largest ever constructed, but it’ll probably be no bigger than the Saturn V, the N1, or the Space Shuttle.

But as I said, I’m not a patient man. How long is it going to take to get to the Sun? The time needed to launch and reach escape velocity is negligible. The Hohmann transfer to Jupiter is not, requiring 2 years and 8 months. The fall inwards from Jupiter needs another 2 years and 1 month, for a total of 4 years 9 months. A lot better than the 13 years it was going to take us just to stop from Earth orbit.

And that’s where I’m going to end Part 1. Our Sundiver has launched from Earth on a skyscraper-sized rocket a little bigger than a Saturn V, entered low Earth orbit, boosted to escape velocity with its upper stage, made the transfer to Jupiter, done its swing-by, and fallen the 780 million kilometers to the Sun. As it reaches an altitude of 1 solar radius from the Sun’s surface, it’s traveling at 438 kilometers per second, which is 0.146% of the speed of light and six times faster than Helios 2. Remember how, at the beginning, I said the heat shield and the radiation weren’t the first problem? Well, now that we’re only 1 solar radius above the Sun’s surface, we can no longer ignore them. But I’ll leave that for Part 2.
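That 438 km/s figure is easy to sanity-check with conservation of energy, pretending the probe starts from rest at Jupiter’s distance (it actually keeps a little leftover velocity, so the real number comes out a hair higher):

```python
import math

# Energy-conservation check on the arrival speed: fall from rest at
# Jupiter's orbit down to 2 solar radii from the Sun's center (i.e.,
# 1 solar radius of altitude): v = sqrt(2 * GM * (1/r_final - 1/r_start))
GM_SUN = 1.327e20    # m^3/s^2, the Sun's gravitational parameter
R_SUN = 6.957e8      # m, one solar radius
R_JUPITER = 7.78e11  # m, Jupiter's orbital radius

v = math.sqrt(2 * GM_SUN * (1 / (2 * R_SUN) - 1 / R_JUPITER))
print(f"~{v / 1e3:.0f} km/s")  # ~436 km/s, a whisker under the quoted 438
```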


# The Biology of Dragonfire

In a recent post, I decided that plasma-temperature dragonfire might be feasible, from a physics standpoint. There’s one catch: my solution required antimatter (and quite a bit of it). Antimatter does occur naturally in the human body, though. An average human being contains about 140 grams of potassium, which we need to run important stuff like nerves and heart muscle. The most common isotope of potassium is the stable potassium-39, with a few percent potassium-41 (also stable), and a trace of potassium-40, which is radioactive. (It’s the reason you always hear people talking about radioactive bananas. It also means that oranges, potatoes, and soybeans are radioactive. And cream of tartar is the most radioactive thing in your kitchen, unless you’ve got a smoke detector in there.)

Potassium-40 almost always decays by emitting a beta particle (transforming itself into calcium-40) or by cannibalizing one of its own electrons (producing argon-40). But about one time in 100,000, one of its protons will transform into a neutron, releasing a positron (the antimatter counterpart to the electron) and an electron neutrino. The positron probably won’t make it more than a few atoms before it attracts a stray electron and annihilates, producing a gamma ray. But that doesn’t matter, for our purposes. What matters is that there are natural sources of antimatter.

Unfortunately, potassium-40 is about the worst antimatter source there is. For one thing, its half-life is over a billion years, meaning it doesn’t produce much radiation. And, like I said, of that radiation, only 0.001% is in the form of usable positrons.
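To see just how feeble this is, here’s a quick estimate of the positron output of a whole human body’s potassium, assuming the commonly quoted figure of roughly 140 grams of potassium and the decay numbers above:

```python
import math

# Positron output of one human body's potassium. The ~140 g figure is
# the commonly quoted whole-body potassium content (my assumption here).
AVOGADRO = 6.022e23
K_MOLAR_MASS = 39.1      # g/mol
K40_ABUNDANCE = 1.17e-4  # fraction of natural potassium that is K-40
HALF_LIFE_S = 1.25e9 * 3.156e7   # 1.25 billion years, in seconds
POSITRON_BRANCH = 1e-5           # ~0.001% of decays give a positron

n_k40 = 140.0 / K_MOLAR_MASS * AVOGADRO * K40_ABUNDANCE
decays_per_s = math.log(2) / HALF_LIFE_S * n_k40
positrons_per_s = decays_per_s * POSITRON_BRANCH
print(f"~{decays_per_s:.0f} decays/s, ~{positrons_per_s:.2f} positrons/s")
```

About 4,400 decays per second (the textbook figure for whole-body potassium-40), but only a few hundredths of a positron per second. Hopeless as a fuel source.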

Luckily, modern medicine gives us another option. Nuclear medicine, specifically (which, by the way, is just about the coolest name for a profession). As you may have noticed by the fact that you don’t vomit profusely every time you go outside, human beings are opaque. We can shoot radiation or sound waves through them to see what their insides look like, but that usually only gives us still pictures, and it doesn’t tell us, for instance, which organs are consuming a lot of blood, and therefore might contain tumors. For that, we use positron-emission tomography (PET) scanners. In PET, an ordinary molecule (like glucose) is treated so that it contains a positron-emitting atom (most often fluorine-18, in the case of glucose). The positron annihilates with an electron, and very fancy cameras pick up the two resulting gamma rays. By measuring the angles of these gamma rays and their timing, the machine can decide if they’re just stray gamma rays or if they, in fact, emerged from the annihilation of a positron. Science is cool, innit?

One of the other nuclides used in PET scanning is carbon-11. Carbon-11 is just about perfect, as far as biological sources of antimatter go. It’s carbon, which the body is used to dealing with. It decays almost exclusively by positron emission. It decays into boron, which isn’t a problem for the body. And its half-life is only 20 minutes, which means it’ll produce antimatter quickly.

There’s one major catch, though. Whereas potassium-40 occurs in nature, carbon-11 is artificial, produced by bombarding boron atoms with 5-MeV protons from a particle accelerator. I may, however, have found a way around this. To explain, here’s a picture of a dragon:

No, those aren’t labels for weird cuts of meat. They’re to explain the pictures that follow.

Living things contain a lot of free protons. They’re the major driver of the awesome mechanical protein ATP synthase, which looks like this:

(The Protein DataBank is awesome!)

Sorry. I just really like the way PDB renders its proteins.

Anyway, we know organisms can maintain concentrations of protons. But in order to accelerate a proton, you need a powerful electric field. The first particle accelerators were built around Van de Graaff generators, which can reach millions of volts. Somehow, I doubt a living creature can generate a megavolt.

Actually, you might be surprised. The electric eel (and the other electric fish I’m annoyed my teachers never told me about) produces its prey-stunning shock using cells called electrocytes. These are disk-shaped cells that act a little bit like capacitors. They charge up individually by accumulating concentrations of positive ions, and then they discharge simultaneously. The ions only move a little bit, but there are a lot of ions moving at the same time, which produces a fairly powerful electric current that generates a field that stuns prey. The fact that organisms can produce potential differences large enough to do this makes me hopeful that maybe, just maybe, a dragon could do the same on a nanometer scale, producing small regions of megavolt or gigavolt potential that could accelerate protons to the energies needed to turn boron-bearing molecules into carbon-11-bearing molecules. Here’s how that might work:

There’s going to have to be a specialized system for containing the carbon-11 molecules, transporting them rapidly, and shielding the rest of the body from the positrons that inevitably get loose during transport, but if nature can invent things like electric eels and bacteria with built-in magnetic nano-compasses, I don’t think that’s too big a stretch.

The production of carbon-11 is going to have to happen as-needed, because it’s too radioactive to just keep around. I imagine it’d be part of the dragon’s fight-or-flight reflex. Here’s how I imagine the carbon-11 molecules will be stored:

Note the immediate proximity to a transport duct: when you’ve got a living creature full of radioactive carbon, you want to be able to get that carbon out as soon as you can. Also note the radiation shielding around the nucleus. That would, I imagine, consist of iron nanoparticles. There might also be iron nanoparticles throughout the cytoplasm, to prevent the gamma rays from lost positrons from doing too much tissue damage.

Those positrons are going to have to be stored in bulk once they’re produced, though. This problem is the hardest to solve, and frankly, I feel like my own solution is pretty handwave-y. Nonetheless, here’s what I came up with: a biological Penning trap:

These cells are going to require a lot of brand-new biological machinery: some sort of bio-electromagnet, for one (in order to produce the magnetic component of the Penning trap). For another, cells that can sustain a high electric field indefinitely (for the electric component). Cells that can present positron-producing carbon-11 atoms while simultaneously maintaining a leak-proof capsule and a high vacuum in which to store the positrons. And cells that can concentrate high-mass atoms like lead, because there’s no way to keep all the positrons contained. That’s probably wishful thinking, but hey, nature invented the bombardier beetle and the cordyceps zombie-ant fungus, so maybe it’s not too out there.

The process of actually producing the dragonfire is very simple, by comparison. The dragon vomits water rich in iron or calcium salts (or maybe just vomits blood). The little storage capsules open at the same time, making gaps in their fields that let the positrons stream out. The positrons annihilate with electrons in the fluid (hopefully not too close to the dragon’s own cells; this is another stretch in credibility). The gamma rays produced by the annihilation are scattered and absorbed by the water and the heavy elements in it, and by the time the fluid leaves the mouth, it’s well on its way to plasma temperatures.

This is not, of course, the kind of thing nature tends to do. Evolution is a lazy process. It doesn’t find the best solution overall (because if you wanna talk about dominant strategies, having a built-in particle accelerator is up there with built-in lasers). It just finds the solution that’s better enough than the competitor’s solution to give the critter in question an advantage. So, although nature has jumped the hurdles to create bacteria that can survive radiation thousands of times the dose that kills a human on the spot, and weird things like bombardier beetles, insect-mind-controlling hairworms, and parasites that make snails’ eyestalks look like caterpillars so birds will eat them and spread the parasites, the leap to antimatter storage is probably asking a bit too much, unless we’re talking about some extremely specific evolutionary pressures.

Which is not to say that nature couldn’t produce something almost as awesome as plasma-temperature dragonfire. Let’s return once again to the bombardier beetle. The bombardier beetle has glands that produce a soup of hydrogen peroxide and quinones. Hydrogen peroxide likes to decompose into water and oxygen, which releases a fair bit of heat (which is why it was used as a monopropellant in early spacecraft thrusters). But at the beetle’s body temperature, the decomposition is too slow to matter. When threatened, however, the beetle pumps the dangerous soup into a chamber lined with peroxide-decomposing catalysts, which makes the reaction happen explosively, spraying the predator with a foul mix of steam, hot water, and irritating quinone derivatives. Here’s what that looks like:

So if nature can evolve something like that, is it too much of a stretch to imagine a dragon producing hydrogen-peroxide-laden fluid, mixing it with hydrogen gas, and vomiting it through a special orifice along with some catalyst that ignites the mixture into a superheated steam blowtorch like the end of a rocket nozzle? Well, look at that beetle! Maybe it’s not as far-fetched as it seems…
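A rough energy balance shows why the beetle’s trick works so well. The 25% peroxide concentration is my assumption (real beetle glands are in that ballpark); the decomposition enthalpy and heat capacity are standard values:

```python
# Energy balance for the beetle's peroxide cannon. Assumptions of mine:
# a 25% w/w H2O2 solution and complete decomposition in the reaction
# chamber. Enthalpy and heat capacity are standard textbook values.
H2O2_MOLAR_MASS = 34.0    # g/mol
DECOMP_KJ_PER_MOL = 98.0  # H2O2(l) -> H2O(l) + 1/2 O2(g)
HEAT_CAPACITY = 4.18      # kJ/(kg*K), roughly water's

solution_kg = 1.0
h2o2_mol = solution_kg * 1000 * 0.25 / H2O2_MOLAR_MASS
heat_kj = h2o2_mol * DECOMP_KJ_PER_MOL
delta_t = heat_kj / (HEAT_CAPACITY * solution_kg)
print(f"~{delta_t:.0f} K of heating available")  # well over 100 K
```

That’s enough to take body-temperature fluid well past boiling, with energy left over to flash some of it to steam, which is exactly what the beetle’s spray does.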


# Approaching Infinity

One of the cool (and terrifying) things about math is that it’s almost a trivial task to construct a number which is not only larger than any number a human being will ever be able to use, but is also larger than any number that occurs in the Universe, even if you measure its mass in electron masses or its volume in Planck volumes.

The average human’s mathematical circuits are not that hard to overload. If I give you a deck of one hundred photographs and give you one hour to memorize all of them, you might very well be able to do it, but odds are you’ll miss some details. If I ask you to remember a 500-digit number, unless you’re a savant (like Daniel Tammet, who once recited pi to over 22,000 digits from memory, and who allegedly has a distinct mental image for every integer from 1 to 10,000), you’ll need some sort of fancy technique to do it. When it comes to counting objects, human beings don’t need very many numbers. I am one person. You (the reader) and I are two people having a sort of conversation. When I’m talking to a friend and somebody annoying butts in, that’s three people. If I have three apples, and I can ford a river to get to a tree with four apples, at the cost of dropping the ones I already have, I’ll do it. Numbers like three, five, and seven show up in most of the world’s myths and superstitions. Occasionally, you’ll get up to nine or even eleven, but rarely much farther than that. On a basic hunter-gatherer level, one hundred is a bit excessive. It’s only writing, science, mathematics, and economics that have made a hundred of anything meaningful.

Take the number one trillion. That’s 10^12, or 1,000,000,000,000. (According to the American number scale, anyway.) It’s a big number. Draw a square. Divide it with ten vertical and ten horizontal lines. Divide each of those boxes with ten pairs of lines. Do this four more times (each round of subdividing multiplies the count by 100), and you’ve got a trillion squares. I should, of course, point out that, if you’re working on regular letter-size paper, by the time you get to a trillion boxes, the lines will be so close together that a virus will take up more than one square. Even if you drew your grid in the heart of Asia, where there’s a nice big squarish landmass 3,780 kilometers on an edge (stretching from the coast of China to the Caspian Sea along the east-west axis and from Siberia to the Himalayas on the north-south axis), the squares would be the size of a small closet.

But I already talked about a trillion at length in a previous post. A trillion green peas would just about fit on a football field (for most reasonable definitions of “football”). It’s a lot, but it’s a sensible, comprehensible number.

And a trillion is the largest number you’ll see mentioned frequently in serious astronomy, although there’s also the pleasant-sounding number “ten sextillion.” It sounds like something Lewis Carroll would’ve come up with. Ten sextillion is 10,000,000,000,000,000,000,000. That’s how many stars there are in the visible universe, according to Carl Sagan’s estimation. If you took the heart of Asia from before and divided it into ten sextillion squares, the lines would be separated by less than a hair’s breadth: about 30 microns. Cramped lodgings even for an amoeba.

But with nothing but digits and a few symbols, we can effortlessly construct numbers so massive that there’s no sensible way to describe how massive they are. Consider one trillion again. One trillion is 10^12. That’s the number 10 to the 12th power: 10 multiplied by itself 12 times. Here’s a number that will hurt your head: 10^(10^12). Simple: just take 10, and multiply it by itself one trillion times. I thought I’d be able to actually copy-and-paste that number, but it turns out that, in a 12-point font, I’d need almost 218 million pages (printed both sides). That’s a whole library’s worth of dictionary-sized books, just to hold the digits of a number I described using ten characters a second ago. If you divided the diameter of the observable universe into 10^(10^12) pieces, the distance between them would be 999,999,999,938 orders of magnitude smaller than the Planck length, which is just about the smallest length that makes sense, according to our current physics.

It’s easy as pie to create a scarier number. I’ll do it right now! (10^12)^(10^12). That is, (1,000,000,000,000)^(1,000,000,000,000). Multiply a trillion by itself a trillion times. This is where things not only get horrifying and migraine-inducing, but where they start to get strange: (10^12)^(10^12) isn’t really all that much larger than 10^(10^12). A trillion to the trillionth power is only 10^(12,000,000,000,000), or ten to the twelve trillionth power. That’s because of the way exponents work: the twelve in the first exponent gets multiplied by the trillion in the second exponent. Simple.
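That collapsing-exponents rule — (a^b)^c = a^(b·c) — is easy to sanity-check with a smaller stand-in, since actually materializing a twelve-trillion-digit number is off the table. A quick sketch in Python, which handles big integers exactly:

```python
# The rule behind (10^12)^(10^12) = 10^(12 * 10^12): (a**b)**c == a**(b*c).
# We check it with 10^3 standing in for 10^12, since Python's exact
# big integers can hold 10^3000 comfortably.
assert (10**3)**(10**3) == 10**(3 * 10**3)

# The rule holds for any base, of course:
assert (2**5)**7 == 2**(5 * 7)
```

So a trillion to the trillionth power really does flatten out to ten to the twelve trillionth, exactly as claimed.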

Don’t worry, though. With hardly any work, we can construct a function which will generate numbers as scary as you like with almost no effort.

Let’s say that the function M[1](a,b) is just a * b. Simple multiplication (which is really just adding a to itself b times, or vice versa). Let’s extend the concept by saying that M[2](a,b) is a^b, or a multiplied by itself b times. There’s no reason we can’t define M[3](a,b). It would just be a nested series of M[2] being applied over and over, exactly b times. For example, M[3](2,8) is 2^(2^(2^(2^(2^(2^(2^2)))))). You know you’ve wandered into the weird part of mathematics when you get a headache just from dealing with the damn parentheses…
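The M-notation above is easy to sketch in code. Here’s a minimal recursive version (the name `M` and its argument order are just my own illustration); it only works for tiny inputs, since anything like M[3](2,8) would overflow the universe long before it overflowed the call stack:

```python
def M(n, a, b):
    """M[n](a, b) for b >= 1: level 1 is multiplication; each higher
    level applies the level below it, b times, right-associated."""
    if n == 1:
        return a * b                      # M[1] is plain multiplication
    if b == 1:
        return a                          # a single copy of a, nothing to nest
    return M(n - 1, a, M(n, a, b - 1))    # peel off one application

print(M(1, 2, 8))  # 2 * 8 → 16
print(M(2, 2, 8))  # 2^8 → 256
print(M(3, 2, 4))  # 2^(2^(2^2)) → 65536
```

Note that M(3, 2, 4) is already 65,536; the article’s M[3](2,8) just adds four more floors to that tower, and that’s enough to make it unwritable.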

There is no way to write M[3](2,8) out. As a matter of fact, there’s no way to write out its number of digits (and the digit count — which is just the number’s base-ten logarithm, give or take — is itself a tower of exponents too tall to evaluate). Here’s the closest I can get to writing M[3](2,8). Prepare for an absolutely horrific number-salad. I PROMISE this is the only time in this article that I’m going to do this:

[HUGE NUMBER GOES HERE]

If you’re mad at me right now, I understand. But if it makes you feel better, trying to work out that formatting has both given me a headache and made me physically nauseous. And I still screwed it up.

Nope. Couldn’t do it. It was just too damn hard to look at. Suffice to say, most of the article would have been digits, if I’d pasted that ugly bastard in.

But even the M[3](2,8) thing is unwieldy. We need better notation. Thankfully, Donald Knuth (who, at the age of nineteen, created an entire system of measurement based partly on the thickness of Mad magazine, issue 26) provided a more elegant solution.

(I should, at this point, mention that the enormous number I copy-and-pasted above was so big that it was making the WordPress text editor lag, so I had to copy it into a Notepad file so that I can continue writing. You’ll never see it, but up above, I’ve written “[HUGE NUMBER GOES HERE]”. I have a headache.)

Knuth’s up-arrow notation is just like my M-notation, but it’s (slightly) easier on the eyes. No easier on the brain, though. In Knuth notation, a ^ b is replaced by a↑b, that is, a multiplied by itself b times. For example: 2↑8 is 256.

Things get scary very, very fast. a↑↑b is defined as a↑(a↑(…↑a)), where a is repeated a total of b times, with all the associated symbols. That’s too damn abstract for me, so let’s compute 2↑↑3. That’s 2↑2↑2, or 2^(2^(2)), which is only 16. 2↑↑4 is 2^(2^(2^(2))), or 65,536.

We can go further, although my headache is telling me to stop. a↑↑↑b is just a↑↑a↑↑a…↑↑a, with a repeated b times. 3↑↑↑2 is 3↑↑3 or 3^(3^(3)), which is about seven and a half trillion. 3↑↑↑3, on the other hand, is a number so large that I can’t express it in decimal notation. Hell, I can’t even express it using exponentials or up-arrows. It’s equal to 3↑↑3↑↑3, which is equal to 3↑↑(3^(3^3)), which is equal to 3↑3↑…↑3, where there are over seven trillion threes. That means a tower of exponents seven trillion threes tall. My word processor tells me that a superscript is 0.58 times the size of a regular letter, and by the time we get to the 7.6 trillionth three, it’ll be unfathomably smaller than a proton.
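Those tower numbers are easy to verify directly, since the towers are right-associated (evaluated top-down). A quick check in Python, whose exact big integers make this an honest computation:

```python
# Right-associative exponent towers, evaluated innermost (topmost) first:
assert 2**(2**2) == 16              # 2↑↑3
assert 2**(2**(2**2)) == 65536      # 2↑↑4

# 3↑↑3 = 3^(3^3) = 3^27 -- the "about seven and a half trillion"
# sitting at the base of 3↑↑↑3:
print(3**(3**3))  # → 7625597484987
```

One floor higher — a tower of 3s that many levels tall — and the number leaves the computable world entirely.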

That’s the level we’re at. Even trying to describe the typography of this ridiculous number is impossible.

What about a↑↑↑↑b? Well, 3↑↑↑↑2 is 3↑↑↑3, which we just saw was the most horrible thing in the world. 3↑↑↑↑3 is 3↑↑↑(3↑↑↑3). That is to say, 3↑↑↑3↑↑↑…↑↑↑3, with 3↑↑↑3 threes.

But I’m not letting you get off that easy. Let’s say that a↑[c]b means a↑…↑b, with c arrows in total. So a^b would be a↑[1]b. 3↑↑↑3 would be 3↑[3]3.

You know what I’m going to do. I can’t stop myself. If I knew any Medusas, I’d be a statue by now, because I wouldn’t be able to resist sneaking a peek.

There’s no turning back. It’s too late for you now. Too late for me.

Consider the number 3↑[3↑↑↑3]3. That’s 3↑…↑3, with 3↑↑↑3 arrows — a tower of seven trillion threes’ worth of arrows. Think of the endless eternities of parentheses and arrows and evaluations, and that wouldn’t even get you close to the number of digits in this horror. Let’s call this horror X.

Now consider 3↑[X]3. Call it Y.

I imagine that my punishment in Number Hell will be evaluating 3↑…↑3, with Y arrows. And that’s still incomparably smaller than 3↑[3↑…↑3 with Y arrows]3.

I’m not exaggerating for dramatic effect: I am genuinely smelling rotten eggs right now. I think I might have given myself a stroke. But before the aphasia sets in, let me introduce you to the Devil Incarnate: the Ackermann Function.

The Ackermann Function is the kind of thing they must’ve tortured Winston Smith with in 1984. It’s the reason some mathematicians walk around with that horrified thousand-yard stare. It’s an honest-to-goodness nightmare.

The Ackermann function is dead simple. You write it A(a,b), for non-negative integers a and b. Here’s how you evaluate it.

If a = 0, then A(a,b) = b+1

If a > 0 and b = 0, then A(a,b) = A(a-1,1)

If a > 0 and b > 0, then A(a,b) = A(a-1,A(a,b-1)).

Simple rules. Not simple to apply. For instance, A(2,2) = A(1,A(2,1)) = A(1,A(1,A(2,0))) = … a horrifying mess of parentheses that ultimately gets you to 7. At least it’s a sensible number. So is A(3,2). It’s 29. A(4,2), on the other hand, is over 19 thousand digits long. When I typed Ackermann(4,4) into WolframAlpha, it actually told me “(too large to represent).” It’s always nice when a computation engine built by one of the masters of symbolic computation says “Hell with this. I give up.”
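The three rules transcribe directly into a naive recursive sketch. Naive recursion is hopeless past a = 3, so for the A(4,2) digit count below I lean on the known closed form A(4,n) = 2↑↑(n+3) − 3 (a standard fact about this function, not something computed here), which makes A(4,2) = 2^65536 − 3:

```python
def A(a, b):
    """The Ackermann function, straight from the three rules."""
    if a == 0:
        return b + 1
    if b == 0:
        return A(a - 1, 1)
    return A(a - 1, A(a, b - 1))

print(A(2, 2))  # → 7
print(A(3, 2))  # → 29

# A(4, 2) = 2^65536 - 3 is far beyond naive recursion, but the closed
# form lets us count its digits directly:
print(len(str(2**65536 - 3)))  # → 19729
```

That 19,729 is the “over 19 thousand digits” from above — and it’s the last Ackermann value anyone will ever print.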

You know how evil I am. You know what I’m going to do. You know how psychotic and depraved I’ve become after looking at unfathomable numbers for an hour.

The Number of the Devil isn’t 666. It’s Ackermann(666↑↑↑↑↑↑666,666↑↑↑↑↑↑666).

Sleep well. I know I won’t.
