If you’d prefer to listen to this as an audio essay at The Natural Curiosity Project, please click here.
I want to start this new year with a question.
Do the actions of a single individual matter?
If you subscribe to the fundamental tenets of negativism, pessimism, resignation, and snark, the answer to that question is decidedly ‘no.’ But let’s reconsider that.
I’m Steve Shepard with the NCP. Welcome. As a university undergrad, I studied two completely unrelated fields, Spanish and marine biology. Both were equally important to me, and both appealed to me as possible career vectors, in spite of the fact that I had no idea what I wanted to do with the rest of my life. I graduated, wandered a bit, became a certified SCUBA diver, then a certified SCUBA instructor, then an owner in a diving business in the San Francisco Bay Area.
After five years of full-time diving I switched gears, married Sabine, joined the telecom industry, and had kids. I stayed with the telephone company for eleven years, then left California to join a Vermont-based consulting firm for ten years. I left them in 2000 to start my own company, where I wrote books, taught technology programs, delivered keynotes, wrote, directed and produced audio and video programs, and traveled nonstop to more than 100 countries over the course of 25 years, at which point, with the urging of the COVID lockdown, I decided to retire, having accumulated four million miles on United and a million points each with Marriott and Hilton. It was time.
To restate the obvious, I studied Spanish and biology in school, but spent my career in the world of bit-weenies and propellerheads. The funny thing is that my Spanish studies served me extremely well during my career, making it possible for me to write and teach technical training programs and deliver keynotes all over the world in Spanish. And marine biology? I never stopped diving, and I stayed connected to my biology roots. In my mind, that world is never far away.
I continue to write; I just published my 106th book, and my podcast, The Natural Curiosity Project, just hit 300 episodes.
My newest novel, “The Sound of Life,” is a sequel to my first novel, “Inca Gold, Spanish Blood,” which was released in 2016. That first book centered around the search for a priceless treasure, and bounced back and forth between the 16th century and modern day. “The Sound of Life” builds on that, but as I worked the story arc, my passion for marine biology began to heat up again, and combined with my work as a wildlife sound recordist and my interest in bioacoustics, the story hit me like a lightning bolt. I’m proud of the book, and I think you’ll enjoy it. But, you’d be right to ask me a question right about now: What does all this have to do with the question I asked at the beginning of this essay: Do the actions of a single individual matter? Yes—they absolutely do. From my own personal experience, they matter immensely.
We’ve all had heroes in our lives. I’m not talking about the kind that appeared between the covers of DC and Marvel comics when we were kids, or who have made it onto the big screen today. I’m talking about real people, who did real things, and who, in the process, created real change by flipping the status quo on its complacent head. I’m talking about people like Rosa Parks. Jane Goodall. David Attenborough. Greta Thunberg. Barack Obama. Want more? Sure. Jacques Cousteau. James Cameron. Malala Yousafzai. Taylor Swift. The unknown man who stood in front of the tank in Tiananmen Square in 1989 with a grocery bag in his arms. The person who tore the first stone out of the Berlin Wall that same year.
These are the individuals who come to mind for me, people who inspired me to be more, to do more, to demand more, to think beyond the confines of my own mind, to be there for others, to be less selfish, to believe that every kind act is repaid a hundred times over. Each of those individuals changed my own life as well as the lives of thousands of other people. But let’s be clear about one thing. None of them—not a single one—created change by telling us what to do. They created change by showing us what to do. They shared their beliefs and motivated change through their own actions, not by waving signs and hanging banners. They led the charge. They led. They were leaders. They demonstrated what leadership is supposed to look like.
Here’s a quote for you that I love:
“Recognize that every out-front maneuver you make is going to be lonely and a little bit frightening. If you’re entirely comfortable, you’re not far enough ahead to do any good. That warm sense of everything going well is nothing more than the body temperature at the center of the herd.”
You can’t lead from the center of the herd, which is why I learned the power of righteous indignation, not by looking it up in the dictionary or reading about it, but through the bravery of Rosa Parks and other people like her. It’s why I became an ardent conservationist and environmentalist, not because it was trendy, but because Jacques Cousteau, Jane Goodall, David Attenborough, Greta Thunberg, and many others like them showed me what we stood to lose if I and others like me didn’t take a stand. I became a wildlife sound recordist and a passionate advocate for wild places and the creatures that live there because Bernie Krause, Gordon Hempton, Melissa Pons and other recordists showed me how impoverished the planet would be if the voices of the natural world were to be silenced forever. I don’t ever want my grandchildren to ask me what a magpie sounded like. Or a tree frog. Or a humpback whale. The past tense has no place here.
And writing? My writing gets better every time I read a book. I learned long ago that skilled authors wield a power that I call lexemancy, the singular magic of language. In a well-written book, authors turn lead into gold, transmuting ideas into words, and words into breathtaking, inspiring experiences for those who read them.
Each of these people, these individuals, has inspired others. When they started, no one knew who Rosa Parks or Greta Thunberg was. Cousteau was an officer in the French Navy with a passion for the sea; Attenborough was a radio broadcaster; and Jane Goodall was a wet-behind-the-ears anthropology novice who took a chance and followed a passion. And just look what their individual passions and advocacy have accomplished—individuals all.
So: Do the actions of a single individual matter? Yes, they do. In fact, the actions of a single individual are the ONLY thing that matters.
Do your actions matter? More than you will ever know.
If you would like to listen to this as an audio essay, complete with the calls of marine mammals, please go here.
In the inaugural issue of National Geographic in 1888, Gardiner Hubbard wrote, “When we embark on the great ocean of discovery, the horizon of the unknown advances with us and surrounds us wherever we go. The more we know, the greater we find is our ignorance.” Hubbard was the founder and first president of the National Geographic Society, the first president of the Bell Telephone Company (which later became AT&T), and a founder of the prestigious journal Science. He knew what he was talking about.
104 years later, NASA scientist Chris McKay had this to say: “If some alien called me up and said, ‘Hello, this is Alpha, and we’d like to know what kind of life you have,’ I’d say, water-based. Earth organisms figure out how to make do without almost anything else. The single non-negotiable thing life requires is water.”
Several years ago I met Alaska-based sound recordist and author Hank Lentfer. During one of several conversations, he asked me to imagine that all the knowledge we each have is ‘inside of this circle,’ which he drew in the air. “Everything outside the circle is what we don’t know,” he told me. “But as we learn, more knowledge is added and the circle gets bigger. Notice, though, that as the circle grows, so too does its circumference, which touches more of what we don’t know.” In other words, the more we know, the more we don’t. As Hubbard said in 1888, “The more we know, the greater we find is our ignorance.”
I love science, and one of the things I love most about it is how it constantly refreshes itself in terms of what’s new and fresh and exciting and worthy of exploration as a way to add to what’s inside Hank’s circle. Think about it: Over the last twenty years, scientists have mapped the entire human genome; developed CRISPR/Cas9 gene editing; created synthetic cells and DNA; discovered the Higgs boson, gravitational waves, and water on Mars; developed cures or near-cures for HIV, some forms of cancer, and Hepatitis C; and created functional AI and reusable rockets. And those are just the things I chose to include.
One of the themes in my new novel, “The Sound of Life,” is interspecies communication—not in a Doctor Dolittle kind of way—that’s silly—but in a more fundamental way, using protocols that involve far more listening on our part than speaking.
The ability to communicate with other species has long been a dream among scientists, which is why I’m beyond excited by the fact that we are close to engaging in a form of two-way communication with other species. So, I want to tell you a bit about where we are, how we got here, and why 2026 is widely believed to be the year we make contact, to steal a line from Arthur C. Clarke.
Some listeners may remember when chimps, bonobos and gorillas were taught American Sign Language with varying degrees of success. Koko the gorilla, for example, who was featured on the cover of National Geographic, learned to use dozens of American Sign Language signs, but the degree to which she actually understood what she was saying remains a hotly debated topic, more than 50 years later.
But today is a very different story. Until recently, research in interspecies communication was based on trying to teach human language to non-human species. Current efforts turn that model on its head: researchers are using Artificial Intelligence to meet animals on their own terms—making sense of their natural communications rather than forcing them to use ours. Said another way, it’s time we learned to shut up and listen for a change. And that’s what researchers are doing.
There have been significant breakthroughs in the last few years, many of them the result of widely available AI that can be trained to search for patterns in non-human communications. Now, before I go any further with this, I should go on record. Anybody who’s a regular listener to The Natural Curiosity Project Podcast knows that I don’t take AI with a grain of salt—I take it with a metric ton of it. As a technologist, I believe that AI is being given far more credit than it deserves. I’m not saying it won’t get there—far from it—but I think humans should take a collective breath here.
I’ve also gone on record many times with the observation that ‘AI’ as an abbreviation has been assigned to the wrong words. Instead of being associated with Artificial and Intelligence, I think AI should stand for Accelerated Insight, because that’s the deliverable that it makes available to us when we use it properly. It makes long, slow, complex, and let’s face it, boring jobs, usually jobs that involve searching for patterns in a morass of data, enormously faster. Here’s an example that I’ve used many times. A dermatologist who specializes in skin cancers has thousands of photographs of malignant skin lesions. She knows that the various forms of skin cancer can differ in terms of growth rate, shape, color, topology, texture, surface characteristics, and a host of other identifiers. She wants to look at these lesions collectively to find patterns that might link causation to disease. Now: she has a choice. She can sit down at her desk with a massive stack of photographs and a notepad, and months later she may have identified repeating patterns. Or she can ask an AI instance to do it for her and get the same results in five minutes. It’s all about speed.
That’s a perfect application for AI, because it takes advantage of AI’s ability to quickly and accurately identify patterns hidden within a chaos of data. And that’s why research into interspecies communication today is increasingly turning to AI as a powerful tool—with many promising outcomes, and a few spellbinding surprises.
Let’s start with the discovery of the “Sperm Whale Phonetic Alphabet.” Project CETI, the Cetacean Translation Initiative, has produced what bioacoustics researchers are calling the “Rosetta Stone” for marine interspecies communication. Here’s what we know. Researchers have identified structural elements in the sounds generated by sperm whales that are similar to human vowels, like a, e, i, o, and u, and diphthongs, like the ‘ow’ in the word sound, the ‘oy’ in noise, and the ‘oo’ in tour. They’ve also identified a characteristic called “rubato,” which is measurable variation in tempo that conveys meaning, and “ornamentation,” the addition of extra clicks. Together, these suggest that sperm whales may have what’s called a combinatorial grammar that could transmit enormous amounts of information. Combinatorial grammar: let me explain. “He had a bad day” is a perfectly acceptable statement. “He had a no-good, horrible, terrible, very bad day” is an example of combinatorial grammar. It adds richness and nuance to whatever’s being said.
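For my fellow propellerheads, here’s a tiny Python sketch of why combinatorial systems are so powerful. The inventories below are invented for illustration (they’re stand-ins, not Project CETI’s actual figures), but the arithmetic is the point: a few dozen building blocks multiply into hundreds of distinct signals.

```python
from itertools import product

# Hypothetical inventories for each acoustic building block.
# These counts are illustrative only, not Project CETI's figures.
rhythms = [f"rhythm_{i}" for i in range(18)]
tempos = [f"tempo_{i}" for i in range(5)]
rubato = ["none", "stretch", "squeeze"]
ornament = ["plain", "ornamented"]

# Every combination of the four features is a distinct signal --
# that multiplication is the essence of a combinatorial system.
codas = list(product(rhythms, tempos, rubato, ornament))
print(len(codas))  # 18 * 5 * 3 * 2 = 540 distinct codas from 28 building blocks
```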
This is the first time researchers have ever found a non-human communication system that relies on the same kinds of phonetic building blocks that human speech relies on. This is a very big deal.
So: Using machine learning, scientists have analyzed almost 9,000 codas, which are uniquely identifiable click sequences, and in the process have discovered that sperm whale communication is enormously more complicated and nuanced than we previously believed.
So, how did they do it? Well, in the same way that ChatGPT is trained on huge databases of human text, new models are being trained on the sounds of the natural world. For example, NatureLM-audio is a system that was launched by the Earth Species Project in 2025. It’s the first large audio-language foundation model specifically built for bioacoustics. Not only can it identify unique species, it can also determine the life stage they’re in and the emotional state of the animal when it was recorded—for example, whether the creature was stressed, playing, relaxed, and so on. And it can do this across thousands of species, simultaneously.
Then there’s WhAM, the Whale Acoustics Model. This is a transformer-based model that can generate synthetic, contextually accurate whale codas, which could someday lead to two-way real-time engagement with whales.
I should probably explain what a transformer-based model is, because it’s important. In bioacoustics, a transformer-based model uses a technique called the self-attention mechanism, which comes from natural language processing, to analyze animal sounds. The self-attention mechanism asks a question: To truly understand the context and meaning of this particular word (or in this case, sound), what do I need to know about the other words being used by the speaker at the same time? This allows the system to capture long-range patterns in audio spectrograms, making possible highly accurate species identification, sound event detection (like bird calls or bat echolocation), and other identifiers, especially when the data to be analyzed is limited. Models like the Audio Spectrogram Transformer and custom systems like animal2vec convert captured audio into small segments called patches, then process them to identify patterns.
In bioacoustics—such as studying the meaning and context of whale song, or in the case of animal2vec, the vocalizations of meerkats—the raw audio is converted into visual representations called spectrograms, which display the changing frequency of the recording against elapsed time. These are then broken into smaller patches. Each patch then gets a unique “position” tag so the model knows the order of the sounds in the sequence. This is called Positional Encoding.
Next, the system unleashes the Self-Attention Mechanism, which allows the model to weigh the importance of different sound patches relative to each other, creating a better understanding of context and relationships across long audio segments.
The next step is Feature Extraction. The model learns deep, complex acoustic features, such as nuanced meaning in bird songs or bat calls, which can be tagged to different species or behaviors.
Finally, the model classifies the sounds, in the process identifying unique species, or detecting specific identifiable events, such as a predator call.
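For those who want to see the machinery, here’s a minimal, heavily simplified sketch in Python (using PyTorch) of the patch, position, attention, and classification flow just described. It is not the actual Audio Spectrogram Transformer or animal2vec code; every dimension, name, and label count here is an assumption chosen for illustration.

```python
import torch
import torch.nn as nn

class TinyBioacousticTransformer(nn.Module):
    """Illustrative sketch of the steps above; not a real production model."""

    def __init__(self, patch_size=16, dim=128, n_classes=10, max_patches=256):
        super().__init__()
        # Step 1 -- patching: carve the spectrogram into small squares and
        # project each one to an embedding vector.
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch_size, stride=patch_size)
        # Step 2 -- positional encoding: a learned "position tag" per patch,
        # so the model knows the order of the sounds in the sequence.
        self.pos = nn.Parameter(torch.randn(1, max_patches, dim))
        # Step 3 -- self-attention: each patch is weighed against every other
        # patch, capturing context across long audio segments.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Steps 4 and 5 -- feature extraction feeds a classifier that maps
        # the learned features to species or behavior labels.
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, spectrogram):            # (batch, 1, freq, time)
        x = self.patchify(spectrogram)         # (batch, dim, rows, cols)
        x = x.flatten(2).transpose(1, 2)       # (batch, n_patches, dim)
        x = x + self.pos[:, : x.size(1)]       # attach position information
        x = self.encoder(x)                    # self-attention over patches
        return self.classifier(x.mean(dim=1))  # pool features, then classify

# One fake 128x128 spectrogram in, logits over 10 hypothetical labels out.
logits = TinyBioacousticTransformer()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 10])
```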
The implications of all this are significant. First, contextual understanding is created because the system captures long-range dependencies among the audio patches, which are crucial for understanding complex animal vocalizations. Second, the performance of these systems is better than any other model, including Convolutional Neural Networks, which are considered to be at the forefront of AI learning. Third, it works well in what is called a Few-Shot Learning Environment, which is an environment where the amount of labeled data that can be analyzed is limited. And finally, because of the use of the Self-Attention Mechanism, the system offers a high degree of interpretability.
These are tools that have utility well beyond the moonshot project of interspecies communication. They can be used to monitor wildlife populations through sound alone; they can detect and identify bird and bat calls; and they can even be used to identify bee species from the frequency and tonality of the buzz their wings create when they fly by a microphone. Remember: Karl von Frisch won a Nobel Prize for deciphering the complex dance of the honeybee and how that dance conveys complex, species-specific information to other members of the hive.
All of these are important in the world of ecology and habitat monitoring and protection.
Here’s another fascinating example that has gotten a lot of attention. Recent field studies have shown that elephants and carrion crows each use vocal behaviors we once thought were uniquely human. In the case of elephants, researchers have used machine learning tools to show that wild African elephants use unique vocal labels to address each other, a form of naming. Unlike dolphins, who mimic other dolphins’ whistles, elephants appear to use arbitrary names for other elephants—a sign of advanced abstract thought.
In the case of crows, researchers using miniature biologgers—essentially tiny microphones and recorders about the size of a pencil eraser that are attached to wild animals—have discovered that carrion crows have a secret “low-volume” vocabulary that they use for intimate family communication, very different from the loud, raucous sounds that are used for territory protection and alarm calls.
Finally, we’re seeing breakthroughs in animal welfare practices in the farming and ranching industries because of bioacoustics. In the poultry business, for example, a “chicken translator” is now in use that can identify specific distress calls, allowing farmers to locate sick or stressed birds among thousands and significantly improving the welfare of the flock.
Before I continue with this discussion, let’s talk about why all this is happening now, in the final days of 2025, and why scientists believe we may be on the verge of a major breakthrough in interspecies communications. It has to do with three factors.
First, we have Big Data, both as a theory and as a hard practice. The idea that patterns can be found in massive volumes of data has been around for a while, but we’re just now developing reliable tools that can predictably find those patterns and make sense of them. Initiatives like the Earth Species Project are aggregating millions of hours of animal audio into a single database, which can then be analyzed.
Second, we have data aggregation techniques and mechanisms that allow for data to be collected around the clock, regardless of climate or weather. The tiny biologgers I mentioned earlier are examples, as are weatherproof field recorders that can record for weeks on a single memory card and set of batteries.
Finally, we have one of the basic characteristics of AI, which is unsupervised learning—the ability to find patterns in vast stores of data without being told what to look for.
I’m going to add a fourth item to this list, which is growing professional recognition that sound is as good an indicator of ecosystem details as sight. I may not be able to see that whale in the ocean, but I can hear it, which means it’s there.
Okay, let’s move on and talk about the nitty-gritty: how do those sperm whale vowel sounds that I described earlier actually work? And to make sure you know what I’m talking about, here’s what they sound like. This recording comes from Mark Johnson, and it can be found at the “Discovery of Sound in the Sea” Web site.
Amazing, right? Some scientists say it’s the loudest natural sound in the ocean. Anyway, to answer this question about how the sperm whale vowel sounds work, we have to stop thinking about sound as a “message” and start looking at its internal architecture. Here’s what I mean. For decades, researchers believed that the clicks made by sperm whales, the codas, were like Morse Code: a simple sequence of on/off pulses, kind of like a binary data transmission. However, in 2024 and 2025, Project CETI discovered that the clicks made by sperm whales have a sophisticated internal structure that functions exactly the way vowels do in human speech.
In the same way that human speech is made up of small units of sound called phonemes, whale codas are characterized by four specific “acoustic dimensions.” By analyzing thousands of hours of recorded whale song, researchers using AI determined that whales mix and match these dimensions to create thousands of unique signals. The four dimensions are rhythm, which is the basic pattern of the clicks; tempo, the overall speed of the coda; rubato, the subtle stretching or squeezing of time between clicks; and ornamentation, short “extra” clicks added at the end of a sequence, similar to a suffix or punctuation mark.
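Here’s a hedged sketch, in Python, of how two of those dimensions might be measured from a coda’s raw click times. This is my own toy formulation, not Project CETI’s: rhythm as duration-normalized inter-click intervals, tempo as total duration. (Rubato and ornamentation would be measured against a reference coda: deviations in the intervals, and trailing extra clicks, respectively.)

```python
from dataclasses import dataclass

@dataclass
class Coda:
    click_times: list[float]  # seconds at which each click occurs (hypothetical)

    def intervals(self):
        """Inter-click intervals: the raw material for rhythm and rubato."""
        t = self.click_times
        return [b - a for a, b in zip(t, t[1:])]

    def tempo(self):
        """Overall speed of the coda: its total duration in seconds."""
        return self.click_times[-1] - self.click_times[0]

    def rhythm(self):
        """Pattern of the clicks: intervals normalized by total duration,
        so the same pattern at different speeds yields the same rhythm."""
        duration = self.tempo()
        return [round(i / duration, 2) for i in self.intervals()]

# An invented 5-click coda; real ones come from hydrophone recordings.
coda = Coda(click_times=[0.00, 0.20, 0.40, 0.55, 0.70])
print(coda.rhythm(), coda.tempo())  # [0.29, 0.29, 0.21, 0.21] 0.7
```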
That discovery was a game-changer, and it really knocked the researchers back on their heels. But the most important discovery, which happened in late 2024, was the identification of formants in whale speech. In human language, formants are the specific resonant frequencies created in the throat and mouth that result in the A, E and O vowel sounds. Well, researchers discovered that whales use their “phonic lips,” which are vocal structures in their nose, to modulate the frequency of their clicks in the same way that humans do with their lips and mouth. For example, the a-vowel is a click with a specific resonant frequency peak. The i-vowel is a click with two distinct frequency peaks. Whales can even “slide” the frequency in the middle of a click to create a rising or falling sound similar to the “oi” in “noise” or the “ou” in “trout.” These are called diphthongs.
So, how does this actually work? It turns out that whale vocalization is based on what linguists call Source-Filter Theory. The similarities to human speech are eerie. In human speech, air passes through the vocal cords to create sound; in whales, it passes through what are called phonic lips. In human speech, variation is accomplished by changing the shape of the mouth and tongue; in sperm whales, it happens using the spermaceti organ and nasal sacs.
In humans, the result is recognizably unique vowels, like A, E, I, O, U; in whales, the result is a variety of spectral patterns. And in terms of complexity, there isn’t much difference between the two. Humans generate thousands of words; whales generate thousands of codas.
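If you want to see what “counting resonant frequency peaks” might look like in practice, here’s a toy Python example using numpy and scipy. The signal is synthetic, and the one-peak/two-peak rule simply mirrors the a-vowel/i-vowel description above; real click analysis is far messier.

```python
import numpy as np
from scipy.signal import find_peaks

def click_vowel(click, sample_rate):
    """Toy classifier: count resonant peaks in a click's spectrum.
    One dominant peak -> 'a-vowel', two -> 'i-vowel' (per the scheme above)."""
    spectrum = np.abs(np.fft.rfft(click))
    freqs = np.fft.rfftfreq(len(click), d=1 / sample_rate)
    # Keep only prominent peaks, to ignore noise elsewhere in the spectrum.
    peaks, _ = find_peaks(spectrum, prominence=spectrum.max() * 0.5)
    return ("a-vowel" if len(peaks) == 1 else "i-vowel"), freqs[peaks]

# Synthetic 'click' with two resonances, at 3 kHz and 9 kHz (invented values).
sr = 48_000
t = np.arange(0, 0.01, 1 / sr)
click = np.sin(2 * np.pi * 3000 * t) + np.sin(2 * np.pi * 9000 * t)
label, peak_freqs = click_vowel(click, sr)
print(label, peak_freqs)  # i-vowel [3000. 9000.]
```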
So … the ultimate question. Why do we care? Why does this research matter? Why is it important? Several reasons.
First, before these most recent discoveries about the complexity of animal communication, scientists believed that animal “language”—and I use that word carefully—was without nuance. In other words, one sound meant one thing, like ‘food’ or ‘danger’ or ‘come to me.’ But the discovery of these so-called “whale vowels” now makes us believe that their language is far more complex and is in fact combinatorial—they aren’t just making a sound; they’re “building” a meaningful signal out of smaller parts, what we would call phonemes. This ability is a prerequisite for true language, because it allows for the creation of an almost infinite variety of meanings from a limited set of sounds.
So: one of the requirements for true communication is the ability to anticipate what the other person is going to say before they say it. This is as true for humans as it is for other species. So, to predict what a whale is going to say next, researchers use a specialized Large Language Model called WhaleLM. It’s the equivalent of ChatGPT for the ocean: In the same way that ChatGPT uses the context of previous words in a conversation to predict what the next word will be in a sentence, WhaleLM predicts the next coda or whale song based on the “conversation history” of the pod of whales to which the individual belongs. Let me explain how it works.
Large Language Models, the AI systems trained on massive stores of text, rely on a process called ‘tokenization.’ A token is a basic unit of input, like a word or word fragment, or in the case of sperm whales, the clicks they make. Since whale clicks sound like a continuous stream of broadband noise to humans, researchers use AI to “tokenize” the whale audio into unique, recognizable pieces. The difference, of course, is that they don’t feed text into the LLM, because text isn’t relevant for whales. Instead, they feed it the acoustic dimensions we talked about earlier: Rhythm, Tempo, Rubato, and Ornamentation.
Next comes the creation of a vocabulary. From analysis of the four acoustic dimensions, the AI identifies specific sound sequences, which are then treated as the vocabulary of the pod that uttered the sounds in the first place.
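As a cartoon version of that vocabulary-building step, here’s how four already-binned acoustic dimensions could be packed into a single integer token, the way characters or word fragments map to IDs in a text LLM. The bin counts are invented for illustration.

```python
def tokenize_coda(rhythm_id, tempo_id, rubato_id, ornament_id,
                  n_tempo=5, n_rubato=3, n_ornament=2):
    """Map one coda's four (already-binned) acoustic dimensions to a single
    integer token ID. Bin counts here are hypothetical, not CETI's."""
    token = rhythm_id
    for value, size in [(tempo_id, n_tempo), (rubato_id, n_rubato),
                        (ornament_id, n_ornament)]:
        # Mixed-radix packing: each dimension gets its own "digit".
        token = token * size + value
    return token

# A coda with rhythm 7, tempo 2, no rubato, ornamented -> one vocabulary entry.
print(tokenize_coda(7, 2, 0, 1))  # 7*30 + 2*6 + 0*2 + 1 = 223
```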
Next comes the creation of context, or meaning. WhaleLM made a critical discovery in late 2024, which was the identification of what are called long-range dependencies. These dependencies are described in what researchers call the “Eight Coda Rule.” Scientists determined conclusively that a whale’s next call is heavily influenced by the previous eight codas in the conversation, typically about 30 seconds of conversation time.
WhaleLM also has the benefit of multi-whale awareness. It doesn’t track the “speech” of a single whale; it tracks and analyzes the sounds uttered by all whales in the pod and the extent to which they take turns vocalizing. If Whale A says “X,” the model can predict with high accuracy whether Whale B will respond with “Y” or “Z.” But here’s a very cool thing that the researchers uncovered: Not only does WhaleLM predict a sound that will soon follow, it also predicts actions that the sounds are going to trigger. For example, researchers identified a specific sequence of codas, called the diving motif, that indicates with extreme accuracy—like 86 percent accuracy—that if uttered by all the whales in an exchange, the pod is about to dive to hunt for food. In other words, these sound sequences aren’t just noise—the equivalent of whales humming, for example—they’re specific instructions shared among themselves with some intended action to follow. I don’t know about you, but I find that pretty mind-blowing.
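To get the flavor of the Eight Coda Rule without a transformer in sight, here’s a toy next-coda predictor in Python that conditions on the previous eight codas by simple counting. WhaleLM itself is far more sophisticated; this just illustrates the idea of context-based prediction.

```python
from collections import Counter, defaultdict

CONTEXT = 8  # the "Eight Coda Rule": condition on the previous eight codas

class ToyCodaPredictor:
    """Counts which coda token follows each 8-coda context in training
    exchanges, then predicts the most frequent continuation."""

    def __init__(self):
        self.table = defaultdict(Counter)

    def train(self, exchange):  # exchange: list of coda tokens from a pod
        for i in range(CONTEXT, len(exchange)):
            context = tuple(exchange[i - CONTEXT:i])
            self.table[context][exchange[i]] += 1

    def predict(self, last_codas):
        options = self.table.get(tuple(last_codas[-CONTEXT:]))
        return options.most_common(1)[0][0] if options else None

# Invented token sequence standing in for a recorded pod exchange.
model = ToyCodaPredictor()
model.train([1, 2, 3, 1, 2, 3, 4, 5, 9, 1, 2, 3, 1, 2, 3, 4, 5, 9])
print(model.predict([1, 2, 3, 1, 2, 3, 4, 5]))  # -> 9
```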
The natural next step, of course, is to ask how we might use this analytical capability to carry on a rudimentary conversation with a non-human creature. Because researchers can now predict what a “natural” response should be, they can use WhaleLM to design what are called Playback Experiments. Here’s how they work. Researchers play an artificial coda, generated by WhaleLM, to a wild whale to see if the whale responds the way the AI predicts it might. If the whale does respond, it confirms that the researchers have successfully decoded a legitimate whale grammar rule.
Let’s be clear, though. We don’t have a “whale glossary of terms” yet that we can use to translate back and forth between human language and whale language. What we have are the rules. We’re still in the early stage of understanding syntax—how utterances are put together. We aren’t yet into the semantics phase—what they mean.
In the leadership workshops I used to deliver I would often bring up what I called “The Jurassic Park Protocol.” It simply said, just because you CAN make dinosaurs doesn’t mean you SHOULD. And we know they shouldn’t have, because there are at least six sequels to the original movie and they all end badly.
The same rule applies to interspecies communication. Just because we may have cracked the code on some elements of whale communication doesn’t mean that we should inject ourselves into the conversation. This is heady stuff, and the likelihood of unintended consequences is high. In 2025, researchers from Project CETI and the More-Than-Human Life (MOTH) Program at NYU introduced a formal Ethical Roadmap known as the PEPP Framework. PEPP stands for Prepare, Engage, Prevent, and Protect, and it treats whales as “subjects” with rights rather than “objects” to be studied.
So, PEPP stipulates four inviolable commitments that researchers must meet before they’re allowed to engage in cross-species conversations using AI-generated signals. The first is PREPARE: Before a sound is played back to a whale, researchers must prove they have minimized the potential for risk to the animal by doing so. For example, scientists worry that if they play an AI-generated whale call, they might inadvertently say something that causes panic, disrupts a hunt, or breaks a social bond. Similarly, PEPP requires that researchers use equipment that doesn’t add noise pollution that interferes with the whales’ natural sonar. We’ll talk more about that in a minute.
The next commitment is ENGAGE. To the best of our current knowledge, whales don’t have the ability to give us permission to engage with them, so PEPP requires researchers to look for any kind of identifiable behavioral consent. If the whale demonstrates evasive behavior such as diving, moving away, or issuing a coda rhythm that indicates distress, the experiment must stop immediately. The ultimate goal is to move toward a stage called Reciprocal Dialog, in which the whale has the right and ability to end the conversation at any time.
The third pillar of the PEPP protocol is PREVENT. This is very complicated stuff: researchers must take deliberate steps to ensure that they do not inadvertently become members of the pod. There is concern, for example, that whales might become “addicted” to interacting with the AI, or that it might change how they teach their calves to speak. A related concern is Cultural Preservation. Different whale pods have different “dialects,” and PEPP forbids researchers from playing foreign dialects to groups of whales—for example, playing a recording captured in the Caribbean to a pod of whales in the Pacific Ocean—because it could contaminate their own vocal culture.
The final commitment is PROTECT, and it has less to do with the process of establishing communication and more to do with what occurs after it happens. The PEPP protocol argues that if we prove whales have a language, then we’re ethically and morally obligated to grant them legal rights. And, since AI can now “eavesdrop” on private pod conversations, PEPP establishes data privacy rules for the whales, ensuring their locations aren’t shared with commercial fisheries or whaling interests.
There’s an old joke about what a dog would do if it ever caught the car it was chasing. The same question applies to the domain of interspecies communication. If we are successful, what should we say? Most researchers agree that first contact should not be a casual meet and greet, but should instead be what are called mirroring experiments. One of these is called the Echo Test, in which the AI listens to a whale and repeats back a slightly modified version of the same coda. The intent is not to tell the whale something new, but to see if the whale recognizes that the “voice” in the water is following the rules of its grammar. It’s a way of asking, “Do you hear me?” instead of “How you doin’?”
Researchers have identified three major risks that must be avoided during conversational engagement with whales. The first is the risk of social disruption. To avoid this, only “low-stakes” social codas can be used for playback, never alarm or hunt calls.
The second risk is human bias. To avoid this outcome, the AI is trained only on wild data to avoid “human-sounding” accents in the whale’s language.
Finally, we have the very real risk of exploitation. To prevent this from happening, the data is open-source but “de-identified” to protect whale locations from poachers.
The discovery of vowels in whale speech has given lawyers who advocate for whale rights significant power in the courtroom. For centuries, whales have been classified as property—as things rather than as sentient creatures. Recently, though, lawyers have begun to argue that whales meet the criteria for legal personhood. They base this on several hard-to-deny criteria. For example, lawyers from the More-Than-Human Life Program at NYU and the Nonhuman Rights Project are moving away from general “sentience” arguments to specific “communication” arguments. If an animal has a complex language, it possesses autonomy—the ability to make choices and have preferences. In many legal systems, autonomy is the primary qualification for having rights.
Another argument makes the case that by proving that whales use combinatorial grammar—the vowels we’ve been discussing—scientists have provided evidence that whale thoughts are structured and abstract. Lawyers argue that the law can’t logically grant rights to a human with limited communication skills, like a baby, while at the same time denying them to a whale with a sophisticated “phonetic alphabet.”
In March 2024, Indigenous leaders from the Māori of New Zealand, Tahiti, and the Cook Islands signed a treaty which recognizes whales as legal persons with the right to “cultural expression.” That includes language. Because we now know that whales have unique “regional dialects,” the treaty argues that whales have a right to their culture. This means that destroying a pod isn’t just killing animals; it amounts to the “cultural genocide” of a unique linguistic group.
Then, there’s the issue of legal representation of whales in a court of law. We have now seen the first attempts to use AI-translated data as evidence in maritime court cases. For example, in late 2025, a landmark paper in the Ecology Law Quarterly argued that human-made sonar and shipping noise amounts to “torture by noise” and is the acoustic equivalent of “shining a blinding light into a human’s eyes 24 hours a day.” And instead of relying on the flimsy argument that whales can just swim away from noise (clearly demonstrating a complete ignorance of marine acoustics and basic physics), lawyers are using WhaleLM data to demonstrate how human noise disrupts their vowels, making it impossible for whales to communicate with their families. And the result? We’re moving from a world where we protect whales because they’re pretty, to a world where we protect them because they’re peers.
Human-generated noise has long been a problem in the natural world. Whether it’s the sound of intensive logging in a wild forest, or noise generated by shipping or mineral exploration in the ocean, there’s significant evidence that those noises have existentially detrimental effects on the creatures exposed to them—and from which they can’t escape. The good news is that as awareness has risen, there have been substantial changes in how we design underwater technology so that it is more friendly to marine creatures like whales. Essentially, there is a shift underway toward Biomimetic Technology—hardware that mimics how whales communicate as a way to minimize the human acoustic footprint. Examples include acoustic modems that use transmission patterns modeled after whale and dolphin whistles instead of the loud sonar pings used in traditional technology. Whales and other creatures hear it as background noise.
Another advance is the use of the SOFAR Channel. SOFAR is an acronym that stands for Sound Fixing and Ranging, and it refers to a deep layer in the ocean, down around 3,300 feet, where sound travels for great distances, much farther than in other regions of the ocean. The layer acts as a natural waveguide that traps low-frequency sounds, allowing them to travel thousands of miles and enabling long-distance monitoring of phenomena such as whale communication. Technology is now being designed to transmit within the SOFAR channel, allowing marine devices to use 80% less power by working with the ocean’s physics rather than against it, while being less disruptive to the creatures who live there.
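For the curious, here’s a short numpy sketch of the canonical Munk sound-speed profile, the standard idealized model oceanographers use for this effect. Munk’s textbook parameters put the channel axis near 1,300 meters; the actual depth varies by region, so the essay’s 3,300-foot figure is in the same neighborhood.

```python
import numpy as np

def munk_sound_speed(depth_m, axis_m=1300.0, c_axis=1500.0, eps=0.00737):
    """Canonical Munk profile: sound speed (m/s) vs. depth (m).
    Speed rises above the axis (warmer water) and below it (pressure),
    creating a minimum that traps and guides low-frequency sound."""
    eta = 2.0 * (depth_m - axis_m) / axis_m
    return c_axis * (1.0 + eps * (eta + np.exp(-eta) - 1.0))

depths = np.arange(0, 5000, 10)          # 0 to 5,000 m in 10 m steps
speeds = munk_sound_speed(depths)
axis = depths[np.argmin(speeds)]
print(f"Sound-speed minimum (SOFAR axis) at ~{axis} m")  # ~1300 m
```

Sound launched near that minimum bends back toward it from above and below, which is the waveguide behavior that lets low-power devices ride the channel instead of blasting through it.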
Gardiner Hubbard said, “When we embark on the great ocean of discovery, the horizon of the unknown advances with us and surrounds us wherever we go. The more we know, the greater we find is our ignorance.” Interspecies communication is a great example of this. The more we learn, the more we unleash our truly awesome technologies on the challenge of listening to our non-human neighbors, the more we realize how much we don’t know. I’m good with that. Given the current state of things, it appears that 2026 may be the year when the great breakthrough happens. But the great question will be, when given the opportunity to shut up and listen, will we?
I wrote my first novel, Inca Gold, Spanish Blood, in 2015. By the time I really started to work on it, I’d been a dedicated writer—meaning, I knew that writing was who I am, not what I do—for decades. By then I’d written not only books but countless magazine articles, essays, white papers, poetry, training manuals, and short stories. I’d read every book on writing I could find, and every book recommended by people who write books about writing. I had favorite authors across many genres, and I knew why they were favorites. I had attended writing workshops; I was in numerous writing groups; and I wrote constantly—not in the self-flagellant ‘force yourself to get up at 5 AM every morning and write for two hours before going to work’ way, but whenever the mood struck—which was nearly every day. Sometimes all I wrote was a paragraph, or a meaningful sentence; sometimes I wrote 40 or 50 pages. All that matters is that I wrote.
I developed the Zen-like patience required to deal with the publishing world. I accepted the fact that the magic number for submitting an article or a manuscript or pretty much any new material to publishers is around 30, meaning, the number of publishers you must submit to, on average, before one of them takes the bait.
And, I learned the secrets of getting noticed by an editor. I learned that the phrase “Submission Guidelines” is a lie. It should say, “Don’t even THINK about straying from these iron-clad, inviolable, unwavering, universally-applied rules for submitting your work to the publishing gods if you want anyone to even consider looking at your submission.”
I developed a carefully curated Council of Druids, my personal cadre of editors, each of whom has the same fundamental characteristics: they’re voracious readers; they’re endlessly curious; and they’re willing to read what I write and provide detailed, brutally naked feedback. Do you know what’s less-than-useless to a writer? Someone who provides a crazed smile, two thumbs-up, and the word ‘awesome’ as their feedback to a written piece. Empty calories. My Druids, on the other hand, are never afraid to say, “Steve, with all the love in my heart, you need to drop this back into whatever swamp you dredged it out of, and here’s why.” In other words, they actually provide feedback that’s meaningful and that can be acted upon. And as much as it hurts sometimes, I carefully read and consider, and usually incorporate, every single comment. Their reading makes my writing better.
As a result of all this, I learned my way around the English language. I became grammatically proficient. I paid close attention and learned how dialogue works—and why it often doesn’t. I found myself reading about 140 books every year, and because of that I developed an extensive vocabulary and an awareness of when not to use polysyllabic words just because I know them (thank you, Mr. Hemingway). I paid careful attention to structure and flow. I began to realize that genre is merely a suggestion: that some of the best books have elements of romance, science fiction, history, travel, global affairs, poetry, and politics, in spite of the label they’re given by the bookstore.
I also trained myself to ignore the naysayers, the trolls who make it their mission to savage other people’s work because they can. They’re cowards, hiding behind the bastion of the Internet. Some reviewers give constructive or kind comments, and for those I’m grateful. But many don’t. Do NOT let their negative comments slow you down. You wrote a book, dammit. They didn’t. Ignore them for the miserable people they are.
I began to understand that I write so that others may read. When I drive my grandkids home after a day with my wife and me, I take the responsibility very seriously indeed. And when I take my readers on a journey, I take the responsibility no less seriously.
So, you can imagine how I felt when I found myself running into roadblock after roadblock as I tried to get a publisher to look at my novel. Here’s what was clattering around in my head, like a handful of marbles. I clearly knew how to write because I’d been doing it for a long time. I was published many times over by big, well-known houses, and I had two bestsellers to my name. I always met or exceeded deadlines. Yet time and again I submitted, and time and again I got back … nothing. Crickets. Even though I followed the submission rules, I didn’t even get rejection letters to add to my already impressive folder of same.
So, I called my editor at one of the big houses whom I had known for years and with whom I had created many successful books—and a genuine friendship. I explained my situation to him, knowing that he doesn’t publish fiction but hoping he could provide some insight. He did, and his response was blunt:
“Steve, here’s what you’re facing. The fact that you have had major success in the non-fiction realm is meaningless to editors in the world of fiction. The firewall that exists between the two domains is so thick that it’s as if you have never written or been published at all.”
And this was the clincher: “Your chances of getting this book published are roughly the same, whether you submit it or not.”
Bummer.
This glaring realization kicked off a new chapter in my writing. I ended up self-publishing the novel, and it did well. I then wrote a second, self-published it, and it became a number-one global bestseller on Amazon for a few weeks. I wrote two more, and they also did well—not bestsellers, but readers buy them and like them. And what I realized, and frankly, what I knew all along, was that in some ways, getting a book published was more important to me than writing one. That was a significant realization, and it changed how I think about why I write, because it was the wrong perspective for a writer. Yes, of course I want my work to be published, but first, I’m a writer. Writing is enormously creative; publishing is enormously mechanical. And when I write, I write for my readers and I take that responsibility seriously. But honestly, I write for myself. I write books that I would like to read. It makes me feel good. It challenges me, forces me to work hard to be better at it.
As writers—all writers, regardless of genre—our goal should be to write books that people want to read, books that bring readers back for more when they’ve finished. We shouldn’t write for the likes or the thumbs-ups; those are more empty calories. We write because we have something to say that matters. If we do that, our audiences will find us.
I’m currently writing sequels to two of my novels: Inca Gold, Spanish Blood, and Russet. Russet is my most recent work, so the characters and plot line are still fresh in my mind. But Inca Gold came out in 2016 and I had forgotten some of the story’s details, and I’m embarrassed to say, the names of some of the characters. So, I put on my reader hat, picked up the book, and read it, ignoring the fact that I was its author. And I mean, I really read it. And you know what? I liked it. A lot. It didn’t waste my time, and it made me want to read more. And that’s all the motivation I need to keep going.
A small town in America, summer, 1959. Maple Street. An ice cream vendor pushes his cart up the sidewalk, ringing a bell; kids play stick ball in the street; a neighbor mows his grass with a push mower. Another lies under his car, tinkering with it. In the distance, a dog barks.
Suddenly, the power goes out—all power. Stoves and refrigerators stop working; the radio goes silent; cars won’t start. Neighbors gather in an uneasy group. They begin to speculate about what might be causing the outage, their voices growing strident as speculation turns to suspicion. Could it be the meteor that some of them heard pass overhead earlier?
While one man argues for a rational explanation—sunspots, perhaps—another points the finger at a neighbor who isn’t present, using his odd quirks to irrationally explain the widespread lack of electricity. Then, inexplicably, power returns to a single car in a driveway, and it starts with a rumble.
“It’s space aliens,” says a young comic book-obsessed boy. “They come to earth disguised to look just like us, and blend in. They’re different, but no one can tell because they’re identical to the rest of us.”
And the man who owns the car that mysteriously starts and stops? He’s as mystified as the other neighbors, but because it’s his car engaging in inexplicable behavior—the engine roaring to life when there’s no one at the wheel—he’s to blame. He must be the alien.
In the end, as the town tears itself apart through self-created fear, the real aliens look down on the town from their cloaked ship. One of them says to the other (and they look as human as the people in the streets below), “The pattern is always the same. They pick the most dangerous enemy they can find, and it’s themselves. Their world is full of Maple Streets. We’ll go from one to the next and let them destroy each other.”
Rod Serling wraps up the episode as only Rod Serling can do:
The tools of conquest do not necessarily come with bombs or explosives or fallout. There are weapons that are simply thoughts, attitudes, prejudices, found only in the minds of men. For the record, prejudices can kill, and suspicions can destroy. And a thoughtless, frightened search for a scapegoat has a fallout all its own—for the children, and the children yet unborn. And the pity of it is, these things cannot be confined to the Twilight Zone.
I want every living person in the United States to watch this episode, and then think about current events. Clearly, Rod Serling was correct: These things cannot be confined to the fantasy of the Twilight Zone, where they belong.
I was 13 years old, and I was standing with my childhood friends Bill Meadows, Peter Norris, and Gil DePaul in the frigid interior of the home-built observatory in Bill’s backyard. The four of us stood in a sloppy circle around the telescope, taking turns looking through the eyepiece and shivering in the late-night winter air.
I like to think that our collective friendship served as the model for the TV show, “Big Bang Theory,” because just like Leonard, Howard, Sheldon, and Raj, our world revolved around the wonder of science and was powered by our collective curiosity. The main difference was that in our cadre, the counterparts for Penny, Amy and Bernadette were conspicuously absent. Clearly, we had not yet been introduced to awe.
We loved electronics, and geology, and astronomy, and all the many offshoots of biology; we would often gather for electronic component swaps, or rock and mineral trades, or just to build things together or admire each other’s latest acquisitions of exotic reptiles or amphibians. At one point, my parents gave me a Heathkit electronics project board, pre-wired with capacitors and resistors and transistors and coils, each connected to little stainless-steel springs that allowed me to run jumpers between the components to wire the projects outlined in the manual. I will never forget the day I learned that by swapping between different components and by wiring the output to a variable resistor, I could make it play wildly oscillating sounds that would be great as the background music for a science fiction film. I had invented a version of the Moog Synthesizer, before anyone knew what that was.
I learned two of life’s important lessons from Bill Meadows: the immensity of the universe, and the immensity of personal grief. The first, my 13-year-old shoulders were prepared to carry; the second, not so much. One Christmas morning after all the gifts had been opened, I called Bill to see if he wanted to get together, probably to compare Christmas bounty. He couldn’t, he told me; his Mom had just died. Maybe tomorrow, he said, with infinite grace. I didn’t know how to process that level of profound loss, but he did, and the grace with which he carried the pain is something I still think about today.
As I said, we were the Big Bang Theory gang before there was a Big Bang Theory, and Bill was our Sheldon Cooper—not in the awkward, geeky way of the show’s lovable main character, but in the brilliant, quirky, knowledge-is-everything way of smart, driven, passionate people. He went on to become a gifted composer and musician, a semiconductor designer, and of course, a top-notch quasi-professional astronomer. We’re still very much in touch; recently, he guided us when Sabine bought me my own telescope. Yes, it’s true. She’s awesome.
Like teenage boys everywhere, a glimpse at a copy of Playboy was something to be whispered about for weeks, but the publication that really got our motors humming was the annual catalog from Edmund Scientific Company. Sad, I know, but have you ever SEEN a catalog from Edmund Scientific?
Bill, like the Edmund catalog, was an endless source of knowledge and information. I can still remember things I learned from him. Like, how many sides an icosahedron has (the answer is 20). What an ellipse is, and how to make one (slice the top off a cone at an angle). How to work a Foucault Tester. What ‘New General Catalog’ and ‘Messier Numbers’ mean (unique designators for star clusters, galaxies, and nebulae). Why it was appropriate to drool on a Questar Telescope if I ever found myself in the same room with one.
I even remember the night my Mom was driving us to a long-forgotten presentation at the junior high school. As a car went by us at high speed, the sound rose and fell with its passing. In unison, Bill and I said, “Doppler Effect,” then we laughed. But I was a bit awestruck. I was one with the dude.
Somewhere around 1967, Bill decided that something was missing in his backyard. Not a tomato garden, or a jungle gym, or a trampoline; not a picnic table, or a barbecue grill, or a weight set. No, this 13-year-old decided that what was missing, what would really round the place out, was an observatory. His Dad agreed, and they built one. We all helped a little bit here and there, but this thing was Bill’s baby. It looked like half of a shoebox with a giant tuna can sitting on top, and the whole thing sat at roofline level on top of four pieces of drilling pipe punched into bedrock. Coming up through a hole in the center of the floor was a fifth piece of pipe which ultimately became the telescope mount, isolated from the observatory structure so that our walking about didn’t vibrate the telescope when it was focused on whatever it was focused on. The top and side of the tuna can had a two-foot-wide slit that could be opened for viewing. Many were the nights that we had sleepovers at Bill’s house, curled up and freezing in the observatory as we focused the telescope on distant celestial objects, things Bill could casually name and describe from memory, having seen them many, many times with whatever telescope he used before he built the big one.
The big one: Edmund Scientific sold it all. But buy a ready-made telescope? Piffle, said Bill, or whatever the 1967 equivalent of piffle was in west Texas. Instead, he created a shopping list:
First, a large quartz mirror blank, which was 12 inches or so in diameter;
Assorted grits to hand-grind a parabolic surface into the blank;
A Foucault tester to ensure the mirror curvature was correct once the grinding was done;
The tube for the telescope body;
An adjustable mirror mount;
The eyepiece holder and focuser;
Assorted eyepieces;
An equatorial mount to attach the finished telescope to the center drilling pipe, with an electric star drive;
And of course, various accessories: counterweights, a spotting scope, and assorted mounting hardware.
We all claimed some of the credit for building that telescope because all of us spent time hand-grinding the blank under Bill’s watchful eyes. But make no mistake: it was Bill who built that thing. He ground and ground and ground, week after week after week, starting with a coarse abrasive grit and grinding pommel, then onto a finer grit, and then finer still, until he was working with red polishing rouge at the end. I remember his pink-stained fingers at school. School: it was so fitting that we attended the brand-new Robert H. Goddard Junior High School in Midland, Texas, complete with rockets mounted on stands out front. Goddard, who invented the modern liquid-fuel rocket, was long dead, but his wife came to visit the school not long after it opened. I still have her autograph.
It’s interesting to me that Goddard designed and launched his rockets near Roswell, New Mexico, where my maternal grandparents lived, and where … well, you know.
Once Bill was done with the grinding and polishing, he shipped the mirror blank back to Edmund, and they put the mirrored surface on it and shipped it back, ready to be mounted in the telescope.
One of Bill’s goals was to do astrophotography. Keep in mind that this was 1968, and photography wasn’t what it is today. There was no such thing as a digital camera (mainly because there was no such thing yet, really, as digital anything), and there was no way to mount a standard camera on a telescope. So, Bill improvised in an extraordinary way. He took a one-gallon metal Prestone antifreeze can and cut the top off. He then coated the inside of the can with a very dark, matte-black paint to eliminate reflections. In the middle of the bottom of the can he cut a two-inch hole, and there he mounted a T-connector, which would allow him to attach it to the eyepiece holder of the telescope.
Now came the genius part. Using tin snips, he cut and bent the open top of the can so that it had two flanges, one on each side, which would neatly and securely hold a sheet film carrier plate. The plate was about five by eight inches, and once it was in the “Prestone camera” and the environment was dark, he could slide out the cover that protected the sheet film from light, and the image of whatever was in the viewfinder would be splashed on the film. Minutes later, Bill would slide the cover back in, and after sending it off to be developed, he’d have a long-exposure photograph. In fact, I still have a photograph he gave me of the Orion Nebula somewhere in my files, along with one of a long-forgotten star cluster.
It was cold in that observatory; a heater was out of the question, because the rippling heat waves escaping through the observatory’s viewing slit would ruin the image—another thing I learned from Bill. So, cold it was.
We weren’t supposed to have the kinds of conversations we did at that age, but they made sense, which was why Bill’s explanation to all of us about what we were taking turns looking at was—well, normal. “A true binary star system,” he explained, “is two stars that are gravitationally bound together and therefore orbit each other.” I can still remember, all these years later, that we were looking at Sirius, sometimes known as the Dog Star, the single brightest star in the night sky, at least in the northern hemisphere. It’s part of the constellation Canis Major. “Sirius A is a bright star and Sirius B is a bit dimmer,” Bill told us, “but the ‘scope can resolve them.” Today, every time I look up and see Sirius, I think of Bill.
This essay is about the relationship between curiosity and awe and wonder, so let me ask you a question. When was the last time you were genuinely curious about something—something new to you, something that made you curious enough to do a little reading or research—and were then awed by what you found? Just yesterday, June 23rd, 2025, the very first images from the brand-new Vera C. Rubin Observatory in Chile were shared with the public. Within two days of its first scan of the night sky, the Rubin telescope discovered more than 2,000 new asteroids, and astronomers predict that over the next ten years it will capture images of 89,000 new near-Earth asteroids, 3.7 million new main-belt asteroids, 1,200 new objects between Jupiter and Neptune, and 32,000 new objects beyond Neptune. Doesn’t that make you just a little bit curious about what ELSE might be lurking out there? Doesn’t it make you feel a certain amount of awe and wonder, if for no other reason than the fact that humans have developed the scientific wherewithal to build this amazing machine?
Part 2
One of the first things I realized when I got my new telescope a few months ago and began to thaw out long-forgotten astronomy knowledge was that a telescope is a Time Machine. Here’s why.
The night sky is filled with countless observable objects beyond the moon and stars. For example, on a dark, clear night, chances are very good that if you lie back in a lawn chair in your backyard and turn off the porch light, within 15 minutes you’ll see at least one Starlink satellite sweep past. If you time it right and look just after sunset, you’re likely to see the International Space Station pass overhead, the light from the setting sun reflecting off its solar and cooling panels. There’s even an app for your phone that tracks its location.
Then there are the natural celestial bodies. Depending on the time of year, it’s easy to spot other planets in our solar system with the naked eye, especially Mercury, Venus, Mars, Jupiter, and Saturn. They, like the Earth, orbit our sun, which is, of course, a star. It is one star in the galaxy known as the Milky Way, a collection of stars, planets, great clouds of gas, and dark matter, all bound together by gravity. The Milky Way is made up of somewhere between 100 and 400 billion stars. And remember, that’s a single galaxy.
In the observable universe, meaning the parts of the universe that we can see from Earth with all our imaging technologies, there are between 200 billion and two trillion observable galaxies, each containing billions of stars.
So just to recap: the Earth orbits the Sun, which is one of 100 to 400 billion stars in the Milky Way Galaxy. But the Milky Way Galaxy is one of somewhere between 200 billion and two trillion galaxies in the observable universe. And the observable universe? According to reliable, informed sources—NASA and the Center for Astrophysics | Harvard & Smithsonian—all the ordinary matter we can observe accounts for only about five percent of it. The other 95 percent, the dark matter and dark energy, remains unknown and unseen.
Starting to feel it yet? It’s called awe and wonder, and that itch you’re feeling? That’s curiosity.
Part 3
If you look to the north on any given spring evening, you’ll easily spot the Big Dipper. Here’s an interesting fact for you: the Big Dipper isn’t a constellation. It’s an asterism—a pattern of stars in the sky that people have come to know—and it forms part of the larger, more complicated constellation known as Ursa Major, the Great Bear.
Take a look at a photo or drawing of the Big Dipper. It consists of four stars that form the “bowl” of the dipper, and three stars that make up the dipper’s curving “handle.”
The handle forms the beginning of a celestial arc, and if you extrapolate it you can “follow the arc to Arcturus,” a very bright star in the constellation Boötes. From Arcturus you can “speed on to Spica,” a fairly bright star in the constellation Virgo. You can do all of this with your naked eye.
Now: go back to the bowl of the Big Dipper. Draw an imaginary line from Megrez, the star where the handle attaches to the bowl, through Phecda, the star just below it that forms a corner of the bowl, and keep going to Regulus, the brightest star in the constellation Leo.
If you now draw a line between Spica and Regulus and look slightly above the midpoint of that line, you are staring at a region of space called the Realm of Galaxies.
I love that name; it sounds like a place Han Solo would tell Chewie to navigate the Millennium Falcon to. Nowhere else in the visible sky is the concentration of galaxies as high as it is here. Within this space, for example, is the unimaginably huge Virgo Cluster of galaxies. How huge? Well, the Local Group, to which our spiral-shaped Milky Way and Andromeda galaxies belong, contains a mere 40 or so galaxies. The Virgo Cluster has more than a thousand, but those thousand are packed into a volume no bigger than the one occupied by our own Local Group with its 40. And remember, each of those galaxies is made up of billions of stars.
Galaxies are active, often destructive behemoths. When a small galaxy gets too close to a larger one like our own Milky Way, things happen. The Large and Small Magellanic Clouds, which are members of our Local Group, used to be much closer to the Milky Way, but the Milky Way’s tidal pull has stripped away many of their outer stars, radically changing their galactic shapes. And the Milky Way hasn’t finished its current rampage: it’s now in the process of dismantling the Sagittarius Dwarf Galaxy.
These things are also big—far bigger than we’re capable of imagining, as are the distances between them, which is why I said earlier that a telescope is a fully functional Time Machine. Andromeda, for example, is 220,000 light years across; you need a wide-angle eyepiece to take it in through a telescope. For context, consider this. The speed of light is a known constant—it never changes. Light travels at 186,000 miles per second, or about 670 million miles per hour. Think of circling Earth’s equator seven and a half times every second. That means that in one year, light travels 5.88 trillion miles. We call that a light year. It’s not a measure of time; it’s a measure of distance. To fly from one end of Andromeda to the other would take 220,000 years at 186,000 miles per second. Pack a lunch.
When you look up at Andromeda, which is our closest galactic neighbor, you’re looking at an object that is two-and-a-half million light years away. What that means is that the light striking your eye has traveled 14 quintillion, 700 quadrillion miles to get to you. That’s ‘147’ followed by 17 zeroes. More importantly, it means that that light left Andromeda on its way to your eye two-and-a-half million years ago. Two-and-a-half million years ago: the Pleistocene epoch was in full swing; Earth’s polar ice caps were forming; mammoths and mastodons roamed North America; the Isthmus of Panama rose out of the sea, connecting two continents; the Paleolithic period began; and Homo habilis, the first protohumans, emerged.
All that was happening when that light that just splashed onto your retina left its place of birth. And that’s the closest galaxy to us.
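If you’d like to check that arithmetic yourself, here’s a minimal sketch in Python, using the same rounded figures I used above (186,000 miles per second and a 365-day year), so the totals land within a rounding error of the numbers in the text:

```python
# Back-of-envelope check of the light-year and Andromeda figures (rounded values,
# not precise astronomical constants).
miles_per_second = 186_000               # speed of light, rounded
seconds_per_year = 60 * 60 * 24 * 365    # seconds in a standard 365-day year
light_year_miles = miles_per_second * seconds_per_year
print(f"one light year  ~ {light_year_miles:.2e} miles")   # ~5.87e12: the '5.88 trillion' above

andromeda_light_years = 2_500_000        # distance to the Andromeda Galaxy
miles_to_andromeda = andromeda_light_years * light_year_miles
print(f"to Andromeda    ~ {miles_to_andromeda:.2e} miles") # ~1.47e19: '147' followed by 17 zeroes
```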
So, I’m compelled to ask: is Andromeda still there? Do we have any way of actually knowing? A lot can happen in two-and-a-half million years. And with the breathtakingly complicated telescopes we’re now placing in deep space—the original Hubble got us started, and the James Webb Space Telescope has taken over—we’re capturing infrared light that is 13.6 billion years old. The universe is 13.8 billion years old, which means we’re getting close to seeing as much light as it’s possible to see from the formative edge of the universe itself—what’s known as the cosmic horizon. Which, of course, raises the question: what lies beyond the edge?
Part 4
Curiosity, awe, and wonder are amazing things. They feed, nourish, encourage, and drive each other, and in turn, they drive us. I love this science stuff, especially when it hits us with knowledge that is beyond our ability to comprehend. For me, that’s when curiosity, awe and wonder really earn their keep. Because sometimes? Sometimes, they’re the only tools we have.
I recently had a conversation about technology’s impact on the availability and quality of information in the world today. It’s an argument I could make myself—that tech-based advances have resulted in access to more data and information. For example, before the invention of moveable type and the printing press, the only books that were available were chained to reading tables in Europe’s great cathedrals—they were that rare and that valuable. Of course, it was the information they contained that held the real value, an important lesson in today’s world where books are banned from modern first world library shelves because an ignorant cadre of adults decides that young people aren’t mature enough to read them—when it’s the adults who lack the maturity to face the fact that not everybody thinks the same way they do in this world, and that’s okay. But, I digress.
[Image: chained books in Hereford Cathedral. Copyright Atlas Obscura.]
When moveable type and the printing press arrived, book manuscripts no longer had to be copied by hand—they could be produced in large quantities at low cost, which meant that information could be made available to far more people than ever before. To the general population—at least, the literate ones—this was a form of freedom. But to those who wanted to maintain a world where a book existed as a single copy chained to a desk where only the privileged few (the clergy) could read it, the free availability of knowledge and information was terrifying. Apparently, it still is. Knowledge is, after all, the strongest form of power. How does that expression go again? Oh yeah: Freedom of the Press…Freedom of Expression…Freedom of Thought…Sorry; I digress. Again.
Fast-forward now through myriad generations of technology that broadened information’s reach: The broadsheet newspaper, delivered daily, sometimes in both morning and evening editions. The teletype. Radio. The telephone. Television. The satellite, which made global information-sharing a reality. High-speed photocopying. High-speed printing. The personal computer and desktop publishing software. Email. Instant Messaging and texting. And most recently, on-demand printing and self-publishing through applications like Kindle Direct, and of course, AI, through applications like ChatGPT. I should also mention the technology-based tools that have dramatically increased literacy around the world, in the process giving people the gift of reading, a gift that brings countless downstream gifts with it.
The conversation I mentioned earlier took a funny turn when the person I was chatting with tried to convince me that access to modern technologies makes the information I can put my hands on today infinitely better and more accurate. I pushed back, arguing that technology is a gathering tool, like a fishing net. Yes, a bigger net can result in a bigger haul. But it also yields more bycatch, the stuff that gets thrown back. I don’t care about the information equivalents of suckerfish and slime eels that get caught in my net. I want the albacore, halibut, and swordfish. The problem is that my fishing net—my data-gathering tool—is indiscriminate. It gathers what it gathers, and it’s up to me to separate the good from the bad, the desirable from the undesirable.
What technology-based information-gathering does is make it easy to rapidly get to AN answer, not THE answer.
The truth is, I don’t have better research tools today than I had in the 70s when I was in college. Back then I had access to multiple libraries—the Berkeley campus alone had 27 of them. I could call on the all-powerful oracle known as the reference librarian. I had access to years of the Reader’s Guide to Periodical Literature. I had Who’s Who, an early version of Wikipedia; and of course, I had academic subject matter experts I could query.
Technology like AI doesn’t create higher quality research results; what technology gives me is speed. As an undergraduate studying Romance Languages, I would often run across a word I didn’t know. I’d have to go to the dictionary, a physical book that weighed as much as a Prius, open it, make my way to the right page, and look up the word—a process that could take a minute or more. Today, I hover my finger over the word on the screen and in a few seconds I accomplish the same task. Is it a better answer? No; it’s exactly the same. It’s just faster. In an emergency room, speed matters. In a research project, not so much. In fact, in research, speed is often a liability.
Here’s the takeaway from this essay. Whether I use the manual tools that were available in 1972 (and I often still do, by the way), or Google Scholar, or some other digital information resource, the results are the same—not because of the tool, but because of how I use what the tool generates. I’ve often said in my writing workshops that “you can’t polish a turd, but you can roll it in glitter.” Just because you’ve written the first draft of an essay, selected a pleasing font, right and left-justified the text, and added some lovely graphics, it’s still a first draft—a PRETTY first draft, but a first draft, nonetheless. It isn’t anywhere near finished.
The same principle applies to research or any other kind of news or information-gathering activity. My widely cast net yields results, but some of those results are bycatch—information that’s irrelevant, dated, or just plain wrong. It doesn’t matter why it’s wrong; what matters is that it is. And this is where the human-in-the-loop becomes very important. I go through the collected data, casting aside the bycatch. What’s left is information. To that somewhat purified result I add a richness of experience, context, skepticism, and perspective. From that I generate insight, then knowledge, and ultimately, wisdom.
So again, technology provides a fast track to AN answer, but it doesn’t in any way guarantee that I’ve arrived at anything close to THE answer. Only the secret channels and dark passages and convoluted, illuminated labyrinths of the human brain can do that.
So yeah, technology can be a marvelous tool. But it’s just a tool. The magic lies in the fleshware, not the hardware. Technology is only as good as the person wielding it.
It’s a fundamental aspect of human nature, I believe, for each generation to criticize the generation that preceded it, often using them as a convenient scapegoat for all that’s wrong in the world. The current large target is my own generation, the Baby Boomers. I recently overheard a group of young people—mid-20s—complaining at length about their belief that the Boomers constitute a waste of flesh who never contributed much to society. Respectfully, I beg to differ; this is my response, along with a plea to ALL generations to think twice about how they characterize those who came before.
Millennials, sometimes called Gen-Y, and the Plurals, commonly referred to as Gen-Z, often blame Baby Boomers for the state of the world: the growing wealth imbalance, the violence and unpredictability of climate change, the multifaceted aftermath of COVID because of its impact on the supply chain, and the world’s growing political and cultural divisions—in essence, the world sucks and Boomers are to blame. They often proclaim Boomers to be a generation that contributed little of value to the world. This, of course, is a long-standing social convention: blame the old people, because they know not how dumb, useless and ineffective they are.
On the other hand, there’s a lot of admiration out there for the current Millennial über meisters of Silicon Valley—people like Mark Zuckerberg, Brian Chesky (AirBnB), Alexandr Wang (Scale AI), and Arash Ferdowsi (Dropbox). They deserve admiration for their accomplishments, but they didn’t create Silicon Valley—not by a long shot. The two generations that came before them did that.
But let’s consider the boring, stumbling, mistake-prone Boomers. You know them; they include such incompetent, non-contributing members of society as Bill Gates; the Steves, Jobs and Wozniak; Peggy Whitson, who recently retired as Chief Astronaut at NASA; Larry Ellison, who founded Oracle; Oprah Winfrey, creator of a breathtakingly influential media empire; Marc Benioff, founder of Salesforce; Reid Hoffman, co-creator of LinkedIn; and Radia Perlman, creator of the Spanning Tree Protocol, the rule set that keeps the Ethernet networks underpinning the Internet’s 25 billion connected devices, give or take a few hundred million, from collapsing into loops. And I won’t even bother to mention Tim Berners-Lee, the creator of the World Wide Web.
What a bunch of losers.
But there may be a reason for the dismissal of an entire generation’s contributions to the world that goes beyond the tradition of putting elders on a literal or figurative ice floe and shoving them off to sea. I find it interesting that the newest arrivals on the generational scene judge the value of a generation’s contributions by the applications it created. All hail Facebook, X, Instagram, Uber, Amazon, AirBnB, Google, Tencent, AliBaba, TikTok, GitHub, and Instacart, the so-called platform companies. Those applications are the “public face” of massive and incomprehensibly complex technological underpinnings, yet rarely does anyone make time today for a scintilla of thought about what makes all of those coveted applications—ALL of them—work. In fact, none of them—NONE of them—would exist without two things: the myriad computers (including mobile devices) on which they execute, and the global network that gives them life and makes it possible for them to even exist.
The tail wags the dog here: without the network, these applications could not function. Want some proof? The only time the vast majority of people on the planet are even aware of the network’s existence is when it breaks, which is seldom. But when it does? When ice or wind bring down aerial transmission cables, when a car takes out a phone pole, when fire destroys critical infrastructure and people can’t mine their precious likes on Facebook, when there’s a long weekend and everybody is home downloading or gaming or watching and the network slows to a glacial crawl, technological Armageddon arrives. Heart palpitations, panting, sweating, and audible keening begin, as people punch futilely at the buttons on their devices. But consider this: the global telephone network has a guaranteed uptime of 99.999 percent. In the industry, that’s called five-nines of reliability. And what does that mean in English? It means that on average, the phone network—today, the Internet—is unavailable to any given user for about five and a quarter minutes a year. In a standard year, there are 525,600 minutes. For roughly five of those every year, the network hiccups. Take a moment to think about that.
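The arithmetic behind those nines is simple enough to sketch in a few lines of Python; consider it a back-of-envelope illustration, not anything resembling real telco math:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Expected unavailable minutes per year at a given availability."""
    minutes_per_year = 525_600  # 60 minutes * 24 hours * 365 days
    return minutes_per_year * (1.0 - availability)

print(downtime_minutes_per_year(0.99999))  # five nines:  ~5.26 minutes per year
print(downtime_minutes_per_year(0.999))    # three nines: ~525.6 minutes, almost nine hours
```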
When we think back on famous scientists and innovators, who comes to mind? Well, people like Alexander Graham Bell, of course, who invented the telephone, but who also invented the world’s first wireless telephone, called the photophone—and yes, it worked; or Thomas Edison, who became famous for the invention of the lightbulb, but actually invented many other things, and who was awarded 2,332 patents and founded 14 companies, including General Electric; the Wright Brothers, who flew successfully at Kitty Hawk; Watson and Crick, who discovered the DNA double helix and created a path to modern genetics and treatments for genetic disease; Bardeen, Brattain, and Shockley, unknown names to most people, but names attached to the three scientists at Bell Telephone Laboratories who invented the transistor; Philo T. Farnsworth, the creator of electronic television; and Marie Curie, who did pioneering research on radioactivity.

These are all famous names from the late 1800s all the way through the 1960s. But then, there’s a great twenty-year leap to the 1980s, the time when Generation X came into its own. Movies were made about this generation, some of the best ever: Ferris Bueller’s Day Off. The Breakfast Club. Home Alone. Sixteen Candles. St. Elmo’s Fire. Clerks. The Lost Boys. The Karate Kid. Gen-X was a widely criticized generation, an ignored, under-appreciated, self-reliant, go-it-alone generation of entrepreneurs that includes Jeff Bezos of Amazon fame, Sheryl Sandberg of Facebook, Sergey Brin of Google, Meg Whitman of eBay and Hewlett-Packard, Travis Kalanick of Uber, and dare I say it, Elon Musk. All major contributors to the world’s technology pantheon, some as inventors, some as innovators. The power of the Internet to allow data aggregation and sharing made it possible for platform companies like Uber, eBay, Facebook and Google to exist. Those weren’t inventions, they were innovations (and to be sure, exceptional innovations!), built on top of pre-existing technologies.
Even the much-talked-about creations of Elon Musk aren’t inventions. Let’s look at Starlink, the SpaceX constellation of orbiting communication satellites. A satellite comprises radio technology to make it work; solar cells to power it; semiconductors to give it a functional brain; and lasers to allow each satellite to communicate with others. Radio predates the Labs, and the first working laser was demonstrated elsewhere, but the transistor, the practical solar cell, and the theoretical groundwork for the laser all came out of Bell Labs in the 1940s and ’50s. In fact, one of the first communications satellites, Telstar, was created at Bell Labs and launched into orbit in 1962—more than 60 years ago—to relay television signals across the Atlantic.
That 20-year leap between the 60s and the 80s conveniently ignores an entire generation and its contributions to the world—not just techno-geeks, but content and entertainment and media people who redefined our perception of the world. This was the time of the Baby Boomers, and while you may see us—yes, I am one—as an annoying group of people that you wish would just go away, you might want to take a moment to recognize the many ways my generation created the lifestyle enjoyed by Millennials and Gen-Z—and took steps to ensure that it would endure.
The thing about Boomer researchers, scientists, and innovators was that with very few exceptions, they were happy to work quietly behind the scenes. They didn’t do great big things exclusively for money or power; they did them because they were the right things to do, because they wanted to leave the world a better place for those who came later. And they did, in more ways than you can possibly imagine.
Let’s start with the inventions and innovations that made possible, among other things, the devices on which you watch, listen or read, and the content they deliver. I know I’ve already mentioned some of these people, but they deserve a few more words.
Let’s start with the Steves—and no, I don’t mean me. I’m talking about Steve Wozniak and Steve Jobs, who did quite a few things before inventing the iconic Macintosh. Both were born in the 1950s and grew up in the San Francisco Bay Area, and met while they were summer interns at Hewlett-Packard. In 1977, seven years before the Mac, they introduced the world to the Apple II personal computer, which included color graphics, a sound card, expansion slots, and features that made it the first machine that came close to the capabilities of modern PCs. Later, they introduced what many called the “WIMP” interface—windows, icons, menus, and pointers—the hallmarks of what became the Mac operating system, and ultimately, Windows 95 and the generations of that OS that followed. Incidentally, the incredibly stable, highly dependable Macintosh operating system is based on UNIX, an operating system first designed and developed at—you guessed it—Bell Laboratories.
Next we have Sir Tim Berners-Lee, born in London in 1955. He grew up around computers, because his parents were mathematicians who worked on the Ferranti Mark I, the first computer in the world to be sold commercially. He became a software consultant for the CERN particle physics laboratory in Switzerland, which later became famous as the home of the Large Hadron Collider, the machine physicists used to discover the Higgs boson in 2012.
While at CERN in the 1980s, Berners-Lee took on the challenge of organizing and linking all the sources of information that CERN scientists relied on—text, images, sound, and video—so that they would be easily accessible via the newfangled network that had just emerged, called the Internet. In the process he came up with the concept for what became the World Wide Web, which he laid out in a now-famous proposal in 1989. Along the way he developed a markup language for creating web pages, called HTML, along with the first web browser, which he made available to everyone, free of charge, in 1991.
Most people think of the Internet and the World Wide Web as the same thing—but they aren’t. The Internet is the underlying transport infrastructure; the Web is an application that rides on top of that infrastructure, or better said, a set of applications that makes it useful to the entire world.
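If you’d like to see that layering for yourself, here’s a minimal Python sketch. The TCP connection is the Internet doing its transport job; the text sent across it is the Web’s application protocol, HTTP. The host name is just a stand-in example:

```python
import socket

host = "example.com"  # a stand-in host; any public web server would do

# The Internet layer: open a TCP/IP connection to the server's port 80.
with socket.create_connection((host, 80)) as conn:
    # The Web layer: speak HTTP over that transport connection.
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode())
    print(conn.recv(200).decode(errors="replace"))  # first bytes of the HTTP response
```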
Next, let me introduce you to Ray Kurzweil, who decided he would be an inventor before he started elementary school. By the time he turned 15, he had built and programmed his own computer to compose music. After graduating from MIT with degrees in computer science and literature, he created a system that enabled computers to read text characters, regardless of the font.
Kurzweil invented many things, but he is perhaps best known for popularizing the concept of the Singularity, the moment when digital computers and the human brain merge and communicate directly with each other. It’s a fascinating idea. A good business PC easily operates at four billion cycles per second. The neurons of the human brain, on the other hand, fire at something like ten cycles per second. But: a digital PC has limited memory, whereas the human brain’s memory is essentially unlimited. So what happens if we combine the blindingly fast clock speed of a PC with the unlimited memory of the human brain? The Singularity. Cue the Twilight Zone music.
Now let me introduce you to Ajay Bhatt. Born in India, he received an undergrad degree in electrical engineering before emigrating to the U.S., where he earned a master’s degree in the same field, working on technology to power the Space Shuttle. After joining Intel in 1990, he had an epiphany while working on his PC one evening. What if, he wondered, peripheral devices could connect to a computer as easily as plugging an electrical cord into a wall socket? Not all that hard, he decided, and he and his colleagues invented the Universal Serial Bus, which we all know as USB.
And then we have one of my favorites, Bob Metcalfe. Another MIT grad, with degrees in engineering and management as well as a PhD from Harvard, he joined Xerox’s Palo Alto Research Center, better known as Xerox PARC, a well-respected facility that has been compared to the east coast’s Bell Labs. While he was there, Metcalfe and his colleagues developed a technique for cheaply and easily connecting computers so that they could share files at high speed. The technology that resulted is called Ethernet, the basis for nearly every connectivity solution in use today in modern computer networks, including WiFi. He went on to found 3Com Corporation, but for me, he will always be most famous for what has come to be known as Metcalfe’s Law: that the value of a mesh network, meaning a network in which every computer connects to every other computer in the network, increases as a function of the square of the number of devices that are attached. Want that in plain English? When a new computer loaded with data connects to a mesh network, the combined value of all that data and its shared access doesn’t increase in a linear way; it grows with the square of the number of connected machines. Don’t believe it? Look at every one of the so-called platform companies that we discussed earlier: Apple’s App or music store, Uber, Amazon, every single social media company, and for that matter, the telephone network and the World Wide Web itself.
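Here’s a minimal sketch of what “the square of the number of devices” means in practice, counting the distinct links in a full mesh of n machines:

```python
def mesh_links(n: int) -> int:
    """Distinct pairwise connections in a full mesh of n devices: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 10, 100, 1_000):
    print(f"{n:>5} devices -> {mesh_links(n):>7} links")
# Ten times the devices yields roughly a hundred times the links:
# the network's value grows with the square, not the sum.
```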
Dr. Robert Jarvik was a prodigy who invented a surgical stapler and other medical devices while he was still a teenager. But then he got serious. While he was an undergraduate student at the University of Utah in 1964, his father needed to have heart surgery. That ordeal influenced Jarvik to turn his curiosity, inventiveness, and problem-solving skills—along with his medical degree—toward finding a method to keep patients with failing hearts alive until they could receive a transplant. While he wasn’t the first to develop an artificial heart, Jarvik’s 1982 creation, the Jarvik-7, was the first such device that could be implanted inside a person’s body. Today, Jarvik continues to work on a device that can serve as a permanent replacement organ.
Here’s another one, and this one fascinates me. Sookie Bang was born and raised in South Korea. She graduated from Seoul National University in 1974 and earned a Ph.D. in microbiology from the University of California at Davis in 1981. As a professor and researcher at the South Dakota School of Mines and Technology, her specialty is bioremediation—for example, using bacteria as an ingredient in a sealant to fix cracks caused by weathering and by freezing water that seeps into the concrete outer surfaces of buildings. Bang and her colleagues figured out how to speed up a naturally occurring process in which bacteria extract nitrogen from urea, which produces carbon dioxide and ammonia as byproducts. The CO2 and ammonia then react with water and calcium to form calcium carbonate, the chemical compound that we know as limestone. The patch created by the bacterial process seals the crack from the inside out and integrates with the porous concrete, repairing the crack. In essence, the concrete becomes self-healing.
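In simplified form, the chemistry looks something like this; consider it a sketch of the overall reactions rather than Bang’s exact formulation. The bacteria’s urease enzyme breaks down urea, and the resulting carbonate combines with calcium to precipitate limestone inside the crack:

$$\mathrm{CO(NH_2)_2 + 2\,H_2O \;\longrightarrow\; 2\,NH_4^{+} + CO_3^{2-}} \quad \text{(urea hydrolysis, driven by bacterial urease)}$$

$$\mathrm{Ca^{2+} + CO_3^{2-} \;\longrightarrow\; CaCO_3\!\downarrow} \quad \text{(calcium carbonate, i.e., limestone)}$$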
Another Boomer name you need to know is Dean Kamen, who was born in Long Island, N.Y., in 1951. You may not know who he is, but I guarantee you know at least one of his inventions.
In the early 2000s, Kamen attracted media attention because investors were knocking each other over to be the first to fund “Project Ginger.” The project was highly secretive, but when the veil was finally lifted, the world met the Segway Human Transporter. The device incorporates sophisticated electronics and gyroscopes that allow it to self-balance; it moves, stops, and turns based on subtle changes in the rider’s posture. Today, the Segway’s progeny include the ubiquitous “hover boards” that every kid seems to have. But Kamen’s invention also led to the development of an extraordinary device that has changed the lives of thousands of people: a remarkable wheelchair that, thanks to its gyros, can convert from a standard four-wheel chair to a two-wheel chair, in the process lifting the occupant up to eye level with a standing adult. It can even climb stairs.
But Kamen was an inventor long before he created the Segway. While he was still a college student at Worcester Polytechnic Institute in 1972, he invented a wearable device called the ambulatory infusion pump. It changed the lives of diabetics, freeing them from having to worry about injecting themselves with insulin. The pump did it for them.
But he didn’t stop there. After creating the ambulatory infusion pump, Kamen went after a solution for patients with severe kidney disease who had to travel to dialysis centers for the treatments they needed to survive. He invented a portable machine that allowed patients to give themselves dialysis treatments at home, while sleeping. In 1993, it was named Medical Product of the Year.
The list goes on: flexible foot prostheses, artificial skin grafts, innovative battery designs, and plenty of others, all created by experienced, gifted innovators and inventors—and dare I say it, with a small bit of pride, Baby Boomers.
The truth is, every generation yields its own crop of gifted people who make important contributions to science, engineering, the arts, medicine, and society at-large. But without the contributions of those who came before, nothing we enjoy today would exist. The Boomers stood on the shoulders of giants from the Greatest and Silent Generations, just as Gen-X, the Millennials and Gen-Z stand on Boomer shoulders, and just as the next generations to arrive will stand on theirs. It’s easy to criticize those who came before, but it’s also not much of a stretch to recognize that the current generations of any era wouldn’t be where they are or have what they have without them. So instead of looking for the failures of prior generations, maybe we all need to take a moment to recognize their successes—and how those successes benefit us. Of course, if you still want to blame the Boomers for the Internet, mobile telephony, and the commercial success of the global semiconductor industry that makes literally EVERYTHING work, I guess I’m good with that.
*A note before you begin to read: This is a long post; if you’d rather listen to it, you can find it at the Natural Curiosity Project Podcast.
Part I
LIFE IS VISUAL, so I have an annoying tendency to illustrate everything—either literally, with a contrived graphic or photo, or through words. So: try to imagine a seven-sided polygon, the corners of which are labeled curiosity, knowledge, wisdom, insight, data, memory, and human will. Hovering over it, serving as a sort of conical apex, is time.
Why these eight words? A lifetime of living with them, I suppose. I’m a sucker for curiosity; it drives me, gives my life purpose, and gives me a decent framework for learning and applying what I learn. Knowledge, wisdom, insight, and data are ingredients that arise from curiosity and that create learning. Are they a continuum? Is one required before the next? I think so, but that could just be because of how I define the words. Data, to me, is raw ore, a dimensionless precursor. When analyzed, which means when I consider it from multiple perspectives and differing contexts, it can yield insight—it lets me see beyond the obvious. Insight, then, can become knowledge when applied to real-world challenges, and knowledge, when well cared for and spread across the continuum of a life of learning, becomes wisdom. And all of that yields learning. And memory? Well, keep listening.
Here’s how my model came together and why I wrestle with it.
Imagine an existence where our awareness of ‘the past’ does not exist, because our memory of any action disappears the instant that action takes place. In that world, a reality based on volatile memory, is ‘learning,’ perhaps defined as knowledge retention, possible? If every experience, every gathered bit of knowledge, disappears instantly, how do we create experience that leads to effective, wisdom-driven progress, to better responses the next time the same thing happens? Can there even be a next time in that odd scenario, or is everything that happens to us essentially happening for the first time, every time it happens?
Now, with that in mind, how do we define the act of learning? It’s more than just retention of critical data, the signals delivered via our five senses. If I burn myself by touching a hot stove, I learn not to do it again because I form and retain a cause-effect relationship between the hot stove, the act of touching it, and the pain the action creates. So, is ‘learning’ the process of applying retained memory that has been qualified in some way? After all, not all stoves are hot.
Sometime around 500 BC, the Greek playwright Aeschylus observed that “Memory is the mother of all wisdom.” If that’s the case, who are we if we have no memory? And I’m not just talking about ‘we’ as individuals. How about the retained memory of a group, a community, a society?
Is it our senses that give us the ability to create memory? If I have no senses, then I am not sentient. And if I am not sentient, then I can create no relationship with my environment, and therefore have no way to respond to that environment when it changes around me. And if that happens, am I actually alive? Is this what awareness is, comprehending a relationship between my sense-equipped self and the environment in which I exist? The biologist in me notes that even the simplest creatures on Earth, the single-celled Protozoa and Archaea, learn to respond predictably to differing stimuli.
But I will also observe that while single-celled organisms routinely ‘learn,’ many complex multi-celled organisms choose not to, even though they have the wherewithal to do so. Many of them currently live in Washington, DC. A lifetime of deliberate ignorance is a dangerous thing. Why, beyond the obvious? Because learning is a form of adaptation to a changing environment—call it a software update if you’re more comfortable with that. Would you sleep well at night, knowing that the antivirus software running on your computer is a version from 1988? I didn’t think so. So, why would you deliberately choose not to update your personal operating system, the one that runs in your head? This is a good time to heed the words of Charles Darwin: It is not the strongest that survive, nor the most intelligent, but those that are most adaptable to change. Homo sapiens, consider yourselves placed on-notice.
Part II
RELATED TO THIS CONUNDRUM IS EPISTEMOLOGY—the philosophy that wrestles with the limits of knowledge. Those limits don’t come about because we’re lazy; they come about because of physics.
From the chemistry and physics I studied in college, I learned that the convenient, simple diagram of an atom that began to appear in the 1950s is a myth. Electrons don’t orbit the nucleus of the atom in precise paths, like the moon orbiting the Earth or the Earth orbiting the Sun. They occupy regions determined by how much energy they have, based on their distance from the powerfully attractive nucleus. The closer they are, the more strongly they’re held by the electromagnetic force that binds atoms together. But as atoms get bigger, as they add positively-charged protons and charge-less neutrons in the densely-packed nucleus, and layer upon layer of negatively charged orbiting electrons to balance the nuclear charge, an interesting thing happens. As layers of electrons are added, the strength with which the outermost electrons are held by the nucleus decreases with distance, making them less ‘sticky,’ and the element becomes less stable.
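That pull is just Coulomb’s law at work: the force between the nucleus and an electron falls off with the square of the distance between them, which is why the outermost electrons of a big atom are held so loosely.

$$F \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{q_1\,q_2}{r^2}$$

Double the distance and the force drops to a quarter of what it was.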
This might be a good time to make a visit to the Periodic Table of the Elements. Go pull up a copy and follow along.
Look over there in the bottom right corner. See all those elements with the strange names and big atomic numbers—Americium, Berkelium, Einsteinium, Lawrencium? Those are the so-called transuranium elements, and they’re not known for their stability. If a distant electron is attracted away for whatever reason, it leaves behind an atom with an imbalance, a net positive charge. That’s an unstable ion that wants to get back to a stable state, a tendency governed by the Second Law of Thermodynamics and a process called entropy, which we’ll discuss shortly. That same statistical restlessness sits at the heart of the strange and wonderful field known as quantum mechanics.
This is not a lesson in chemistry or nuclear physics, but it’s important to know that those orbiting electrons are held within what physicists call orbitals, which are statistically-defined energy constructs. We know, from the work done by scientists like Werner Heisenberg, who was a physicist long before he became a drug dealer, that an electron, based on how far it is from the nucleus and therefore how much energy it has, lies somewhere within an orbital. The orbitals, which can take on a variety of three-dimensional shapes that range from a single sphere to multiple pear-shaped spaces to a cluster of balloons, define atomic energy levels and are stacked and interleaved so that they surround the nucleus. The orbital that’s closest to the nucleus is called the 1s orbital, and it’s shaped like a sphere. In the case of Hydrogen, element number one in the Periodic Table, somewhere within that orbital is a single lonely electron. We don’t know precisely where it is within the 1s orbital at any particular moment; we just know that it’s somewhere within that mathematically-defined sphere. This is what the Heisenberg Uncertainty Principle is all about: we can never know both an electron’s exact position and its exact momentum at the same time. And we never will. We just know that statistically, it’s somewhere inside that spherical space.
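Heisenberg compressed that limit into a single inequality: the product of our uncertainties about a particle’s position and its momentum can never shrink below a floor set by Planck’s constant.

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

Squeeze the position uncertainty down and the momentum uncertainty balloons, and vice versa. The fuzziness isn’t a measurement problem; it’s built into nature.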
Which brings us back to epistemology, the field of science (or is it philosophy?) that tells us that we can never know all that there is to know, that there are defined limits to human knowledge. Here’s an example. We know beyond a shadow of a doubt that the very act of observing an electron disturbs it, which means that we can never know what its original trajectory was before we started observing it. The probabilistic behavior we’re left with is described by a complex mathematical formula called Schrödinger’s Equation.
Look it up, study it; there will be a test. The formula, which won its creator, Erwin Schrödinger, the Nobel Prize in 1933, details the statistical behavior of a particle within a defined space, like an energy-bound atomic orbital. It’s considered a fundamental principle of quantum mechanics, the family of physics that Einstein helped launch with his work on the photoelectric effect, and famously never quite trusted. In essence, we don’t know, and can’t know, what the state of a particle is at any given moment, which implies that the particle can exist, at least according to Schrödinger, in two different states, simultaneously. This truth lies at the heart of the new technology called quantum computing. In traditional computing, a bit (Binary Digit) can have one or the other of two states: zero or one. But in quantum computing, we leave bits behind and transact things using qubits (quantum bits), which can be zero, one, or both zero and one at the same time. Smoke ‘em if you got ‘em.
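For the curious (remember, there will be a test), the time-dependent form of the equation says that the wave function Ψ, the mathematical object that encodes those probabilities, evolves according to the system’s energy operator Ĥ:

$$i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) \;=\; \hat{H}\,\Psi(\mathbf{r},t)$$

And the qubit’s “both at once” trick is usually written as a superposition, where the squared magnitudes of the two coefficients give the odds of reading a zero or a one:

$$|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$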
The world isn’t neat and tidy where it matters: it’s sloppy and ill-defined and statistical. As much as the work of Sir Isaac Newton described a physical world defined by clear laws of gravity, and velocity, and acceleration, and processes that follow clearly-defined, predictably linear outcomes, Schrödinger’s, Heisenberg’s, and Einstein’s works say, not so fast. At the atomic level, the world doesn’t work that way.
I know—you’re lighting up those doobies as you read this. But this is the uncertainty, the necessary inviolable unknown that defines science. Let me say that again, because it’s important. Uncertainty Defines Science. It’s the way of the universe. Every scientific field of study that we put energy into, whether it’s chemistry, pharmacology, medicine, geology, engineering, genetics, or a host of others, is defined by the immutable Laws of Physics, which are governed by the necessary epistemological uncertainties laid down by people like Werner Heisenberg and Erwin Schrödinger, and codified by Albert Einstein.
Part III
ONE OF MY FAVORITE T-SHIRTS SAYS,
I READ.
I KNOW SHIT.
I’m no physicist. Not by a long shot. But I do read, I did take physics and chemistry, and I was lucky enough to have gone to Berkeley, where a lot of this Weird Science was pioneered. I took organic chemistry from a guy who was awarded a Nobel Prize, co-discovered ten elements, and had one named after him (Glenn Seaborg), and botany from the guy who worked out how photosynthesis fixes carbon and also had a Nobel Prize (Melvin Calvin). I know shit.
But the most important thing I learned and continue to learn, thanks to those grand masters of knowledge, is that uncertainty governs everything. So today, when I hear people criticizing scientists and science for not being perfect, for sometimes being wrong, for not getting everything right all the time, for not having all the answers, my blood boils, because they’re right, but for the wrong reasons. Science is always wrong—and right. Schrödinger would be pleased with this duality. It’s governed by the same principles that govern everything else in the universe. Science, which includes chemistry, pharmacology, medicine, geology, engineering, genetics, and all the other fields that the wackadoodle pseudo-evangelists so viciously criticized during the pandemic, and now continue to attack, can’t possibly be right all the time because the laws of the universe fundamentally prevent us from knowing everything we need to know to make that happen. Physics doesn’t come to us in a bento box wrapped in a ribbon. Never in the history of science has it ever once claimed to be right. It has only maintained that tomorrow it will be more right than it is today, and even more right the day after that. That’s why scientists live and die by the scientific method, a process that aggressively and deliberately pokes and prods at every result, looking for weaknesses and discrepancies. Is it comfortable for the scientist whose work is being roughed up? Of course not. But it’s part of being a responsible scientist. The goal is not for the scientist to be right; the goal is for the science to be right. There’s a difference, and it matters.
This is science. The professionals who practice it, study it, probe it, spend their careers trying to understand the rules that govern it, don’t work in a world of absolutes that allow them to design buildings that won’t fail and drugs that will work one hundred percent of the time and to offer medical diagnoses that are always right and to predict violent weather with absolute certainty. No: they live and work in a fog of uncertainty, a fuzzy world that comes with no owner’s manual, yet with that truth before them, and accepting the fact that they can never know enough, they do miraculous things. They have taken us to the stars, created extraordinary energy sources, developed mind-numbingly complex genetic treatments and vaccines, and cured disease. They have created vast, seamless, globe-spanning communications systems, the first glimmer of artificial intelligence, and demonstrated beyond doubt that humans play a major role in the fact that our planet is getting warmer. They have identified the things that make us sick, and the things that keep us well. They have helped us define ourselves as a sentient species.
And, they are pilloried by large swaths of the population because they’re not one hundred percent right all the time, an unfair expectation placed on their shoulders by people who have no idea what the rules are under which they work on behalf of all of us.
Here’s the thing, for all of you naysayers and armchair critics and nonbelievers out there: Just because you haven’t taken the time to do a little reading to learn about the science behind the things that you so vociferously criticize and deny, just because you choose deliberate ignorance over an updated mind, doesn’t make the science wrong. It does, however, make you lazy and stupid. I know shit because I read. You don’t know shit because you don’t. Take a lesson from that.
Part IV
THIS ALSO TIES INTO WHAT I BELIEVE to be the most important statement ever uttered by a sentient creature, and it begins at the liminal edges of epistemological thought: I am—the breathtaking moment of self-awareness. Does that happen the instant a switch flips and our senses are activated? If epistemology defines the inviolable limits of human knowledge, then what lies beyond those limits? Is human knowledge impeded at some point by a hard-stop electric fence that prevents us from pushing past the limits? Is there a ‘there be dragons here’ sign on the other side of the fence, prohibiting us from going farther? I don’t think so. For some, that limit is the place where religion and faith take over the human psyche when the only thing that lies beyond our current knowledge is darkness. For others, it stands as a challenge: one more step moves us closer to…what, exactly?
A thinking person will experience a moment of elegance here, as they realize that there is no fundamental conflict between religious faith and hardcore science. The two can easily coexist without conflict. Why? Because uncertainty is alive and well in both. Arthur C. Clarke: Any sufficiently advanced technology is indistinguishable from magic.
Part V
THIS BRINGS ME TO TIME, and why it sits at the apex of my seven-sided cone. Does time as we know it only exist because of recallable human memory? Does our ability to conceive of the future only exist because, thanks to accessible memory and a perception of the difference between a beginning state and an end state, of where we are vs. where we were, we perceive the difference between past and present, and a recognition that the present is the past’s future, but also the future’s past?
Part VI
SPANISH-AMERICAN WRITER AND PHILOSOPHER George Santayana is famous for having observed that ‘those who fail to heed the lessons of history are doomed to repeat them.’ It’s a failing that humans are spectacularly good at, as evidenced by another of Santayana’s aphorisms—that ‘only the dead have seen the end of war.’ I would observe that in the case of the first quote, ‘heed’ means ‘to learn from,’ not simply ‘to notice.’ But history, by definition, means learning from things that took place in the past, which means that if there is no awareness of the past, then learning is not possible. So, history, memory, and learning are, to steal from Douglas Adams, the author of The Hitchhiker’s Guide to the Galaxy, “inextricably intertwingled” (more on that phrase later). And if learning can’t happen, does that then mean that time, as we define it, stops? Does it become dimensionless? Is a timeless system the ultimate form of entropy, the tendency of systems to seek the maximum possible state of disorder, including static knowledge? Time, it seems, implies order, a logical sequence of events that cannot be changed. So, does entropy seek timelessness? Professor Einstein, white courtesy telephone, please.
The Greek word chronos defines time as a physical constant, as in, I only have so much time to get this done. Time is money. Only so much time in a day. 60 seconds per minute, 60 minutes per hour, 24 hours per day. But the Greeks have a second word, kairós, which refers to the quality of time, of making the most of the time you have, of savoring time, of using it to great effect. Chronos, it seems, is a linear and quantitative view of time; kairós is a qualitative version.
When I was a young teenager, I read a lot of science fiction. One story I read, a four-book series by novelist James Blish (who, with his wife, adapted the original Star Trek television episodes into books), is the tale of Earth and its inhabitants in the far distant future. The planet’s natural resources have been depleted by human rapaciousness, so entire cities lift off from Earth using a form of anti-gravity technology called the Dillon-Wagoner graviton polarity generator, or spindizzy for short, and become independent, competing entities floating in space.
In addition to the spindizzy technology, the floating cities have something called a stasis field, within which time does not exist. If someone is in imminent danger, they activate a stasis field that surrounds them, and since time doesn’t exist within the field, whatever or whoever is in it cannot be hurt or changed in any way by forces outside the field. It’s an interesting concept, which brings me to a related topic.
One of my favorite animals, right up there with turtles and frogs, is the water bear, also called a tardigrade (and, charmingly by some, a moss piglet). They live in the microscopically tiny pools of water that collect on the dimpled surfaces of moss leaves, and when viewed under a microscope look for all the world like tiny living gummy bears.
Tardigrades can undergo what is known as cryptobiosis, a physiological process by which the animal can protect itself from extreme conditions that would quickly kill any other organism. Basically, they allow all the water in their tiny bodies to completely evaporate, in the process turning themselves into dry, lifeless little husks called tuns. Water bears have been exposed to the extreme heat of volcanoes, the extreme cold of Antarctica, and intense nuclear radiation inside power plants; they have been placed outside on the front stoop of the International Space Station for days on end, then brought inside, with no apparent ill effects. Despite the research into their ability to survive such lethal environments, we still don’t really know how they do it. Uncertainty.
But maybe I do know. Perhaps they have their own little stasis field that they can turn on and off at will, in the process removing time as a factor in their lives. Time stops, and if life can’t exist without time, then they can’t be dead, can they? They become like Qubits, simultaneously zero and one, or like Schrödinger’s famous cat, simultaneously dead and alive.
Part VII
IN THE HITCHHIKER’S GUIDE TO THE GALAXY, Douglas Adams uses the phrase I mentioned earlier and that I long ago adopted as one of my teaching tropes. It’s a lovely phrase that just rolls off the tongue: “inextricably intertwingled.” It sounds like a wind chime when you say it out loud, and it makes audiences laugh when you use it to describe the interrelatedness of things.
The phrase has been on my mind the last few days, because its meaning keeps peeking out from behind the words of the various things I’ve been reading. Over the last seven days I’ve read a bunch of books from widely different genres—fiction, biography, science fiction, history, philosophy, nature essays, and a few others that are hard to put into definitive buckets.
There are common threads that run through all of the books I read, and not because I choose them as some kind of a confirmationally-biased reading list (how could Loren Eiseley’s The Immense Journey, Arthur C. Clarke’s The Songs of Distant Earth, E. O. Wilson’s Tales from the Ant World, Malcolm Gladwell’s Revenge of the Tipping Point, Richard Feynman’s Surely You’re Joking, Mr. Feynman!, and Studs Terkel’s And They All Sang possibly be related, other than the fact that they’re books?). Nevertheless, I’m fascinated by how weirdly connected they are, despite being so very, very different. Clarke, for example, writes a whole essay in The Songs of Distant Earth about teleology, a term I’ve known forever but had never bothered to look up. It means explaining a phenomenon by its perceived purpose rather than by its cause. For example, in the wilderness, lightning strikes routinely spark forest fires, which burn uncontrolled, in the process cleaning out undergrowth and reducing the large-scale fire hazard while doing very little harm to the living trees, which are protected by their thick bark—unless they’re unhealthy, in which case they burn and fall, opening a hole in the canopy that allows sunlight to filter to the forest floor, feeding the seedlings that fight for their right to survive, leading to a healthier forest. So it would be easy to conclude that lightning exists to burn forests. But that’s a teleological conclusion, one that focuses on purpose rather than cause. Purpose implies intelligent design, which violates the scientific method because it’s subjective and speculative. Remember—there’s no owner’s manual.
The initial cause of lightning is wind. The vertical movement of wind that precedes a thunderstorm causes negatively charged particles to gather near the base of the cloud cover, and positively charged particles to gather near the top, creating an incalculably high energy differential between the two. But nature, as they say, abhors a vacuum, and one of the vacuums it detests is the accumulation of potential energy. Natural systems always seek a state of entropy—the lowest possible energy state, the highest state of disorder. I mentioned this earlier; it’s a physics thing, the Second Law of Thermodynamics. As the opposing charges in the cloud grow (and they are massive—anywhere from 10 to 300 million volts and up to 30,000 amps), their opposite states are inexorably drawn together, like opposing poles of a gigantic magnet (or the positively charged nuclei and negatively charged electrons of an atom), and two things can happen. The energy stored between the “poles” of this gigantic aerial magnet—or, if you prefer, battery—discharges within the cloud, causing what we sometimes call heat lightning, a ripple of intense energy that flashes across the sky. Or, the massive negative charge in the base of the cloud can be attracted to positive charges on the surface of the Earth—tall buildings, antenna towers, trees, the occasional unfortunate person—and lightning happens.
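To get a feel for the energies involved, here’s a rough back-of-envelope in Python using round numbers from the middle of the ranges above; the charge figure is a commonly cited approximation, not a measurement:

```python
# Energy released by a lightning flash: energy = charge * potential difference.
volts = 100e6        # ~100 million volts across the cloud-to-ground gap
coulombs = 15.0      # rough charge transferred by a typical flash (approximate)
energy_joules = volts * coulombs
print(f"{energy_joules:.1e} J")  # ~1.5e9 J, on the order of a gigajoule,
                                 # roughly the chemical energy in a dozen gallons of gasoline
```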
It’s a full-circle entropic event. When a tree is struck and a fire starts, the architectural order that nature has painstakingly put into place in the forest is rent asunder. Weaker trees fall, tearing open windows in the canopy that allow sunlight to strike the forest floor. Beetles and fungi and slugs and mosses and bacteria and nematodes and rotifers consume the fallen trees, reducing them to essential elements that return to the soil and feed the healthy mature trees and the seedlings that now sprout in the beams of sunlight. The seedlings grow toward the sunlight; older trees become unhealthy and fall; order returns. Nature is satisfied. Causation, not purpose. Physics, not intelligent design. Unless, of course, physics is intelligent design. But we don’t know. Uncertainty.
E. O. Wilson spends time in more than one of his books talking about the fact that individuals will typically act selfishly in a social construct, but that groups of individuals in a community will almost always act selflessly, doing what’s right for the group. That, by the way, is the difference between modern, unregulated capitalism and what botany professor Robin Wall Kimmerer calls “the gift economy” in her wonderful little book, The Serviceberry. This is not some left-leaning, unicorns-and-rainbows fantasy: it’s a system in which wealth is not hoarded by individuals, but rather invested in and shared with others, in the confidence that the generosity will circulate back in time, strengthening the network of relationships that societies must have to survive and flourish. Kimmerer cites the story of an anthropologist working with a group of indigenous people who enjoy a particularly successful hunt. The anthropologist is puzzled: the hunters now have a great deal of meat, but nowhere to keep it cold so that it won’t spoil. “Where will you store it to keep it fresh for later?” the anthropologist asks. “I store it in my friends’ bellies,” the man replies, equally puzzled by the question. This society is based on trust, on knowing that the shared meat will be repaid in kind. It is a social structure based on strong bonds—kind of like atoms. Bonds create stability; free, unbonded particles are reactive and unstable.
In fact, many of the science fiction titles I read reflect the same idea: that a society’s advances come about through the application of the common abundance of human knowledge and will. Individuals acting alone rarely get ahead to any significant degree, and if they do, it’s because of an invisible army working behind them. But the society moves ahead as a collective whole, with each member contributing. Will there be those who don’t contribute? Of course. It’s a function of uncertainty and the fact that we can never know with one hundred percent assurance how an individual within a group will behave. There will always be outliers, but their selfish influence is neutralized by the selfless focus of the group. The behavior of the outlier does not define the behavior of the group. ‘One for one and none for all’ has never been a rallying cry.
Part VIII
THIS ESSAY APPEARS TO WANDER, because (1) it wanders and (2) it connects things that don’t seem to be connected at all, but that clearly want to be. Learning doesn’t happen when we focus on the things; it happens when we focus on the connections between the things. The things are data; the connections create insight, which leads to knowledge, wisdom, action, a vector for change. Vector—another physics term. It refers to a quantity that has both direction and magnitude. The most powerful vector of all? Curiosity.
Science is the only tool we have. It’s an imperfect tool, but it gets better every time we use it. Like it or not, we live in a world, in a universe, that is defined by uncertainty. Science is the tool that helps us bound that uncertainty, define its hazy distant edges, make the unclear more clear, every day. Science is the crucible in which human knowledge of all things is forged. It’s only when we embrace that uncertainty, when we accept it as the rule of all things, when we revel in it and allow ourselves to be awed by it—and by the science-based system that allows us to constantly push back the darkness—that we begin to understand. Understand what, you say? Well, that’s the ultimate question, isn’t it?
I’m standing on the front porch because a thunderstorm is passing through, and the sky is as dark and green as the back of a catfish. If there’s a more satisfying experience out there, I honestly don’t know what it is. The hiss of rain, the random chiming of leaves, downspouts, puddles, and flower pots as the raindrops fall, the crackle and crash of thunder—it’s nature’s best symphony. And the light—I’ve always believed that the light during a thunderstorm is something you can taste. It’s more than visible; thunderstorm light glows, from within, and it comes from everywhere and nowhere.
The best part of a thunderstorm, of course, is when it ends—not because it’s over, which I always regret, but because it leaves behind a scent trail, that amazing smell, the breath of the storm, that proves that it’s alive. That smell, which we usually call ozone, isn’t ozone at all, at least not totally. It’s a very different chemical compound that I’ll introduce you to in a minute. But first, because I brought it up, let me tell you a bit about ozone, because it is a pretty important chemical.
Ozone is a weird form of oxygen. Oxygen is normally a diatomic molecule, meaning that two oxygen atoms combine to form the gas that we breathe, O2. Ozone, on the other hand, is O3, a much less stable molecule.
Everybody knows about the ozone layer up there. Well, that layer exists because ultraviolet energy from space strikes the oxygen in the upper atmosphere, changing O2 to O3 and creating a layer or shell of ozone that does a very good job of shielding us from all that UV radiation that would otherwise fry us into little masses of melanoma. At least, it protects us until we do dumb human things, like release chlorofluorocarbons that chemically eat holes in the ozone layer and let all that nasty UV energy through.
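In shorthand, the chemistry looks something like this (a simplified sketch of what atmospheric chemists call the Chapman cycle, where hν is an ultraviolet photon and M is any bystander molecule, usually nitrogen, that carries off the excess energy):

\[ \mathrm{O_2} + h\nu \rightarrow \mathrm{O} + \mathrm{O} \]
\[ \mathrm{O} + \mathrm{O_2} + \mathrm{M} \rightarrow \mathrm{O_3} + \mathrm{M} \]

And the dumb human part works catalytically: UV light knocks a chlorine atom loose from a chlorofluorocarbon, and that one atom destroys ozone over and over, emerging intact each time to do it again:

\[ \mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2} \]
\[ \mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2} \]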
The ozone layer sits roughly 15 to 35 kilometers above the surface of the planet, and in spite of its name, the concentration of ozone up there is only about eight parts-per-million; the rest is just ordinary air. But it’s the oxygen in that air that absorbs ultraviolet energy to become the ozone that protects the planet’s surface from most of the effects of harmful radiation. And while ozone is beneficial up there, it’s not at all beneficial down here on Earth. It’s known to reduce crop yields when there’s too much of it in the air at ground level, and because it’s such a powerful oxidant, it can be extremely irritating to noses, throats, and lungs. It can also crack rubber and plastics, and in at least one study, it’s been shown to worsen arterial plaque, the fatty buildup that can lead to heart attack and stroke. Talk about a love-hate relationship.
So, let’s talk about what we were originally discussing before I diverted us—and that was the wonderful smell that takes over everything after a rainstorm, that smell that makes us inhale deeply and feel good about life in general.
As it turns out, that smell doesn’t come from ozone—at least not exclusively. Ozone may be in the air if there was lightning during the rainstorm, but the chemical you’re mostly smelling is called geosmin. You smell it after a rain, or in wet dirt that you’re digging up in the garden. The smell is so recognizable, and so wonderful, that it even has a name—petrichor. It comes from two Greek words: petra, meaning stone, and ichor, the ethereal fluid that flows in the veins of the gods.
So, where does geosmin come from? Well, it turns out that it’s created as a by-product when soil microbes—actinomycete bacteria, most famously the streptomycetes, along with cyanobacteria—have their way with organic material. As they break it down, geosmin is released. So, it’s naturally occurring, and in fact contributes to the flavor of beets, spinach, lettuce, mushrooms, even that wonderful, earthy taste of catfish. Sometimes it can be overpowering when too much of it gets into water supplies, and while it isn’t harmful, it can temporarily give water an earthy, musty taste.
Here’s one last, interesting thing about geosmin and its petrichor aroma. Human noses are extremely sensitive to the smell of geosmin—more sensitive, in fact, than to just about any other compound. We can detect it in concentrations of five parts per trillion. To put that into perspective, for the human nose to detect methanol, a fairly pungent alcohol, it has to be present in concentrations of about a billion parts-per-trillion. That’s quite a difference. And why are we so amazingly sensitive to it? Well, some scientists believe that that sensitivity has been genetically selected, because it allowed our distant ancestors to find water, even in the driest places on earth. No wonder it smells so good—it helped keep us alive.
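If you like the arithmetic spelled out, take those two threshold figures at face value and divide:

\[ \frac{10^{9}\ \text{ppt (methanol)}}{5\ \text{ppt (geosmin)}} = 2 \times 10^{8} \]

Our noses are roughly two hundred million times more sensitive to geosmin than to methanol.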
I’m a writer, which means that I’m also a serious reader. I like to say that writing is my craft; reading is my gym. And one author whose books have meant a lot to me—in fact, I’d consider him a mentor, even though we’ve never met—is a guy named John McPhee. If his books are any indication, he’s a ferociously curious guy. They all fall into the genre that I love, which is called creative nonfiction. It includes writers like William Least Heat-Moon, Bill Bryson, Annie Dillard, and of course, John McPhee. Creative nonfiction means writing about subjects that are real, but that incorporate storytelling into the narrative. In creative nonfiction, adjectives are legal.
I first ran across McPhee’s work when I took a writing workshop back in the 90s from William Least Heat-Moon, the inspiring author of one of my all-time favorite books, Blue Highways. One of John McPhee’s books, Coming Into the Country, was required reading for the workshop. It’s about homesteaders in Alaska, back in the days when the Alaska government would give land to people in exchange for their agreement to homestead it. Boring, you say? Well, consider the story of the guy who drove an old school bus up there. When he got reasonably close to the land he had acquired as part of his homesteading agreement, he parked the school bus, took a cutting torch to it, and cut off the top. He then turned the former top upside down like an overturned turtle’s shell, and drove the school bus-turned-convertible onto it. Once there, he welded the two together, attached a long shaft with a propeller on one end to the drive shaft of the school bus, shoved his contraption into the river, started the engine, and motored a few hundred miles to his newly acquired homestead. See what I mean? Story. It’s everything.
McPhee has written about a breathtaking range of topics. He wrote Annals of the Former World, in which he took a series of road trips across the United States with geologists, looking at freeway roadcuts to understand the dynamic geology of North America, and in the process writing a magnificent book about the geology of the continent. He wrote The Pine Barrens, the story of the great pine forests that cover much of southern New Jersey, and the people who live there. He wrote Uncommon Carriers, about the world of cargo carriers—all kinds—that form the basis of the global supply chain. He wrote Oranges, about the business of growing and selling them in Florida. He wrote Encounters with the Archdruid, about the interactions between conservationists and those they see as the enemy. And he wrote The Curve of Binding Energy, the story of Theodore Taylor, a nuclear weapons designer who became an anti-nuclear activist.
By the way, here’s a quote from Annals of the Former World (a book, it’s worth noting, that is two-and-a-half inches thick) that shows what kind of a writer McPhee is: “If by some fiat I had to restrict all this writing to one sentence, this is the one I would choose: The summit of Mount Everest is marine limestone.” Think about that.
So far, John McPhee has written more than 30 books, and I’ve read them all. I can honestly say that each one has made me a measurably better writer and thinker. But the book that really stuck with me, more than any of the others, is called The Control of Nature. That book has been in my head a lot lately as I watch what’s going on in California, where heavy rains have brought flooding and destruction, and in the country and the world in general, as climate change has its way with us.
The Control of Nature is divided into three sections: ‘Atchafalaya’; ‘Cooling the Lava’; and ‘Los Angeles Against the Mountains’. Each section tells a story of human hubris, of our largely futile efforts to make nature do something that nature doesn’t want to do—like keeping the Mississippi River from changing course, or trying to redirect lava flows in places like Hawaii and Iceland away from population centers (Iceland pumped cold seawater onto one of its flows), or protecting Los Angeles infrastructure from flood damage by building flood-control channels, like the concrete-lined LA River. How’s that working out?
Some of you may remember a quote that I toss out a lot. It’s from Loren Eiseley, another of my favorite writers. Back in the 60s, Loren said, “When man becomes greater than nature, nature, which created us, will respond.” Well, she’s responding. And one of the lessons we can choose to learn from her response is that this is not a time for head-to-head combat. I used to tell my SCUBA diving students that it doesn’t matter how strong a swimmer or how good a diver you are: the ocean is always stronger. The ocean will win, every time. So don’t even try. Discretion is the better part of valor, and to ignore that fact can be fatal.
As I said, this is not a time for head-to-head combat. Nature vs. Humanity cannot be a boxing match, because the outcome is predetermined, whether we like it or not. News flash: We don’t win this one. This is more a time for martial arts, in which we use our opponent’s weight and strength to work in our favor. Nature is telling us what to do, every day. We just seem to have a problem listening. ‘You’re not the boss of me,’ we say. ‘No, actually, you have that backward,’ nature says. ‘Here—let me demonstrate.’
The other flaw in the logic is that we have this tendency to think in terms of ‘us vs. nature,’ of ‘humans vs. the natural world,’ when in fact, we’re as much a part of the natural world as blue whales and chickadees and earthworms and slime molds. We just don’t act like it. By viewing ourselves as something apart from nature, as something better than or superior to nature, we invoke Loren Eiseley again. Nature is responding to our abuse, to our attempt to dominate, and her response is swift, sure, and painful.
So, what’s the alternative? The alternative is to shift our thinking from ‘us vs. nature’ to ‘us as an integral part of nature.’ Nice words. But, what do they mean? How do they become real, or actionable, as people like to say in the business world?
The answer is simpler than most people realize, although it requires deliberate action. There’s that word again—deliberate. The answer isn’t one great, big thing, because if that were the case, nothing would ever change. Here’s an example for the techies. Which is more powerful: a single mainframe computer, or hundreds of personal computers and servers networked together? The answer, of course, is the latter. We’re not talking about computers here, though; we’re talking about one-person efforts on behalf of the environment of which we are a part, efforts that, in aggregate, add up to enormously powerful results. The whole is greater than the sum of its parts. For example, if you live in a house, you probably have a yard, which means that you probably have grass, and shrubs, and trees, and flowering plants, and other things to make it look good. The problem is that most of those are non-native, which means that they’re not always good for local pollinators, like bees and moths and butterflies and even spiders, or for other local wildlife. But if each of us set aside an area in the back corner of the yard the size of a typical walk-in closet, say, eight feet by ten feet, that would be eighty square feet allowed to grow wild with native plants, which provide habitat, including food, for local pollinators. I guarantee that if you go down to your local nursery or Audubon Center, you can buy a shaker bottle full of native plant seeds to shake over your designated area.
Here’s another one. We often use broad-spectrum insecticides to get rid of insect pests, which they do very well. But those neonicotinoid-based compounds are indiscriminate—they also kill beneficial insects like bees, butterflies, and moths, along with spiders, birds, reptiles, and amphibians, and potentially humans, if they leach into the water supply—and they do. So, why not switch to environmentally friendly compounds? They’re out there, and yes, they may cost a little bit more, but not enough to be a showstopper, especially when you consider the alternative. I don’t want to be yet another alarmist here—there are more than enough of those already—but consider this: pollinators aren’t a nice-to-have thing. Bees, moths, butterflies, spiders, and even some birds move pollen from flower to flower, a process that’s required for the flower to give rise to fruit. No bees, no pollination. No pollination, no fertilization. No fertilization, no fruits or vegetables. So think twice, please, about using that insecticide.
Other things? There are lots of them. Buy soaps and detergents in bulk, and refill the same bottle over and over, to reduce plastic consumption. Buy one of those showerheads that let you turn the water down to a warm trickle when you don’t need the full force of the blast. Even an efficient showerhead puts out about two to two-and-a-half gallons of water per minute, and over the course of a year of showering, that really adds up; any effort to conserve falls on the correct side of the environmental balance sheet. You don’t have to turn the shower off; just turn it down. It makes a huge difference.
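To put a rough number on it (assuming, purely for illustration, an eight-minute shower every day at two-and-a-half gallons per minute):

\[ 2.5\ \tfrac{\text{gal}}{\text{min}} \times 8\ \tfrac{\text{min}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \approx 7{,}300\ \tfrac{\text{gal}}{\text{year}} \]

Turn the blast down to a trickle for even half of each shower and you save a couple of thousand gallons a year, per person, without giving anything up.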
What else? Set the thermostat in winter one degree cooler and buy a sweater or that cool hoodie you’ve been jonesing for. There’s your excuse! Think before you get in the car to run that errand. Are you close enough to walk instead? I do it every day, a few miles each way, and I feel so much better for it.
Another thing you can do is buy as much locally produced food as you can. I’m about to write a whole series of essays on the role that technology can play in helping the environment, but for now, just consider this. California can no longer feed the nation. Its deep aquifers have been depleted to the point that the ground in the Central Valley is measurably sinking, and the drought is forcing farmers to uproot fruit and nut trees and abandon many crops because of the great volumes of water they consume—water that’s no longer available, or, where it is, too salty to use. And even when California CAN ship produce across the country, we know that doing so takes a toll on the environment because of the trucks and planes required, and freshness is a concern. We also know that there have been outbreaks of disease—salmonella and listeria—associated with large-scale farming.
Local produce, on the other hand, is much fresher, it tastes better, it’s safer, and it supports a local farmer. And yes, you’re probably going to pay a little more, but how much is your health worth?
I’m not channeling Chicken Little here. The sky isn’t falling, but it’s a lot lower than it used to be. And before the naysayers climb all over me: yes, I know that some of the current climate change effects we’re experiencing are happening as a matter of the natural course of things. But I also know, because the science proves it, that we’re doing a lot of things that are making it worse, things that, through minor but deliberate efforts, we could change without a whole lot of personal impact. There’s that ‘deliberate’ word again. Talking, wringing our hands, putting the ‘save the bees’ bumper sticker on the car, wearing the ‘May the Forest Be With You’ T-shirt—those are all fine, but they aren’t action. A bit more than minimal effort, applied deliberately, would go a very long way.
In other episodes, and in my leadership workshops, I often talk about the danger and ineffectiveness of slogan leadership—you know, putting up those motivational posters that show a crew of people on a misty river at sunrise, in a rowing scull, with the word ‘teamwork’ across the bottom. Or a person standing on top of a mountain, arms raised in celebration, silhouetted against the sunset, with the word ‘commitment’ across the bottom of the poster. That’s slogan leadership, and while the pictures are pretty, it’s a form of responsibility abdication. So, let’s not abdicate—let’s do. It shows the other corners of the natural world that we’re willing to make an effort to play well with others, and it sends the right message to our kids and grandkids.
We can’t control nature, but we can harness her awesome power to help clean up our act, like a martial arts master does against a stronger opponent. As someone who spends an awful lot of time in the natural world, I’d much rather have nature as my ally than my enemy. It’s a choice. And it’s our move.