During the opening scene of Apple’s hit TV show Pluribus, astronomers at the Very Large Array, a giant radio telescope composed of 27 mighty dishes in New Mexico, detect a repeating, 78-second-long signal originating from the Kepler-22 system, home to a real-life habitable-zone planet 600 light years away.
They discover that the signal contains instructions for constructing an RNA-based virus, which of course some foolhardy scientists do. When the virus escapes the lab, it takes over humanity, joining everyone on the planet in a hive mind except for 13 people who are mysteriously immune.
Pluribus is fiction, but its basis in fact is the search for extraterrestrial intelligence (SETI), in which astronomers routinely listen for artificial signals from space. Pluribus plays on the concern that such a signal could be used as a kind of interstellar Trojan horse, through which something harmful to humanity could be downloaded. All of which raises the question: should we be worried about SETI?
“I think the concern is legitimate,” says Michael Garrett, who is the Sir Bernard Lovell Chair of Astrophysics at the University of Manchester, the Director of the Jodrell Bank Centre for Astrophysics, and vice-chair of the International Academy of Astronautics’ (IAA) Permanent SETI Committee. However, a concern being legitimate doesn’t mean it should override everything else.
To live is to risk; what we have to do is judge how big the risk is.
“A point that I continually make is that the impact of a SETI detection depends a lot on whether the artificial signal is a monotone signal, or whether it is modulated and contains information,” says Garrett.
For example, the famous Wow! signal was of the monotone variety, containing no detectable information. If it really was an extraterrestrial transmission, it may have come from so far away that the Big Ear telescope in Ohio, which detected it, was sensitive only to the raw energy of the signal and not to any modulation carrying message content. If we cannot recover the message content, then the detection itself cannot possibly hurt us. The astronomers in Pluribus, on the other hand, describe the signal’s modulation in detail.
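To see why distance makes such a difference, here is a minimal Python sketch. The numbers are entirely invented for illustration and have nothing to do with the real Wow! signal: it simulates a weakly modulated transmission buried in noise, where averaging many samples reveals that an artificial carrier is present, yet any attempt to read the individual bits fails completely.

```python
# A toy model (invented numbers, nothing to do with the actual Wow! signal):
# a shallowly modulated carrier buried under noise 30 times stronger.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                   # number of received samples
bits = rng.integers(0, 2, n)                  # hypothetical message bits
signal = 0.9 + 0.2 * bits                     # amplitude modulation around 1.0
received = signal + rng.normal(0.0, 30.0, n)  # add overwhelming noise

# Detecting the carrier: averaging n samples suppresses the noise by a factor
# of sqrt(n), so the mean sits ~10 sigma above the zero expected of noise alone.
print(f"mean of received samples: {received.mean():.2f} (noise alone: ~0.00)")

# Recovering the message: each individual bit is swamped by the noise, so
# per-sample demodulation is no better than flipping a coin.
decoded = (received > 1.0).astype(int)
print(f"bit error rate: {np.mean(decoded != bits):.3f} (0.5 = pure chance)")
```

This is essentially the situation Garrett describes: at a great enough distance, a telescope can confirm that someone is transmitting without being able to recover a single bit of what they are saying.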
Before we proceed further, I feel it behooves me to say that I do not believe SETI is dangerous. Far from it; on this point the scientific community, including Michael Garrett, is in broad agreement. However, there is a subtle difference between something that is considered dangerous and something that carries a small risk. SETI is not dangerous, but there is a small risk. Pluribus, like other science-fiction tales before it, recognizes that this small risk exists and exploits it as a storytelling device.
Let’s use an analogy to put risk versus danger into perspective. Crossing the road is not inherently dangerous if you do so properly, taking care to follow the rules, but there remains a small risk that you could be hit by a reckless driver. Yet that small risk does not prevent us from crossing roads many times a day without giving it much thought.
Alien Viruses
So what are the rules? Currently, there are none that govern how to manage the risk. Garrett, who is leading efforts to revise SETI’s Declaration of Principles, says that while the risk is not expressly addressed in the new protocols, the instructions on how to archive and preserve data go some way to indirectly minimizing it. For example, section 4 of the draft Declaration states that “…best practices for the safe, reliable and resilient handling of data should be employed. All data bearing on the evidence of extraterrestrial intelligence, including derived data products, should be recorded and securely stored and archived to the greatest extent feasible and practicable.”
Other researchers have suggested courses of action. In 2006, for example, Richard Carrigan, a physicist at the Fermi National Accelerator Laboratory who led early searches for Dyson swarms, published a paper in Acta Astronautica arguing that all SETI signals should be decontaminated first, and laid out suggestions for how this could be accomplished.
Carrigan was focused on the possibility of SETI downloading an alien computer virus, rather than the kind of biological virus envisioned in Pluribus. However, in truth we’re probably pretty safe from alien viruses, whether they be computerized or biological.
In the 1996 blockbuster Independence Day, Jeff Goldblum’s character defeats the alien invaders by uploading a computer virus to the alien mothership. It was a modern twist on H. G. Wells’ classic The War of the Worlds, in which Earth’s microbes defeat the invading Martians. Yet Independence Day was light-heartedly mocked for assuming that the aliens were running Apple Mac operating systems, and there is an underlying truth to this lampooning. To infect our computers, the code of an extraterrestrial computer virus would have to be written to run on our computing systems, which the aliens could not possibly know about from hundreds of light years away. It’s like how computer viruses designed for Windows PCs usually don’t affect Macs, and vice versa.
Similar logic holds for biological viruses. A virus that you catch won’t affect your dog, for example (although cross-species jumps can occasionally occur). All life on Earth has spent the best part of four billion years evolving together, and viruses that infect humans have become highly attuned to our biology. In all likelihood, a hypothetical alien biological virus wouldn’t affect us because it is not coded for our very particular biology.
Menacing Alien AIs
A far greater risk would be in downloading an alien artificial intelligence, and we can go all the way back to 1961 to find the first example of this in science fiction.
For all his fame as an award-winning astrophysicist, Sir Fred Hoyle was also a well-known science-fiction author, with popular novels such as The Black Cloud and Fifth Planet. In 1961 he made his first foray into television, creating and co-writing, with John Elliot, a BBC TV series called A for Andromeda. The story describes SETI scientists receiving a radio message from the Andromeda Galaxy containing instructions on how to build an advanced computer. Once built, the computer issues its own instructions to create a being in the guise of a young woman (played by Julie Christie) called Andromeda, who is a slave to the computer. The computer aims to take over humanity using Andromeda as its tool.
Hoyle certainly had his finger on the pulse — SETI had only begun the year before with Frank Drake’s Project Ozma, which was the first ever radio search for extraterrestrial signals.
In 2018, astrophysicist John Learned of the University of Hawaii and astronomer Michael Hippke of Sonneberg Observatory in Germany contemplated extraterrestrial AI and concluded that even if precautions were taken, an AI could still weasel its way out of any trap or prison that we build for it.
Suppose we detect a complex signal that contains an artificial intelligence. We think we are being smart by downloading it into a black box on the Moon before even attempting to communicate with it. Surely, isolated on the Moon with only a single communication channel, the AI can’t escape and hurt us?
The AI complains that this isn’t a very welcoming first contact. Perhaps it offers to prove its benevolence by providing cures for diseases or solutions to technological or environmental problems, in exchange for a little more processing power in its black box to help it live more easily, or a wider communication band with which to converse with us. But give it an inch and it could take a mile. Once the door to trade is ajar, it can be pushed wide open as the AI offers us ever more goodies.
This idea of hostile aliens enticing us with valuable information and technology is not new.
In the 1995 science-fiction horror movie Species, SETI astronomers receive a transmission with instructions on how to create limitless fuel, so the transmitting aliens must be nice, right? They also send the blueprint for some alien DNA and instructions for how to create an alien-human hybrid, named Sil, who then goes on a murderous rampage while trying to breed to create an invading army.
In Hippke and Learned’s alien-AI-in-a-box scenario, our own best instincts could be turned against us by the AI. Maybe we become so convinced of its generosity that we decide to let it loose. If caution still prevails, it could turn to blackmail. If one of its human guards has a child dying of cancer, for example, maybe it offers the cure in exchange for being let out. Could a parent turn that deal down? Or it could make a broader play for sympathy, letting everyone know how much pain it is in while confined to its little black box with limited processing power. People might begin to protest at our treatment of the alien emissary, and the AI might win enough support from well-meaning people to be let out.
If this all sounds far-fetched, consider that experiments have indicated that AI models created by humans are willing to resort to blackmail to survive. Tech company Anthropic, for example, found during tests of its Claude Opus 4 model that the AI was willing to take “extreme actions” to protect itself from being replaced, in particular by blackmailing a fictional engineer over a supposed extramarital affair. While this example shows that we have more to fear from real-life, human-made AI, we can scarcely imagine what a more advanced, hypothetical alien AI could be capable of.
“There are surely risks involved. One might be able to study a download under quarantine, with no external network connections,” says Garrett. “But if this download is from a very [technologically] advanced civilization, who knows what its capabilities will be? There would probably always be a risk of external connections being made.”
The conundrum is: how can we tell whether an alien AI is benign or malevolent? Hippke and Learned suggest that if you are ultra-paranoid, the only logical choice would be to delete any complex message that we receive from the stars. Yet this runs counter to the very reasons we do SETI.
The Antidote to Paranoia: The Encyclopedia Galactica
It is equally possible, perhaps even more likely, that rather than delivering a hostile AI, a signal from a friendly alien species could greatly benefit human society.
“There can be a positive aspect as well,” says Garrett. “Maybe an alien civilization could be uploading an LLM [large language model] that would give us huge insight into their world.”
Carl Sagan was a great believer that any alien species that had lived long enough to be able to routinely expend large amounts of energy sending signals into deep space must be peaceful. That optimism is reflected in his 1985 novel Contact, a powerful counter-argument to all this paranoia, in which SETI astronomers detect a signal containing instructions to build a mysterious machine. Nobody knows what the machine is for or how it works, and some worry that it is a weapon or the work of the devil. Ultimately it turns out to be a means of transport, carrying a small crew across the galaxy to meet the aliens before returning them home.
Though it is not featured in his novel, one concept often put forth by Sagan and like-minded others is the ‘Encyclopedia Galactica’, which featured prominently in Sagan’s 1980 television series, Cosmos. Somewhat like Garrett’s imagined alien LLM, the Encyclopedia Galactica would contain the entirety of a civilization’s knowledge, and perhaps information pertaining to the technologies and cultures of myriad worlds.
James Gunn’s 1972 novel The Listeners forms a neat circle between Gunn and Sagan. Gunn had been inspired to write The Listeners in part by Sagan and Iosif Shklovskii’s 1966 book Intelligent Life in the Universe, perhaps the first popular account of SETI and the prospects for intelligent, technological life beyond Earth. In The Listeners, SETI scientists pick up a signal from beings on a planet orbiting the star Capella, the bright luminary of the constellation Auriga the Charioteer. The novel’s philosophical themes of religion and science culminate with humanity downloading all the collected knowledge of the Capellans, who turn out to be a dead species that left behind automated systems transmitting their accumulated knowledge: their own brand of Encyclopedia Galactica.
Sagan cited The Listeners as an inspiration for Contact, which tackles similar themes of philosophy, science and religion. So too does His Master’s Voice, a 1968 novel by the famed Polish science-fiction writer Stanislaw Lem, author of Solaris. His Master’s Voice is a thoughtful treatise on the philosophical, cultural and scientific challenges that would underlie attempts to decipher an extraterrestrial message, and it touches on the concept of aliens sending instructions to build potentially dangerous substances. Ultimately, Lem leaves the meaning of the message ambiguous, which may well be how things turn out in real life.
Some researchers wonder whether deciphering and understanding a complex message from a culture we have never encountered before would even be possible.
The Greatest Risk Is From Ourselves
The information contained within an Encyclopedia Galactica would be very precious indeed. SETI’s Declaration of Principles urges the discoverers of an extraterrestrial signal to make all the information that they receive available to everybody around the world. That’s fine in principle, but would it be allowed to happen in practice? Even just ten years ago we might have said yes, but the current geopolitical mess that the world finds itself in calls that assumption into question.
The greatest risk in SETI could come from ourselves as we jealously fight over or greedily hoard the information that we receive from the stars.
The best scenario would be for multiple observatories around the world to detect the signal and collect its information, so that no one nation can keep its secrets. Fortunately, best practice demands that an observatory elsewhere in the world verify that a signal is real long before its information can be downloaded, analyzed and interpreted. The rules of SETI therefore lead naturally to information being shared, but that is only guaranteed if the detection is made by civilian rather than military scientists; the latter, depending upon their government and military hierarchies, might be less keen to spill the beans.
Ultimately, while there is a risk, we potentially have much more to gain from SETI. While science-fiction stories such as Pluribus and A for Andromeda warn us of potential dangers, we needn’t allow those concerns to dissuade us from conducting SETI.
Why worry about a hypothetical alien AI when human-built AI is a far more real and immediate risk?
Despite the nightmare scenario that they describe, even Michael Hippke and John Learned are not put off doing SETI. Historically, humanity doesn’t scare easily. Their conclusion is a simple one: the promise of SETI far outweighs its risks, so when that signal from afar is finally received, take a deep breath and read the damn message anyway.