This quick post resulted from some rumination last night during an hour spent with my new friend, who insists on going by the pseudo-moniker R. Osterling Pessoa.
The chat was sparked by the big new financial boost of US$100 million to the search for extraterrestrial intelligence (SETI), injected by Russian billionaire Yuri Milner. (Astrophysicist Stephen Hawking made the actual announcement, presumably because he is a globally recognisable personality in addition to being a leading scientist.) The massive expansion of the SETI program that the new funding makes possible – it will be used to buy time at two giant radio telescope facilities in the US and Australia – means we are moving closer than ever before to picking up intelligent signals from elsewhere in the universe, if they exist.
Our conversation veered towards the possibility that Milner might have an actual business interest in positioning himself as the first tycoon to potentially make contact with ET businesspeople, rather than merely being a nostalgic science philanthropist (he has admitted to being a failed astrophysicist). He might even be starting a trend, ROP suggested only half-jokingly.
Discounting the Fermi Paradox – which reasons from cosmic scale, cosmic timespans, and the emergence of intelligence on Earth to conclude that if extraterrestrial intelligence exists elsewhere in the universe it should already be here (and perhaps it is) – Milner, Hawking, and like-minded leading lights appear sanguine about the possibility of making interstellar contact.
Such contact would be the biggest civilisational acceleration in history (or even pre-history). Cosmic aliens, especially intelligent ones, don’t even need to contact us – mere proof of their presence would be enough to drive an unprecedented race to be the first nation or corporation to decipher their communications (assuming they are indecipherable when first encountered), and to move towards making further contact if deemed safe. Communicating meaningfully, however, would require formidable technological smarts: we cannot even ‘communicate’ meaningfully with our own ancestors – the Minoans of ancient Crete, whose Linear A script remains undeciphered, or the Indus Valley civilisation, whose Harappan script likewise defies decipherment – let alone with contemporary fellow large-brained mammals who appear to have elementary ‘languages’.
Evidence of unintelligent life, even microbial life, would still be a business propellant in multiple ways, though not quite as exciting a prospect as the discovery of intelligent life on other celestial bodies (and not the focus of SETI programs, by definition). Especially if such evidence is found within the solar system, nation-states might soon be investing heavily in research and infrastructure to work out how to reap economic benefits from the discovery. This would have a cascading effect across many areas of the global economy – on a scale far larger than the space race that ran for roughly 25 years from the mid-1950s – by throwing up new technologies, materials, processes and the like that could find spinoff uses elsewhere.
The only other potential event comparable in the scale of its impact would be the creation of rapidly self-improving artificial intelligence on Earth.
Of course, as many commentators (including Prof Hawking, and Elon Musk of Tesla) have made us aware in recent years, both these possible events – contact with extraterrestrial intelligence, or the creation of AI on Earth – carry extinction risks for our species. The history of European contact with other cultures in the Americas, Australia and elsewhere holds a potent lesson for the first eventuality. The second is comparable in some ways to the emergence of a ‘superintelligence’ here on Earth when Homo sapiens appeared: after aeons of blundering along relatively slowly, we began blundering along much faster over the past 200 years or so, scything through most other large life-forms, with the consequence that we are now in the thick of what is being described as an Anthropocene mass-extinction event. Why? Our hunger for resources to further our own ‘ends’. Any extraterrestrial intelligence, or Earth-bound AI superintelligence, may well act on a similar hunger – with similar consequences.
What we are betting on, it seems, is that it won’t. (See this post, especially point 8.)
That’s the risk-taking instinct built into our DNA – the instinct that, in the dim dawn of our species, led some of our ancestors to gamble on taming fire, move into dark dens for safety, and ride other animals to extend their hunting range or escape from predators.
Of course, as ROP ribs me again, there will also be the opportunity for MBA courses with the title “Developing an Interstellar Mindset”, to replace the “Global Mindset” courses that are now all the rage.
(ROP has asked me to write a longer post, to capture more of our chat. May do so at a later date.)