The Future Watch
Ideas for Utopia:
Despite all our problems, humans have never been better off than they are today. Life expectancies are longer, and a greater percentage of people have access to food, water, education, entertainment, news, and shelter. Free, democratic, capitalist societies have made these advancements possible by encouraging innovation and growth. As a result, these ideas have spread, and more and more countries are adopting them. It's tempting to imagine that things will continue along their current trend, and that the world will get better and better for all of mankind. That said, it's fallacious to imagine that trends in human society are permanent features, and this website demonstrates that there are many possible (and perhaps likely) disasters that could interrupt this trend.
Given the possible catastrophes that lie ahead, and the numerous possible negative effects of advancing technology, it's easy to believe that we are currently living near the end of the golden age of mankind. It's important to remember, however, that there is still reason to hope, and that a good future is still a possibility. This page describes possible positive futures for mankind. As always, we appreciate any thoughts or suggestions from our readers. We would also like to link each idea to its own page that describes it in detail and explains what steps humanity can take to make that future more likely. If you have given any of these ideas a great deal of thought and would like to contribute, please write up an analysis of the idea and submit it to firstname.lastname@example.org.
Our Machines Become Benevolent Dictators: If computer technology continues advancing at its current rapid rate, computers will end up being much smarter than humans. It's hard to predict what will happen if this situation arises, but it's very possible that, with or without our knowing it, our computers will make all important decisions. It's easy to imagine how this could be a bad thing. But if we program the original smart computers wisely, and we get extremely lucky, then our computers might make much better decisions for us than we ourselves could. Computers might become so much smarter than us that they are able to manipulate events from the background - their actions unnoticed by most people. They could leave humans free to live and work - intervening only to nudge us toward solutions to urgent problems. The computers might be able to guide us past the catastrophes that could befall us while preserving much of the structure of a free society.
Our Successor Species is Mostly Human: Genetic engineering may allow us to produce "designer babies" with whatever traits we desire. If this becomes possible, then it probably can't be prevented by legislation - some doting parents will do anything to give their child an advantage. As a consequence, we will probably quickly create children who are intellectually and physically superior to all existing humans. Furthermore, we will probably be able to tinker with their emotional makeup, so they may not even have normal human emotions. If we go down this path, we will want to ensure that our successor species is compassionate as well as capable. With the right choices, our successor species might be more human than not - and much better able to avoid the catastrophes that may afflict an Earth run by humans. Furthermore, if the successor race is compassionate, the remaining humans might find that their lives improve under its rule.
There Are Hidden Limits to Technological Advancement: We have enjoyed a couple of centuries of remarkably rapid technological advancement. As a result, we frequently assume that this advancement will inevitably continue. That isn't certain, however. We may run into unforeseen physical laws, or unforeseen consequences of already known physical laws, that limit advancement in any field. It's possible that the human brain is about as smart as anything can be, and that building a bigger machine introduces problems that keep the machine from getting much smarter. Genetic engineering may prove so incredibly complicated that it's impossible to make more than minor changes. Many other fields may similarly reach an endpoint. If such an endpoint occurs at just the right time in each field (after we have solved the problems we need to solve - for example, after we have protected ourselves against bioweapons - but before we create more problems - for example, before we build nanotech weapons), we might end up in a relatively utopian world - one where technology has advanced to the point that we can all have food, water, entertainment, education, news, and shelter, but has not caused any extraordinary catastrophes. Of course, the odds of such perfect limits being written into the physical laws of the universe aren't all that high.
We All Become Virtual Reality Addicts: Given the number of hours people spend watching TV, it's obvious that TV is addictive. Virtual reality promises to be a much more realistic, interactive experience, and if virtual reality seems as real as real life, and is much more interesting and fun, then it's quite possible that a very large percentage of people could get hooked on it. It's hard to say whether this would be a good or bad thing, which is why this scenario appears on the dystopia page as well. That said, if everyone had their physical needs taken care of, and they spent 24 hours a day happily hooked into a virtual world that was perfectly designed to give them the most satisfying long-term experience possible, that would certainly have some good points. Furthermore, people could be prevented from committing violence against each other, kids wouldn't be abused, wars wouldn't occur, etc.