To All Geeks, Hackers And Sci-fi Enthusiasts: You Get What You Pay For
In the last few months I’ve spent about $100 on PC games and about $40 on sci-fi books. In the same period, I’ve spent exactly $0 on research that could actually bring about the future envisioned by those games and books. What about you?
In a recent article, IO9 paints a bleak picture of our future – no significant life extension within this century, advanced brain-computer interfaces forever out of reach, grim stagnation or only snail-like advancement in many fields. The fact that technological progress often takes longer than expected may cause some disillusionment, but do we really have to wait a hundred years? Do we even have a choice in the matter? After all, most of us geeks lusting after robot servants and night-vision contact lenses don’t really have the skills to do the research, or the power to decide who gets government funding. Right?
As you might have guessed from the first paragraph of this post, there is at least one age-old, proven way to speed up research and development. Money. Or, as NextBigFuture eloquently put it, “you get what you pay for.”
People should consider diverting the $100-150 per year they spend on science fiction movies, DVDs, books, toys and games towards actual scientific attempts at life extension and molecular nanotechnology. This does not include the additional average of $60-100 per person spent on cosmetic surgery, vitamins and dietary supplements. Why settle for imagination, illusion and fake procedures instead of investing in attempts at real solutions?
If you pay for science fiction you don’t get a hovercar. You get more and better science fiction. Of course, real-world economics isn’t quite that simple – for example, increased demand for better graphics in computer games stimulates the development of faster hardware, which in turn has some marginal benefits for things like AI. However, this is a very inefficient way to support AI research. If these indirect benefits were the only support a research project had, it could indeed take a century before any significant breakthrough was achieved.
So, when you want a dream to come true – a flying car, an exoskeleton, an extra 50 years of life, whatever – do something about it. Don’t just buy more dreams. Donate to research institutes. Spread awareness of promising technologies. At the very least, write a rave blog post that offers actionable ideas (as opposed to utopian fantasies of how cool the future will be) 🙂 You might think that nothing you do will make much of a difference. But even if that’s true – I don’t have the data to prove it one way or the other – doing something is certainly better than doing nothing.
I’m not going to list any particular research groups that accept donations, or attempt to proselytize (though I might in another post). That would dilute the central message: if you want sci-fi tech, consider donating some funds towards actual research. It doesn’t mean you must stop buying games, books or DVDs – let’s face it, that would be a completely unrealistic and, dare I say it, very lame proposition indeed. But if you can afford a $20 game/DVD/book, you can probably afford a $20 donation towards making some of the cool tech seen in that game/DVD/book a reality.
As for myself, just before writing this post I set up a monthly PayPal donation to a certain AI-related institute. I will also look into several other possibilities during the next few days.
While I applaud your willingness to get things moving, perhaps more thought is required.
For every utopian sci-fi concept there are one or more dystopian visions; for every benefit, a possible downside. AI sounds like a good idea, as do self-aware structures, etc., but on the whole the predictions for AI are not ideal for the human race.
It seems as though contributing to research may be accelerating the pace at which we travel toward the abyss, and once we get there we may have no choice but to jump in and hope the robots save us.
I’m not sure I want to contribute toward a bookmaker that will ask us to stake the entire race on the outcome.
You are correct that utopia is by no means guaranteed. However, I have come to think that this is exactly the reason why we, as a civilization, should “hurry up” with the development of certain technologies, especially AI.
It seems that, whether you want it or not, the entire race is at stake. It may take a few decades before we get to advanced nanotechnology, genetic engineering, AI, etc., but we’ll get there. And, if the current state of affairs in politics and technology is any indication, we won’t be ready to handle that technology. In particular, the persistent trend of putting bleeding-edge research to military use, and the climate change problem, are very worrying.
So I say let’s go ahead and develop the technologies that could “save” us before we develop the technologies that will most likely kill us. And let’s do it fast, since someone could create a genetically engineered plague tomorrow (unfortunately I can’t currently find the link that explained how this is quite doable).
That is why I focus mainly on AI, and maybe space travel. A beneficial AI could help us deal with most of the other dangerous technology, and space travel could let us limit the potential damage that any particular tech-catastrophe can cause (e.g. a genetically engineered plague wouldn’t kill off our entire species if we had self-sustaining colonies on other planets).
There are definitely dangers in AI, but I think it is even more dangerous to develop other potentially deadly technologies before creating AI.
@White Shadow – If we aren’t ready for potentially dangerous solutions, though, wouldn’t it be better not to fund alternative potentially dangerous solutions to combat the first set of potentially dangerous solutions?
I mean AI could save us, or it could kill us. I dare say that the first true use of AI will be military, as you suggest, and then where are we? Dare I mention the word ‘SkyNet’?
I would suggest the quickest route to self-sustaining colonies would be to turn off air travel. Then we can develop AI slowly, with the time needed to make sure it is safe.
Doing it slowly and safely is an attractive idea, but does it work in the real world?
The problem is that you can’t make everyone comply with such restrictions. Maybe a few countries, or some of the smartest researchers, would understand the need to move slowly, but technological progress would still trudge on. Gradually, the funds, hardware and knowledge required to create an AI would become easier and easier to obtain. In time it would be within the reach of small countries, or even of sufficiently educated individual researchers.
Exactly how much time we have before it comes to that is debatable. Kurzweil and Vinge famously predict the Singularity (which implies superhuman intelligence) will occur around 2030-2045. Some others are even more optimistic (I seem to remember Peter Voss claiming AI could be possible as soon as 2015). Preferably, we should have the “good” AI done before it becomes too easy to create AI in general.
Military AI is problematic. It’s even more dangerous because you don’t really need human-level or superhuman AI for effective killer robots. A robot with, say, wolf-level intelligence and heavy weapons could hunt down people quite effectively. Since military research tends to get more funding than research done “for the good of all mankind”, I would specifically emphasize supporting companies/institutes that are at least trying to do the right thing.
I’m not sure I understand your last comment about turning off air travel.
Gah, why can’t I keep my comments short(er)? 🙂
My point about air travel was that the difference between colonies in space and protected countries on Earth is the speed of air travel. If we are really worried about a deadly virus, then we should simply put a stop to air travel.
I can see that we could never agree on the solution because, it seems, you are an optimist and believe advanced tech can be used for good without the bad overwhelming it, whereas I believe we are all doomed, and I would rather it were later than sooner.
Heh, I think I may have started out as an eager optimist, but over the last year or so I’ve read so many pessimistic articles that I’ve transitioned to a significantly less cheerful state.
My comments may seem optimistic, but I actually think that, realistically, the most likely outcome is that civilization turns to ash during this century. That is, unless:
* people magically become impressively moral and intelligent OR
* aliens come and save everyone OR
* we create something superintelligent and benevolent that saves us.
The fact that I favor option #3 probably says something about my opinion of humanity.
See “The Quantum Sausage Machine” in the Kindle book store for a new perspective on AI.