Kevin Kelly has an interesting comment on the technological singularity, vis-à-vis the assumption that, given a sufficiently powerful digital computer, you can accurately model the entire universe without needing correction from empirical “real world” evidence:
The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems.
As an essay called Why Work Toward the Singularity lets slip: “Even humans could probably solve those difficulties given hundreds of years to think about it.”
In this approach one only has to think about problems smartly enough to solve them. I call that “thinkism.”
No amount of thinkism will discover how the cell ages, or how telomeres fall off. No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world and then contemplating it.
Kelly points out that AIs should be “embodied in the world.” Other topics to consider are the impacts of non-human intelligences based on genetic algorithms, improved data-mining methods, and evolution-based design (video link, via BoingBoing). These kinds of non-human intelligences will have, or have already had, profound effects.
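The evolution-based design mentioned above can be illustrated with a toy genetic algorithm. This is a minimal sketch, not any specific system referenced in the post; it uses the classic OneMax toy problem (maximize the number of 1 bits in a string) as a stand-in for a real design objective:

```python
import random

random.seed(42)

TARGET_LEN = 20      # length of each candidate bit string
POP_SIZE = 30        # candidates per generation
GENERATIONS = 60
MUTATION_RATE = 0.02 # per-bit flip probability

def fitness(bits):
    """OneMax: count of 1s, a toy stand-in for a real design objective."""
    return sum(bits)

def mutate(bits):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

# Start from random designs, then iterate: select, recombine, mutate.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

The point of the example is that good designs emerge from variation and selection rather than from anyone thinking the answer through, which is exactly the kind of non-"thinkism" intelligence Kelly has in mind.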
5 thoughts on “The perils of “thinkism””
Oh, light dawns (for me, that is). Faith in the Singularity and in the power of computers to eventually solve everything by processing enough knowledge is just like the medieval idea of Authority, in which there was no need to examine the real world because everything we needed to know was in existing literature.
Yes, the realization that understanding the universe requires the performance of experiments and making accurate observations, and that one cannot rely on philosophical reflection alone, is essential to the advancement of science. Science is strongest where its basis is clear, reproducible, consistent, and unambiguous experimental data. It is weakest when it is based on appeals to authority, popularity contests, superstition, religion, and either pollyannaish or apocalyptic beliefs. Although I’m sure some of those reading this will promptly leap down my throat for the second of the two examples that follow, I assert here (just to annoy you, ok?) that neither creationism (or equivalently, “intelligent design”) nor Al Gore’s global warming alarmism actually meets the standards of “science.” Truth is not determined by popular vote. In physics, we especially like our theories to be elegant. We like to think that makes them more likely to be right. (Dirac was especially famous for linking elegance to correctness.) But we also know that, in the end, the final arbiter of truth is the set of all experimental data.
My hope for the singularity is more cyborg in nature. The problem with people is that the moment you put a large bunch of them together and treat them as a group, they become easily malleable and predictable. This can lead to bad things, many bad things, worse even than YouTube comments.
If personal AI can be used to nudge people when this is happening, encouraging attention to be focused on exactly the right place rather than diverted offstage, and to make everyone smarter and more aware rather than being a large brain sitting in the center of the planet solving problems, then maybe this group problem can be avoided or lessened. We are not designed to intuitively understand the dynamics of such unnaturally large groups; instead we treat them as we would smaller social structures. Having technology to enable us to operate better in these situations, situations it has also put us in, makes sense and is seemingly where we are heading.
It’s not about superintelligence, which really won’t help much; it’s more about making the other end of the spectrum just that little bit smarter.
Well, that’s the best hope I can manage anyway 🙂
I shall end with my explanation of why I “waste” my life making toys and games. It’s in the form of a simple question and is singularity related, whatever that turns out to be.
Given that the two most likely areas for AI to arise from are entertainment and the military, would you prefer our future mechanical overlords to be primarily designed to:
A) Kill you, or
B) Amuse you?
I agree that it is impossible to figure out everything based solely on what we already know, but I do think it’s a little short-sighted to claim that even given infinite computing power we wouldn’t be able to derive answers to many questions we haven’t yet answered. The human mind is limited in its ability to find and interpret patterns in information, a task for which computers are ideal. Given enough power and resources I am willing to bet that a computer system would be able to derive the answers to precisely those questions Kelly is offering as examples of the impossible.
The main thing that concerns me regarding the notion of the Singularity is that it will take a massive amount of money to make significant progress toward that goal, and whatever result comes from it will likely be in the hands of those providing that money. Absolute power…
Also, “thinkism” sounds kinda dumb.