The AI Singularity and Transhumanism
Yesterday I came across a newly-published 4-minute sci-fi comedy short about a self-aware computer.
It got me curious about subjects like transhumanism (technologically aided evolution beyond our physical and mental limitations) and the singularity (the hypothetical moment when AI will surpass human intelligence). There have been some intriguing developments recently, perhaps most notably Google's $500M acquisition of the British AI start-up DeepMind in 2014, which aims to solve general intelligence (and to use that knowledge to solve everything else). And just 3 days ago PBS aired a short video of Ray Kurzweil predicting that we will likely conquer death in the next 20 years.

I'll be surveying these subjects in the days ahead, beginning with Kurtzwiel's book, The Singularity Is Near. I'd welcome any suggestions for related literature or film - speculative fiction and non-fiction alike. I'd like to explore questions such as:

- What visions of the world, both utopian and dystopian, have been proposed for life after the singularity?
- What existing technological achievements are already blurring the line between man and machine?
- What are the philosophical, moral, and ethical implications of machine consciousness and eternal (but artificial) life?

I'm going to start with this engaging 2-part article from waitbutwhy.com.
Having read Kurtzweil myself, I suggest you not embarrass yourself by giving him undue credence.
Quote:
So far I've read about the three calibers of AI - Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI) - and how our present-day world runs on a network of ANIs, such as airline flight systems and Amazon/Pandora recommendations. Urban states upfront that the transition from ANI to AGI is a difficult one, but that it is inevitable on our current trajectory of advancement. The first key, he states, is increasing computational power. He briefly discusses China's Tianhe-2 supercomputer, its power requirements, and its processing capability compared to the human brain, which he cites as being right in line with Moore's Law and Kurzweil's cps/$1,000 metric (insert pretty but overgeneralized infographic): http://i.imgur.com/blAG3Dql.jpg

The real excitement seems to be the potential of software built with recursive self-improvement, resulting in what Irving John Good called an Intelligence Explosion - the ultimate example of the Law of Accelerating Returns. It is proposed that the transition from AGI to ASI will take a fraction of the time it took to reach the previous milestone.

All of this seems plausible. Perhaps it is the further propositions about AI consciousness that push Kurtzweil's credibility off the deep end? Are the basic principles of AI - the three calibers and their inevitability - credible concepts? If not, please enlighten me.
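The difference between steady Moore's-Law-style doubling and recursive self-improvement can be sketched numerically. This is purely my own toy model with made-up constants, not anything from Urban's article or Kurzweil's book - it just shows why a feedback loop where capability drives its own growth rate looks flat for a long time and then takes off:

```python
# Toy illustration (my own sketch, all constants arbitrary) contrasting
# two growth curves: steady exponential progress (fixed doubling period)
# versus recursive self-improvement, where the per-step gain itself
# scales with current capability.

def moores_law(capability, years, doubling_period=2.0):
    """Capability after `years`, doubling every `doubling_period` years."""
    return capability * 2 ** (years / doubling_period)

def self_improving(capability, steps, k=0.05):
    """Each step the gain scales with the square of current capability
    (roughly dC/dt ~ k*C^2) - a crude stand-in for I. J. Good's
    'intelligence explosion': slow at first, then runaway growth."""
    history = [capability]
    for _ in range(steps):
        capability += k * capability ** 2   # smarter systems improve faster
        history.append(capability)
    return history

print(moores_law(1.0, 10))              # 5 doublings of a baseline of 1.0
curve = self_improving(1.0, 25)
print(curve[5], curve[15], curve[25])   # gaps between samples widen rapidly
```

The takeaway is that the second curve's growth *ratio* keeps increasing, whereas plain doubling multiplies by the same factor every period - which is the shape of Urban's "AGI to ASI in a fraction of the time" claim.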
Kurtzweil is fun and makes some interesting points but when he starts saying he's going to take supplements so he can live long enough to have his consciousness downloaded into a computer... you know...
Did you follow the story about DeepMind's AlphaGo beating the Go champion? The most interesting part of the story is that the program learned how to improve its game by playing itself over and over again. Essentially, it's self-taught, which makes it very different from Deep Blue.
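That self-play idea can be shown in miniature. The sketch below is my own toy and nothing like the real AlphaGo (which combines deep neural networks with Monte Carlo tree search); it just demonstrates the principle that an agent can improve with no human games in its training data, by updating its value estimates from games against itself - here, tabular value learning on tic-tac-toe:

```python
# Minimal self-play sketch (my own toy, not AlphaGo's actual method):
# a tabular agent learns tic-tac-toe values purely from games against itself.
import random

def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

values = {}              # state -> estimated value for the player who just moved
ALPHA, EPSILON = 0.2, 0.1

def choose(board, player):
    """Epsilon-greedy: usually pick the move leading to the state with the
    highest learned value for us, sometimes explore at random."""
    moves = [i for i, c in enumerate(board) if c == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: values.get(board[:m] + player + board[m+1:], 0.0))

def self_play_game():
    board, player, history = ' ' * 9, 'X', []
    while True:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append(board)
        w = winner(board)
        if w or ' ' not in board:
            # propagate the outcome back, flipping sign each ply since
            # each stored state is valued for the player who just moved
            r = 1.0 if w else 0.0
            for state in reversed(history):
                values[state] = values.get(state, 0.0) + ALPHA * (r - values.get(state, 0.0))
                r = -r
            return
        player = 'O' if player == 'X' else 'X'

random.seed(0)
for _ in range(10000):
    self_play_game()
print(len(values), "positions evaluated purely from self-play")
```

The point of the contrast with Deep Blue: nothing in `values` comes from human expertise or hand-tuned evaluation - every number was produced by the program playing both sides of its own games.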
Quote:
The Twilight Zone, episode 044, "The Lateness of the Hour" [original air date: December 2, 1960]
Futureworld (1976) - original trailer
Bicentennial Man (1999) - trailer
A.I. (2001) - trailer (extended version)
Rupert Sheldrake, TEDx lecture [REMOVED BY TED]
Quote:
The “Brain in a Vat” Argument | Internet Encyclopedia of Philosophy
Quote:
So, to answer your question: yes.
I guess I should do Kurzweil the dignity of spelling his name correctly. Sorry Ray.
© 2003-2024 Advameg, Inc.