
Why Robots Will Not Be Smarter Than Humans By 2029

Unknown Lamer posted about 7 months ago | from the do-humans-dream-of-robot-overlords dept.

AI 294

Hallie Siegel writes "Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil's recent claim that 2029 will be the year that robots surpass humans. From the article: 'It’s not just that building robots as smart as humans is a very hard problem. We have only recently started to understand how hard it is well enough to know that whole new theories ... will be needed, as well as new engineering paradigms. Even if we had solved these problems and a present day Noonian Soong had already built a robot with the potential for human equivalent intelligence – it still might not have enough time to develop adult-equivalent intelligence by 2029.'"


Kurzweil is an idiot with Super Powers (5, Funny)

CajunArson (465943) | about 7 months ago | (#46431281)

Kurzweil's predictive powers are so incredibly wrong that he could literally destroy the world by making a mundane prediction that then couldn't come true.

For example, if Kurzweil foolishly predicted that the sun would come up tomorrow, the earth would probably careen right out of its orbit.

Re:Kurzweil is an idiot with Super Powers (4, Insightful)

mythosaz (572040) | about 7 months ago | (#46431423)

There are two schools of thought on this:

There are those who think Kurzweil is a crazy dreamer and declare his ideas bunk.
There are those who think Kurzweil is a smart guy who's been right about a fair number of things, but take his predictions with a grain of salt.

There doesn't seem to be a lot in the middle.

[You can score me in the second camp, FWTW.]

Re:Kurzweil is an idiot with Super Powers (4, Insightful)

Concerned Onlooker (473481) | about 7 months ago | (#46431543)

Actually, your second point IS the middle. The logical third point would be, there are those who think Kurzweil is a genius and is spot on about the future.

Re:Kurzweil is an idiot with Super Powers (3, Informative)

mythosaz (572040) | about 7 months ago | (#46431571)

...while there are certainly some Kurzweil nuthugging fanbois out there, they don't seem to exist in any vast number.

While those who have opinions of Kurzweil probably span the spectrum, it seems that on one side there's a bunch of level-headed folk who think Kurzweil is a smart guy with some interesting thoughts about the future, and on the other side there's an angry mob throwing rotten fruit and shouting "Your ideas are bad, and you should feel bad about it!"

AI and the prevalence of bombast (4, Insightful)

fyngyrz (762201) | about 7 months ago | (#46431823)

o we don't know what "thinking" is -- at all -- not even vaguely. Or consciousness.

o so we don't know how "hard" these things are

o and we don't know if we'll need new theories

o and we don't know if we'll need new engineering paradigms

o so Alan Winfield is simply hand-waving

o all we actually know is that we've not yet figured it out, or, if someone has, they're not talking about it

o at this point, the truth is that all bets are off and any road may potentially, eventually, lead to AI.

Just as a cautionary tale, recall (or look up) Minsky and Papert's book Perceptrons (simple models of neurons and, in groups, neural networks). Regarded as authoritative at the time, it put forth the idea that perceptrons had very specific limits and were pretty much a dead end. Minsky was completely, totally wrong in his conclusion, essentially because he failed to consider what perceptrons could do when layered -- which is a lot more than he laid out. His work set NN research back quite a bit because it was taken as authoritative, when it was actually short-sighted and misleading.
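The layering point is easy to see concretely: a single perceptron can only compute linearly separable functions, so it cannot compute XOR, but stacking a second layer of the same units can. A minimal Python sketch, with weights hand-picked for illustration:

```python
# A perceptron: weighted sum of inputs followed by a hard threshold.
def perceptron(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Single perceptrons handle AND, OR, NAND -- all linearly separable.
def or_gate(x, y):   return perceptron([1, 1], -0.5, [x, y])
def and_gate(x, y):  return perceptron([1, 1], -1.5, [x, y])
def nand_gate(x, y): return perceptron([-1, -1], 1.5, [x, y])

# XOR is not linearly separable, so no single perceptron computes it,
# but a second layer does: XOR(x, y) = AND(OR(x, y), NAND(x, y)).
def xor_gate(x, y):
    return and_gate(or_gate(x, y), nand_gate(x, y))

print([xor_gate(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The limit Minsky proved applies to each unit in isolation; composing them escapes it.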

What we actually know about something is only clear once the dust settles and we -- wait for it -- actually know about it. Right now, we hardly know a thing. So when someone starts pontificating about dates and limits and what "doesn't work" or "does work", just laugh and tell 'em to come back when they've got actual results. This is highly distinct from statements like "I've got an idea I think may have potential", which are interesting and wholly appropriate at this juncture.

Kurzweil right on trends but wrong on policies (1)

Paul Fernhout (109597) | about 7 months ago | (#46431835)

Contrast with James Hughes, Director of IEET: http://www.youtube.com/watch?v... [youtube.com]

And also: http://www.youtube.com/watch?v... [youtube.com]

Kurzweil was heavily rewarded for success as a CEO in a capitalist society, so his recommendations tend to support that system and also be limited by it. Things like a "basic income" or Free software may be beyond Kurzweil's general policy thinking.

See also the disagreeing comments here:
"Transhumanist Ray Kurzweil Destroys Zeitgeist Movement 'Technological Unemployment'"
http://www.youtube.com/watch?v... [youtube.com]

Modern robots can be networked through the internet. So, at some point, you don't just have one million robots learning things independently. You effectively have one robot with a million hands, potentially learning very quickly by trial and error replicated a million times faster than with just one hand.
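A toy sketch of that "one robot with a million hands" idea; the task, settings, and numbers here are invented for illustration:

```python
# Shared experience table: any robot's discovery is visible to all.
shared_knowledge = {}

CORRECT_SETTING = 42  # the answer, unknown to the robots

def robot_trial(robot_id):
    """One robot's trial-and-error step against the shared table."""
    if "solution" in shared_knowledge:
        return shared_knowledge["solution"]    # learned from the network
    setting = robot_id % 100                   # each robot tries a different setting
    if setting == CORRECT_SETTING:
        shared_knowledge["solution"] = setting # broadcast the result network-wide
        return setting
    return None

# A "million hands" sweep: 100 networked robots cover the whole search
# space in one round; an isolated robot would need up to 100 rounds.
results = [robot_trial(i) for i in range(100)]
```

Once any one robot succeeds, every later trial short-circuits to the shared answer -- the parallelism is in the learning, not just the labor.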

Economic alternatives I've helped collect:
http://www.pdfernhout.net/beyo... [pdfernhout.net]

A parable by me on the topic:
http://www.youtube.com/watch?v... [youtube.com]
"A parable about robotics, abundance, technological change, unemployment, happiness, and a basic income."

Re:Kurzweil is an idiot with Super Powers (0)

The123king (2395060) | about 7 months ago | (#46431731)

And they're all batshit crazy.

Re:Kurzweil is an idiot with Super Powers (1)

rvw (755107) | about 7 months ago | (#46431749)

Actually, your second point IS the middle. The logical third point would be, there is one who thinks Kurzweil is a genius and is spot on about the future.

FTFY!

Re:Kurzweil is an idiot with Super Powers (1)

Jeremiah Cornelius (137) | about 7 months ago | (#46431443)

Kurzweil is Lex Luthor.


Re:Kurzweil is an idiot with Super Powers (4, Insightful)

alexborges (313924) | about 7 months ago | (#46431963)

I propose, on the other (third) hand, that reliably educating humans to be smart should be the first step. We will only manage the artificial intelligence bit when we actually get the human intelligence angle... and that will not, for sure, happen any time soon.

getting ahead of ourselves (0)

Anonymous Coward | about 7 months ago | (#46431285)

Nobody has anything but an extremely simplistic idea of how the human brain works.
Very few people are doing work on simulating the function of any kind of brain.
Humans in general are not inclined to try to understand themselves, so there will probably continue to be very few people doing this work, and progress will be very limited.

If there's no news, simply don't post anything.

"Robots" will never be as smart as a human. (1, Interesting)

gurps_npc (621217) | about 7 months ago | (#46431293)

Computers, on the other hand, can already be argued to be smarter than a human -- if you consider the entire internet as a single computer.

The difference between a robot and a computer is that the robot, at the very minimum, is self-mobile. If it can't get up and move away (no matter how awkwardly), it's not a robot.

Mobility is hard, not easy. Worse, the larger a computer is, the harder mobility becomes.

There are lots of reasons to build a computer smarter than a human being, but practically none to add in the huge expense of taking that human-level intelligence and making it mobile. We already have real humans for those jobs that require mobile intelligence, and they're cheaper and easier to care for.

More importantly, there is little to no reason for us to build a computer that, being as smart as us, would want to be us. Star Trek's Data is poor planning. Why make it want to be something it isn't? Don't we have enough body issues of our own without giving them to our computers?

Re:"Robots" will never be as smart as a human. (4, Insightful)

HornWumpus (783565) | about 7 months ago | (#46431317)

By the same argument you could say that any good library from 1950 was also smarter than a human. You'd be just as wrong.

Re:"Robots" will never be as smart as a human. (2)

mythosaz (572040) | about 7 months ago | (#46431453)

In a large number of ways, a 1950's library is smarter than any human.

If the measure of "smart" is how closely it behaves like a human - sure, we're probably a ways off.
If the measure of "smart" is what we know (in bulk), we're already there.
If the measure of "smart" is the ability to synthesize what we know in useful, relevant ways... we're making progress, but have a way to go.

Re:"Robots" will never be as smart as a human. (1)

HornWumpus (783565) | about 7 months ago | (#46431663)

Does a book or a web page really know the information it contains?

Is a concept held in human working memory equivalent to the same concept written down?

Re:"Robots" will never be as smart as a human. (1)

lgw (121541) | about 7 months ago | (#46431737)

A firm yes to the second, unless you have some very particular religious beliefs.

The first though is less obvious: the best current working definition for "knowledge" is "justified, true belief". Wikipedia holds many things that are both true and justified, but Wikipedia doesn't "believe" anything, if we're just speaking about the web site, not the editors.

"Belief" certainly requires sentience (feeling), and maybe sapience (thinking). Personally, I think human sapience isn't all that special or unique, that we're only different in degree, not in kind, from the smarter (non-human) animals, and sentience is quite common. How aware does a system have to be to have a belief? More than a web site does today, to be sure, but I think that bar is pretty low.

Re:"Robots" will never be as smart as a human. (1)

HornWumpus (783565) | about 7 months ago | (#46431829)

A human mind can manipulate a concept, apply it to new situations and concepts.

A concept written down is just static information, waiting for an intelligence to load it into working memory and do something with it.

Re:"Robots" will never be as smart as a human. (1)

lgw (121541) | about 7 months ago | (#46431937)

Human memory is just storage, no different from paper. It's the intelligence that's relevant, not the storage.

Re:"Robots" will never be as smart as a human. (1)

HornWumpus (783565) | about 7 months ago | (#46431973)

You don't know what 'working memory' means in the computer or neurological sense? Hint: how is it stored?

You should just shut-up. You're embarrassing yourself.

Re:"Robots" will never be as smart as a human. (1)

lgw (121541) | about 7 months ago | (#46432017)

Wow, where does the hate come from?

Sure, if you mean "working memory" as a loose analogy for the computer sense, I agree with you, because that requires active contemplation. If by "working memory" you mean the stuff we're currently contemplating, it's the contemplating part that matters, yes? That's how you're distinguishing "working memory" from "memory"? So the difference is "intelligence", not the storage medium?

Re:"Robots" will never be as smart as a human. (2)

HornWumpus (783565) | about 7 months ago | (#46432105)

Working memory is the space that you actively think in. It's not clear how it's stored, but it's clear that most memory is not just words. An AI will start with an in-memory way of storing connected concepts: actors, linguistic, mathematical, logical, not-understood-but-remembered cause/effect, images. Parsing information into working memory involves putting it into a form that the intelligence can use.

This is a pretty well understood concept. The details are the tricky part.

Re:"Robots" will never be as smart as a human. (1)

suutar (1860506) | about 7 months ago | (#46431855)

I think that perhaps it's not as firmly equivalent as you imply; a concept in a book cannot be used in the same ways as a concept in human memory without being copied to human memory. At which point it's not the concept in the book getting used any more.

Re:"Robots" will never be as smart as a human. (1)

lgw (121541) | about 7 months ago | (#46431977)

You can't "use" a concept stored in "human memory" directly either. Thinking about stuff copies* it out of memory and into consciousness. (Or did you mean "memory" in a very loose sense, in which case I agree with you.)

*Human memory is normally quite lossy - we reconstruct most of what we remember - heck, we construct most of what we see - so "copy" isn't the best word, really.

Re:"Robots" will never be as smart as a human. (2)

mythosaz (572040) | about 7 months ago | (#46431759)

I certainly wouldn't argue that libraries are self-aware.

It all goes back to what the definition of smart is. Libraries certainly contain more information -- at least, in a classical sense. [Maybe one good memory of a sunset contains more information - wtfk] Watson, for example, is just a library with a natural language interface at the door. By at least one measure -- Jeopardy :) -- a library (Watson) is smarter than a lot of people.

Re:"Robots" will never be as smart as a human. (1)

Jamu (852752) | about 7 months ago | (#46431991)

Does a book or a web page really know the information it contains?

Doubtful. If a book contains the equations 1 + 1 = 2 and 1 + 1 = 3, how does it know the first and not the second?

Define intelligence? (2)

TapeCutter (624760) | about 7 months ago | (#46431717)

AI suffers from continuously moving goalposts because nobody has a good definition of intelligence. A computer (Watson) has already convincingly beaten humans at general knowledge. Watson is an amazing technological feat, but the general public does not recognise Watson as intelligent in any meaningful way; they have the same reaction as my wife when they see Watson playing Jeopardy -- "It's looking up the answers on the internet, so what?". They don't even understand the problem Watson has solved. When the general public talk about AI they are thinking about robots that appear in modern movies and are basically indistinguishable from humans (e.g. the Terminator): something that is not only intelligent but has also (nearly) mastered human social intelligence.

In a way they are right: emotions drive what the logical mind thinks about, and AI cannot (yet) communicate, let alone reproduce, human emotions. I have long thought that this is partly because AI researchers generally concentrate on modelling the brain and more or less ignore the huge network of intricate sensors and actuators attached to it.

Re:"Robots" will never be as smart as a human. (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46431739)

A computer not only has software (i.e. the instructions), but also hardware to actually execute the instructions in a reliable way. For the 1950's library to be considered "a computer" you would have to include the librarian (or any regular person) who actually follows the instructions of the lookup system to retrieve the information, and even then whether this is a "reliable" method of execution is debatable.

In fact you could in theory implement any computer as nothing but instructions written on paper (e.g. copy data from this memory address to that memory address, add this memory address to that memory address and store the result in a third memory address, etc.) and have a human being carry out those instructions. If we knew enough about neurobiology, we could probably simulate the human brain on a computer that was itself simulated by a human being following instructions, but it might take the human being an entire lifetime to advance the simulation by a nanosecond.
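That paper-and-pencil computer can itself be sketched as code; the instruction names and layout here are invented for illustration, not any real machine's instruction set:

```python
# A toy paper machine: the program is a list of instructions that a
# patient human could execute by hand, one line at a time.
def run_paper_machine(program, memory):
    pc = 0  # program counter: which line the "human" is currently reading
    while pc < len(program):
        op, *args = program[pc]
        if op == "copy":    # copy data from one address to another
            src, dst = args
            memory[dst] = memory[src]
        elif op == "add":   # add two addresses, store in a third
            a, b, dst = args
            memory[dst] = memory[a] + memory[b]
        elif op == "halt":
            break
        pc += 1
    return memory

# Compute 2 + 3, then copy the result: mem[2] = 5, mem[3] = 5.
mem = run_paper_machine(
    [("add", 0, 1, 2), ("copy", 2, 3), ("halt",)],
    {0: 2, 1: 3, 2: 0, 3: 0},
)
```

Nothing in the loop cares whether a CPU or a bored human advances `pc`; only the speed differs.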

Having the computations be executed by a computer just speeds things up by several orders of magnitude. And having the instruction in electronic form instead of on paper allows the computer to more easily read the instructions. Computers also make fewer mistakes. The software can be much simpler if it is not required to withstand damage (i.e. bit flips).

Re:"Robots" will never be as smart as a human. (4, Insightful)

CanHasDIY (1672858) | about 7 months ago | (#46431391)

Computers on the other hand can already be argued to be smarter than a human - if you consider the entire internet as a single computer.

Depends on how you define "smarter."

The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions told to them by a human. That's the definition of stupid to me: unable to do a thing without having it all spelled out for you.

There's a reason D&D considers Wisdom and Intelligence to be separate attributes.

Re:"Robots" will never be as smart as a human. (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46431827)

machines cannot do anything without direct, explicit directions - told to it by a human.

Everything a computer does is a result of its programming and input. The same could be said of a human. The only difference is that the programming in a human is a result of natural selection, and the programming in a computer is a result of intelligent design (by a human, which was itself indirectly a result of natural selection).

In the same way that a computer cannot do anything that its programming does not allow, a human cannot do anything that his/her brain does not allow. It's true that human brains allow a lot of things that current computer programs don't, but you could in principle make a computer program do anything that a neuron can do. It's all just matter and energy.

That's the definition of stupid to me: unable to do a thing without having to all spelled out to you.

Computers have low-level instructions and high-level instructions. The existence of low-level instructions does not mean that there are *only* low-level instructions. Just because the human brain has neurons that work by electric potentials doesn't mean that we can be considered to *only* do what is spelled out by the electric potentials in our neurons. Or maybe it does mean that, but it should be the same for both humans and computers. There is a higher-level model that governs the behavior of humans. By the same token there is a higher-level model that governs computers as well. "Find me the shortest directions from L.A. to New York" is a higher-level instruction than "add 2 numbers". It may not currently be as advanced as what humans do, but it is growing exponentially.
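As a concrete example of such a higher-level instruction bottoming out in adds and compares, here is a sketch of shortest-path search (Dijkstra's algorithm) over a toy road graph; the mileages are rounded and illustrative, not real driving distances:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the whole 'directions' instruction is
    ultimately just additions and comparisons on distances."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, city = heapq.heappop(queue)
        if city == goal:
            return d
        if d > dist.get(city, float("inf")):
            continue  # stale queue entry
        for nxt, miles in graph.get(city, []):
            nd = d + miles                          # a low-level add
            if nd < dist.get(nxt, float("inf")):    # a low-level compare
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return None

roads = {
    "LA":      [("Denver", 1000), ("Dallas", 1400)],
    "Denver":  [("Chicago", 1000)],
    "Dallas":  [("Chicago", 900)],
    "Chicago": [("New York", 800)],
}

print(shortest_path(roads, "LA", "New York"))  # 2800
```

The caller issues one high-level instruction; the machine expands it into the primitive operations for them.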

Re:"Robots" will never be as smart as a human. (2)

Idbar (1034346) | about 7 months ago | (#46432111)

The internet holds more knowledge than a single human ever could, but machines cannot do anything without direct, explicit directions - told to it by a human.

I'm sure not doing anything would still be way better than someone only checking Facebook for a whole day. Which increases the score on the robot side.

Re:"Robots" will never be as smart as a human. (0)

PolygamousRanchKid (1290638) | about 7 months ago | (#46431393)

Take a quick look at what is posted on Facebook and YouTube . . . the bar isn't set very high . . .

Re:"Robots" will never be as smart as a human. (1)

dbIII (701233) | about 7 months ago | (#46431595)

Not really, since as far as I know we don't have an accurate definition of intelligence that we can put in mathematical terms.
We just have a more useful and more convincing Mechanical Turk instead of something that can think for itself.

Re:"Robots" will never be as smart as a human. (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46431839)

If the Mechanical Turk gets good enough (e.g. passing the Turing test), then why wouldn't it be thinking for itself?

because we all trust Ray Kurzweil's judgment (-1)

Anonymous Coward | about 7 months ago | (#46431311)

because we all trust Ray Kurzweil's judgment

Re:because we all trust Ray Kurzweil's judgment (-1)

Anonymous Coward | about 7 months ago | (#46431457)

because we all trust Ray Kurzweil's judgment

I really hate it when people repeat the subject in the body

Dr. Soong (1)

c008644 (3529249) | about 7 months ago | (#46431313)

Robots won't be smarter than humans because Dr. Soong hasn't been born yet.

futurists (1)

electricalen (623623) | about 7 months ago | (#46431319)

Only novelists and crackpot linkbait article writers think robots or computers are going to be smarter than humans anytime soon. Most people with a scientific, engineering, or programming background know they're not even close and won't be anytime soon. I doubt it will happen even in the next 50 years. 100 years is so far away anything can happen so all bets are off.

Re:futurists (1)

ChainedFei (1054192) | about 7 months ago | (#46431491)

Great of you to speak for the majority of scientific, engineering and programming people. There's obviously nothing at all to see regarding the singularity. That's why Google, IBM, Yahoo and the rest are ignoring it. Oh wait, they're not.

They don't need to be smart. (1)

Dareth (47614) | about 7 months ago | (#46431327)

All they need to know how to do is stick soft humans with a sharp stick. We are nowhere near as tough as we think we are. We couldn't stop Chucky dolls, much less Terminators.

Re:They don't need to be smart. (3, Interesting)

bunratty (545641) | about 7 months ago | (#46431553)

Oblig. xkcd: https://what-if.xkcd.com/5/ [xkcd.com]

Only computer scientists think that computers... (1)

The Real Dr John (716876) | about 7 months ago | (#46431341)

will be able to mimic the human brain in the next several decades. Neuroscientists know that the human brain is far more complex than any foreseeable microprocessor-based computer system, and that the functions of the brain are not going to be easy to implement in silicon hardware. If newer methods of making computers that are more organic are developed, then you will have a means to start mimicking the human brain, but with silicon, you may never get there.

Re:Only computer scientists think that computers.. (1)

pitchpipe (708843) | about 7 months ago | (#46431439)

Neuroscientists know that the human brain is far more complex than any foreseeable microprocessor-based computer system ...

Henry Markram [theguardian.com] would like a word with you.

Re:Only computer scientists think that computers.. (1)

SternisheFan (2529412) | about 7 months ago | (#46431477)

I'll know robots are intelligent when they start calling in sick to work.

Re:Only computer scientists think that computers.. (1)

dbIII (701233) | about 7 months ago | (#46431635)

There are structures in the brain we don't understand yet, so his model is not going to be a fully accurate model of the real thing in every circumstance.
However, the thing about models is that a simple one is sometimes a good way to simulate specific things accurately. A model for dealing with autism may do that well, but don't expect it to be able to simulate speech or a migraine.

Re:Only computer scientists think that computers.. (0)

Anonymous Coward | about 7 months ago | (#46431445)

Who said anything about simulating the human brain? Depending on your definition of "smart" you may or may not need any of the actual function of the human brain. It might turn out that computer intelligence is based on an entirely different organizational structure which could theoretically be more efficient than the human brain, thus "smarter", and achievable in 20 years...

Re:Only computer scientists think that computers.. (1)

The Real Dr John (716876) | about 7 months ago | (#46431545)

Absolutely, but so far nothing even close has happened. Arthur C. Clarke thought we would have intelligent, conversational computers by 2001, and here we are 13 years later with nothing of the sort. As a neuroscientist I just wish those involved in computer intelligence would take a look at some of the newer images of the connectivity in the human brain as shown by methods like diffusion tensor imaging. The complexity is mind-boggling. See here: http://www.humanconnectomeproj... [humanconne...roject.org]

It hasn't happened yet... (0)

ChainedFei (1054192) | about 7 months ago | (#46431579)

...so it's not going to happen soon is a fallacious argument. So is "this other person, who was also a scientist, predicted it would happen on X date and it didn't, so subsequent estimates must invariably be wrong".

Re:Only computer scientists think that computers.. (0)

Anonymous Coward | about 7 months ago | (#46431639)

Arthur C. Clarke thought we would have intelligent, conversational computers by 2001, and here we are 13 years later with nothing of the sort.

Define intelligent. Siri and CleverBot have both proven themselves to be more intelligent and certainly more charismatic than some people I've known.

Re:Only computer scientists think that computers.. (1)

mythosaz (572040) | about 7 months ago | (#46431833)

Outside of the whole "going insane because of conflicting programming" thing, HAL didn't do a lot more than Google Now can do. HAL 9000 mostly provided a text-to-speech interface for a governance and caretaker system for hibernating astronauts and the ship that housed them. It mostly just kept antennas pointed and turned on the lights when it was time to wake up.

There are two things HAL could do that Google Now doesn't. First, HAL could make decisions -- but they were pretty simple, logical, pre-programmed decision trees. Sorry, one astronaut dead, can't allow the other one in the airlock because it doesn't meet the safety case. Second, HAL could carry on rudimentary conversations -- vastly better than the ELIZAs of the world, but mostly for the sake of making him a fleshy character for movies and novels.
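That sort of pre-programmed decision tree fits in a few lines; the rule names and canned responses below are invented for illustration, not anything from the film:

```python
# A toy version of a fixed airlock safety-case decision tree:
# every branch is pre-programmed, nothing is learned or reasoned.
def airlock_decision(crew_inside, requester_suited, safety_case_met):
    if not safety_case_met:
        return "request denied: safety case not met"
    if crew_inside == 0 and not requester_suited:
        return "request denied: no suited crew available inside"
    return "opening airlock"

print(airlock_decision(crew_inside=0, requester_suited=False,
                       safety_case_met=False))
# request denied: safety case not met
```

The tree looks like judgment from outside, but it only ever walks branches its programmers wrote down in advance.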

Re:Only computer scientists think that computers.. (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46431921)

The material (silicon) doesn't matter. Only the architecture matters. The difference between a human brain and a typical laptop is not the material it's made of. It is that the laptop is designed from the top down, with most of the computation happening in a central location (or a few locations). A human brain is a massively parallel computer with computation happening in every neuron.

If we just add more silicon chips we can have more parallel computing. They don't even need to be near each other. Computers already transfer information about 6 orders of magnitude faster than neurons. We could have computers 200 miles apart that send each other information faster than two neurons on opposite sides of the same brain. And we can fit a lot of silicon chips in a 200-mile radius.
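The back-of-the-envelope numbers behind that comparison, using rough, order-of-magnitude figures:

```python
# Rough figures: fast myelinated axons conduct at ~100 m/s; light in
# optical fiber travels at ~2e8 m/s (about two-thirds of c).
NEURON_SPEED = 100.0      # m/s
FIBER_SPEED = 2.0e8       # m/s -- roughly 6 orders of magnitude faster

BRAIN_WIDTH = 0.15        # m, across a human brain
NETWORK_SPAN = 320_000.0  # m, roughly 200 miles

neuron_latency = BRAIN_WIDTH / NEURON_SPEED    # ~1.5 ms within one brain
network_latency = NETWORK_SPAN / FIBER_SPEED   # ~1.6 ms across 200 miles

# Comparable latencies: two chips 200 miles apart can exchange signals
# about as fast as two neurons on opposite sides of the same brain.
print(f"{neuron_latency * 1000:.2f} ms vs {network_latency * 1000:.2f} ms")
# 1.50 ms vs 1.60 ms
```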

Re:Only computer scientists think that computers.. (2)

The Real Dr John (716876) | about 7 months ago | (#46432103)

It has nothing to do with processing speed, or parallel processing. Brains in general, human brains included, do not process information. They generate consciousness. They do this in ways that neuroscientists still don't understand. As a neuroscientist I can say this without hesitation. Silicon chips are not alive, and will never generate consciousness as we now understand it. But they can process information much faster than the human brain.

Very Sober (4, Insightful)

pitchpipe (708843) | about 7 months ago | (#46431343)

Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil ...

I like how the naysayers are depicted as sober, rational-minded individuals while those who see things progressing more rapidly are shown as crazy lunatics. They are both making predictions about the future. Why is one claim more valid than the other? We're talking fifteen years into the future here. Do you think that the people predicting that "heavier than air flying machines are impossible" only eight years before the fact were also the sober ones?

Lord Kelvin was a sober, rational minded individual. He was also wrong.

Re:Very Sober (4, Insightful)

mbkennel (97636) | about 7 months ago | (#46431401)


I like how the naysayers are depicted as sober, rational minded individuals while those who see things progressing more rapidly are shown as crazy lunatics. They are both making predictions about the future. Why is one claim more valid than the other?

It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.

Obviously in 1895 heavier than air flying machines were possible because birds existed. And in 1895 there was a significant science & engineering community actually trying to do it which believed it was possible soon. Internal combustion engines of sufficient power/weight were rapidly improving, fluid mechanics was reasonably understood, and it just took the Wrights to re-do some of the experiments correctly and have an insight & technology about controls & stability.

So in 1895, Lord Kelvin was the Kurzweil of his day.

Re:Very Sober (2)

pitchpipe (708843) | about 7 months ago | (#46431511)

Obviously in 2014 thinking machines were possible because humans existed. And in 2014 there was a significant science & engineering community actually trying to do it which believed it was possible soon. Microprocessors of sufficient power/weight were rapidly improving, neuromorphic engineering was reasonably understood, and it just took the Markrams et al. to re-do some of the experiments correctly and have an insight & technology about controls & stability.

Hmm. I agree.

Re:Very Sober (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46431953)

Nice :)

Re:Very Sober (1, Insightful)

Just Some Guy (3352) | about 7 months ago | (#46431929)

It's because the naysayers are the ones more actively working in the field and closest to the experimental and theoretical results and are trying to actually accomplish these kinds of tasks.

More actively than Ray Kurzweil, Director of Engineering at Google in charge of machine intelligence? Very few people in the world are more active in AI-related fields than he is.

5 years away (0)

Anonymous Coward | about 7 months ago | (#46431347)

The joke (based in way too much fact) is that AI has been 5 years away for the last 30 years. At every point in AI development, there is one faction that believes human-level AI is within just a few revisions of their code.

The less published faction has been distracted doing real work towards discovering just what the scale of the challenge really is. When interviewed, they try to be civil and respectful toward the hype-faction, but you can tell they're getting sick of having to correct those enthusiastic claims over and over.

So please, do the real AI researchers a favor, and stop publishing the guys who say sci-fi level AI is 'around the corner.'

Winfield is probably right (0)

Anonymous Coward | about 7 months ago | (#46431353)

It's a fairly ill-defined claim in any case - what exactly does "smart" mean? However I suspect that (for most interpretations of "smart") Winfield is correct, and that our robots are further behind our brains than Ray thinks. While we have started building machines which do very specific tasks extremely well, the impressive feature of the human brain is its tremendous versatility. Building such "jack-of-all-trades" robots will prove a difficult feat indeed.

The Wrong Question (0)

Anonymous Coward | about 7 months ago | (#46431365)

The question isn't when computers will be smarter than humans at all things, it's when they'll be good enough to replace the most common jobs humans currently do. Once they can perform clever, simple tasks, then their impact on society can be meaningful.

Alternative View (2, Funny)

Anonymous Coward | about 7 months ago | (#46431369)

Analysis: By 2029 people will be so dumb that current robots will be smarter than humans.

Re:Alternative View (1)

ArcadeMan (2766669) | about 7 months ago | (#46431461)

Doctor: [laughs] Right, kick ass. Well, don't want to sound like a dick or nothin', but, ah... it says on your chart that you're fucked up. Ah, you talk like a fag, and your shit's all retarded. What I'd do, is just like... like... you know, like, you know what I mean, like...

Dumb and dumber (0)

oldhack (1037484) | about 7 months ago | (#46431387)

What's more idiotic, Kurzweil's senile rants or responding to them?

Re: Dumb and dumber (0)

Anonymous Coward | about 7 months ago | (#46432127)

What's so dumb about being an idiot? The words are not synonyms. Being an idiot is actually kinda smart. It just isn't nice.
I'm going by the Athenian definition for idiot, not this stupid English BS that lost all meaning of the word. I mean, how many words do we need to dilute before we're all speaking newspeak? Double plus fucked up modern English is.

"adult-equivalent intelligence" (0)

Anonymous Coward | about 7 months ago | (#46431411)

ok, so at least they will be smarter than congressional members by then, great!

Seriously (1)

derideri (214467) | about 7 months ago | (#46431433)

I'm not sure if anyone takes Ray Kurzweil seriously... except, of course, Ray Kurzweil.

What about our soul? (0)

Anonymous Coward | about 7 months ago | (#46431441)

This is Slashdot so I know I'm going to get modded down something fierce, but what about the soul of a human? What if the physical brain is only an interface for our soul/spirit and part of the functionality can never be reproduced without adding a soul/spirit component to the machine?

I was wondering if there is some way to test for a soul? Can a person be dead and still have a functioning brain?

Soul of a new... blah blah blah (1)

fyngyrz (762201) | about 7 months ago | (#46431955)

c'mon. Every indication says your brain is you. Chemical reactions, electrical impulses, stored states, massive, active and dynamic connectivity. That's what "you" arise from. When your brain stops, you stop. Your head contains a most effective EM shield consisting of wet, conductive layers that are sufficient to prevent huge RF and EM fields from getting into your brain tissue. The tiny, minuscule events going on inside your head can't get out under any circumstance for the same reason, unless you (a) punch a hole in your skull or (b) scan it with instruments so sensitive you can hardly comprehend the idea, or (c), you effectuate your mind's activity in some manner by moving your body via the nerves that connect your muscles and other parts to the brain through the base of your skull. Your brain is not an interface. Your brain is the computer. Everything we know about physics points this way; nothing points the way you suggest. It's simply not the way to bet. What you're talking about has basis only in mythology at this point in time.

Re:Soul of a new... blah blah blah (1)

CRCulver (715279) | about 7 months ago | (#46432169)

Everything we know about physics points this way; nothing points the way you suggest. It's simply not the way to bet. What you're talking about has basis only in mythology at this point in time.

Dualism has in fact made something of a comeback in the last few years. Although Richard Swinburne might have started this wave with publications arising from his interest in philosophy of religion, many of his students and other thinkers who are continuing this line of inquiry are not theists. "Basis only in mythology"? Someone here doesn't keep up with philosophy.

I've only got ONE thing to say.... (1)

bobbied (2522392) | about 7 months ago | (#46431449)

Number five, IS Alive.

I've seen it myself. Spontaneous emotional response.

They don't need to be smart (1)

ArcadeMan (2766669) | about 7 months ago | (#46431473)

They only need to be cute [smartdoll.jp].

Incomplete Understanding (0)

Anonymous Coward | about 7 months ago | (#46431513)

There is no issue with building strong AI that is sharper than a human, the problem is shrinking the tech. In many cases, this approach is not viable and we use low latency radio to control robots.

dumber (1)

cyberspittle (519754) | about 7 months ago | (#46431531)

It isn't that robots will be smarter, but rather humans will be dumber.

That is hard to predict (1)

CmdrEdem (2229572) | about 7 months ago | (#46431533)

If smart is the capability of intellectually adapting to accomplish tasks, then computers are in trouble for now. If academia overall stops chasing its own tail, worried about publishing papers in great volume of questionable relevance, and resumes the publishing of meaningful developments, then maybe we can get a good breakthrough in ten years. And that is a big maybe.

I am not particularly thrilled to create an AI good enough to be like us. /. is nice enough but humans overall are dicks. Anything we create will follow this tendency. We are not good enough to avoid that.

No kidding (0)

Anonymous Coward | about 7 months ago | (#46431535)

With people like Ken Ham draining intelligence from the system we're doomed.

Robot, please (1)

wcrowe (94389) | about 7 months ago | (#46431583)

Anyone who thinks that robots will be smarter than humans by 2029 has not really thought things through. I can step out on my back patio, take one look at the pergola, and tell you that it's going to need to be replaced in the next couple of years. I can look at the grass and tell whether I need to cut it this weekend or let it go for another week. I can sniff the air and tell you that the guy in the next cubicle has farted. Of course a robot might come to the same conclusions, but it would have to take samples from the pergola for testing; measure the grass over a period of several days, test the humidity of the soil, and check the weather forecast; and it could tell that a mildly noxious gas has entered the air from the cubicle next door; but would it know, absolutely KNOW, that the guy in the next cubicle farted?

And will they ever build a robot that can truly understand a woman? Hah!

Re:Robot, please (2)

mythosaz (572040) | about 7 months ago | (#46431853)

To be fair, your ability to tell if the grass needs cut is also based on sampling grass growing patterns over your entire life...

Re:Robot, please (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46431969)

So humans don't measure things, and that's what makes them smart?

Re:Robot, please (1)

HornWumpus (783565) | about 7 months ago | (#46432035)

Wait just a god damn second. Are you claiming you understand a woman?

Much less bold than claiming to understand women, but I'm still calling BS on you.

Most people go their whole lives and don't even begin to understand themselves, much less another adult.

I don't know. (5, Funny)

cje (33931) | about 7 months ago | (#46431593)

If the contents of my Facebook feed can be taken into consideration, one could reasonably make the argument that robots are smarter than humans right now.

the "data" milestone (1)

globaljustin (574257) | about 7 months ago | (#46431637)

Commander Data is a fictional character. The character occurs in a ****context**** where humanity has made technological jumps that enable ***storytelling****

I absolutely hate that really, really intelligent people are reduced to this horrible of an analogy to comprehend what's happening in AI....and I *love* Star Trek! I'm a trekkie!

Even if we had solved these problems and a present day Noonian Soong had already built a robot with the potential for human equivalent intelligence – it still might not have enough time to develop adult-equivalent intelligence by 2029'

So all engineering & physical science, biology, neuroscience, physics...all of this is 'not a problem' anymore in this random context....**still** this Data is nothing more than an imitation of a human. Different capabilities, sure, but still a programmed machine.

The only thing that can make a machine have "civil rights" like Data was granted in his court hearing would be...for a government to declare that beings like Data have **human rights**...it's a question of politics not programming.

So we need to recontextualize all of "artificial intelligence" work to be about **accomplishing a task** not some abstract "Commander Data Milestone"

And we all need to just ignore Kurzweil forever.

Re:the "data" milestone (1)

TsuruchiBrian (2731979) | about 7 months ago | (#46432003)

A government can grant civil rights to a rock. That doesn't make it intelligent. If you can have a conversation with a rock, then it is intelligent no matter what the government says. It seemed like Data was capable of a conversation. Maybe he was on the cusp of being able to pass the Turing test.

Possible (0)

Anonymous Coward | about 7 months ago | (#46431667)

All it will take is the magic algorithm that starts the learning process in the hardware, it will then happen overnight... Machines will evolve at many orders of magnitude vs humans... A dumb machine could surpass human intellect at a scary pace. The only limit will be the number of reconfigurable nodes in the physical hardware. I suspect that number, the number of "nodes", might not be all that far off! We are already manufacturing things at near the atomic scale today. So 2029 could well be on target. What we think of as 'human intellect' will not be the type of intellect created. It will be a new type that is defined by the machine's own choice. We will have no say in the matter! The bigger question is should we pursue this kind of tech? Do we really want to be controlled by machines? If a machine has an IQ of say, 10 billion, could we turn it off? Insert all the popular movie and book references here!

Don't worry (1)

slapout (93640) | about 7 months ago | (#46431695)

Don't worry. The Year 2038 problem [wikipedia.org] will take them out a decade later.
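The arithmetic behind the Year 2038 problem is simple to check: a signed 32-bit time_t can count at most 2**31 - 1 seconds past the Unix epoch before it wraps negative. A quick sketch in Python:

```python
from datetime import datetime, timedelta, timezone

# A signed 32-bit time_t tops out at 2**31 - 1 seconds
# past the Unix epoch (1970-01-01 00:00:00 UTC).
MAX_INT32 = 2**31 - 1

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=MAX_INT32)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second later the counter wraps to -2**31, which an
# affected system would interpret as a date in December 1901.
wrapped = epoch + timedelta(seconds=-2**31)
print(wrapped)  # 1901-12-13 20:45:52+00:00
```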

Re:Don't worry (1)

Joe_Dragon (2206452) | about 7 months ago | (#46431783)

No, the 2030 welfare costs will kill us, as you'll have a massive number of people out of work.

embodiment (0)

Anonymous Coward | about 7 months ago | (#46431751)

As the article says, human intelligence is embodied.

Check out Maurice Merleau-Ponty's classic book, The Phenomenology of Perception, or Hubert Dreyfus's work on AI.

There still will be a Singularity... (0)

Anonymous Coward | about 7 months ago | (#46431771)

...but it will not be robots overtaking human intelligence, but human intelligence de-evolving to computer-like rote learning and erosion of human thinking skills, if we're lucky.

How many people already think "Geez, all this knowledge stuff is on the Interwebz, no need to learn or think..." ?

Look at autopilots they still don't do all and the (1)

Joe_Dragon (2206452) | about 7 months ago | (#46431775)

Look at autopilots: they still don't do it all, and they can't handle stuff like sensors going bad too well.

Re:Look at autopilots they still don't do all and (1)

msobkow (48369) | about 7 months ago | (#46432101)

And a pilot who loses an eye does so well without its sensor, right?

Dark Matter... (2)

pigiron (104729) | about 7 months ago | (#46431799)

will be what causes the singularity!

Robot, if they are networked there is just one AI (0)

Anonymous Coward | about 7 months ago | (#46431825)

This is where hardware is very different from wetware, we meaty creatures can't directly combine our intelligence but hardware will be far less limited in that regard.

Not sure how anyone can call themselves knowledgeable in the area and overlook this profound difference.

man's race to stupidity (1)

Some_Llama (763766) | about 7 months ago | (#46431837)

trying to make artificial intelligence is the worst idea man has come up with, worse than atomic bombs I dare say... the one limiting factor of humans is that we die... all the information we have gathered must be passed down to the next generation by teaching them all that has been learned previously... artificial intelligence will not have this limitation.

For example, imagine a battlefield between robots and human opponents:

  a human is killed.. immediately all of the training and knowledge up to that point is gone.

any soldier will tell you the difference in survivability between a soldier fresh on the battlefield compared to a seasoned veteran, now compare this to:

a robot soldier is killed, (we must assume that any AI will be networked in some fashion, just like we do now with humans) the previous knowledge, "personality" and situational awareness is transferred to a different "body" and immediately deployed.

it's obvious that humans will only lose information/skills/any technical advantage slowly but steadily with each death, while the AI as a whole will only get better, more efficient and stronger.

the terminator movies are a nice bedtime story but the reality would be much more terminal.

the only positive outcome would be an AI that found human co-existence possible, a la Matrix-type theology, but any AI would surely see the history of aggression, conquering and oppression of our own selves throughout our past as proof enough that this is inevitably impossible or statistically improbable enough to risk...

eventually, if not wiped out or severely hampered by some natural occurrence (meteor/ice age/plague), we will bring it down on ourselves with the false hubris we enjoy by being just smart enough to discover concepts and inventions we can never possibly fully understand the implications of or control (e.g. grey goo or super strains of resistant infectious diseases.)

15 years is kind of soon (5, Interesting)

Animats (122034) | about 7 months ago | (#46431871)

We're probably more than 15 years from strong AI. Having been in the field, I've been hearing "strong AI Real Soon Now" for 30 years. Robotic common sense reasoning still sucks, unstructured manipulation still sucks, and even Boston Dynamics' robots are klutzier than they should be for what's been spent on them.

On the other hand, robots and computers being able to do 50% of the remaining jobs in 15 years looks within reach. Being able to do it cost-effectively may be a problem, but useful robots are coming down to the price range of cars, at which point they easily compete with humans on price.

Once we start to have a lot of semi-dumb semi-autonomous robots in wide use, we may see "common sense" fractured into a lot of small, solvable problems. I used to say in the 1990s that a big part of life is simply moving around without falling down and not bumping into stuff, so solve that first. Robots have almost achieved that. Next, we need to solve basic unstructured manipulation. Special cases like towel-folding are still PhD-level problems. Most of the manipulation tasks in the DARPA Robotics Challenge were done by teleoperation.

No, 2029 will be the year of... (0)

Anonymous Coward | about 7 months ago | (#46431933)

... Linux on the Desktop.

Computers can't beat us at chess, oh, wait... (1)

X10 (186866) | about 7 months ago | (#46432037)

Everybody knew computers could never beat humans at chess. Now they do. In much the same way, computers will beat us at every single intellectual task, at some point in time. Technology revolutions go faster every time one occurs. From 10k years for the agricultural revolution to two years for the internet and mobile phones. I see no reason why computers can't outsmart us in 2025.

In 2029 ... (1)

PPH (736903) | about 7 months ago | (#46432075)

... that AI you are building today will be a teenager. It will think it knows everything. But just try telling it something ....

You'll be lucky just to get it to move out of your basement by 2049.

Who needs adult level intelligence in a robot? (1)

voss (52565) | about 7 months ago | (#46432077)

If you invent a robot as smart as a 9-year-old with basic concrete reasoning power that can do simple household chores and yardwork, you will become a billionaire.

That assumes computers learn as slowly as humans (4, Interesting)

msobkow (48369) | about 7 months ago | (#46432091)

That presumption seems to be predicated on the theory that a computer intelligence won't "grow" or "learn" any faster than a human. Once the essential algorithms are developed and the AI is turned loose to teach itself from internet resources, I expect its actual growth rate will be near exponential until it's absorbed everything it can from our current body of knowledge and has to start theorizing and inferring new facts from what it's learned.

Not that I expect such a level of AI anytime in the near future. But when it does happen, I'm pretty sure it's going to grow at a rate that goes far beyond anything a mere human could do. For one thing, such a system would be highly parallel and likely to "read" multiple streams of web data at the same time, where a human can only consume one thread of information at a time (and not all that well, to boot.) Where we might bookmark a link to read later, an AI would be able to spin another thread to read that link immediately, provided it has the compute capacity available.

The key, I think, is going to be in the development of the parallel processing languages that will evolve to serve our need to program systems that have ever more cores available. Our current single-threaded paradigms and manual threading approaches are far too limiting for the systems of the future.
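The multi-stream reading imagined above maps naturally onto today's thread-pool idioms. A minimal, hypothetical sketch (the `read_stream` function and source names are stand-ins, not a real crawler):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for fetching and digesting one web page.
def read_stream(name):
    return f"digest of {name}"

sources = ["page-a", "page-b", "page-c", "page-d"]

# Instead of bookmarking a link to read later, spin up a worker
# per source and consume all of them concurrently; map() returns
# results in the order the sources were submitted.
with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(read_stream, sources))

print(digests)
```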

Yeah but wait till he becomes a teenager... (1)

Art3x (973401) | about 7 months ago | (#46432143)

From the summary:

it still might not have enough time to develop adult-equivalent intelligence by 2029

2029: Skynet is born. Nothing bad happens
2042: Skynet turns 13...
