By 2045 'The Top Species Will No Longer Be Humans,' and That Could Be a Problem
schwit1 (797399) writes: Louis Del Monte estimates that machine intelligence will exceed the world's combined human intelligence by 2045. ... "By the end of this century most of the human race will have become cyborgs. The allure will be immortality. Machines will make breakthroughs in medical technology, most of the human race will have more leisure time, and we'll think we've never had it better. The concern I'm raising is that the machines will view us as an unpredictable and dangerous species." Machines will become self-conscious and have the capabilities to protect themselves. They "might view us the same way we view harmful insects." Humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses." Hardly an appealing roommate.
Now that's incentive (Score:5, Interesting)
To stay alive for the next 30 years.
Re:"machines will view us as an unpredictable" (Score:5, Interesting)
I beg to disagree. The typical human works toward stability in his/her life, wields (relatively puny) weapons only to protect him/herself (if at all), and is subject to attacks from computer viruses. Will intelligent computers make the mistake of defining the human species by the small percentage of psychopathic humans who believe they are demigods? Not if they are intelligent. Btw, no one will miss the subset of the species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses" when our new overlords wipe them out. (You know who you are!)
Intelligence (Score:4, Interesting)
I do not think that word means what he thinks it means.
As stated elsewhere, I see no indication of intelligence in computers, and we're only thirty years from his mark of their being intelligent enough to look down on us. Been hearing this hysteria since the '70s at least.
Re:AI is always (Score:4, Interesting)
The machine that learns can be considered an AI, but the ones derived from it don't learn anything new after they're programmed and so shouldn't be considered as part of the total machine intelligence.
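In code, the parent's distinction reads roughly like this. A minimal sketch (the toy perceptron and all names are made up for illustration): a learner that keeps updating its weights from feedback, versus a "derived" machine that is just a frozen snapshot of those weights.

```python
# Toy illustration of the parent's distinction (all names hypothetical):
# a machine that keeps learning vs. a frozen copy derived from it.

class Learner:
    def __init__(self):
        self.weights = {}                       # word -> score

    def predict(self, words):
        return sum(self.weights.get(w, 0.0) for w in words) > 0

    def update(self, words, label):
        # Perceptron-style update: the machine changes with new data.
        error = (1 if label else -1) - (1 if self.predict(words) else -1)
        for w in words:
            self.weights[w] = self.weights.get(w, 0.0) + 0.5 * error

class FrozenModel:
    """A 'derived' machine: same weights, but no update() at all."""
    def __init__(self, learner):
        self.weights = dict(learner.weights)    # snapshot, never changes

    def predict(self, words):
        return sum(self.weights.get(w, 0.0) for w in words) > 0

learner = Learner()
learner.update(["spam", "offer"], True)         # keeps learning
frozen = FrozenModel(learner)                   # learns nothing new, ever
```

By the parent's argument, only the first object counts toward "total machine intelligence"; the second is a fixed function, however useful.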
Stephen Hawking fears the same thing... (Score:5, Interesting)
Just not necessarily within 35 years:
""Success in creating AI would be the biggest event in human history." Hawking writes. "Unfortunately, it might also be the last."
http://www.theregister.co.uk/2... [theregister.co.uk]
Re:AI is always "right around the corner". (Score:2, Interesting)
If you dig into the subject a bit, you will find a staggering lack of consensus on what intelligence is and is not.
Commander Data often tried to move outside of his "original programming". That is something AI researchers struggle to accomplish. There are some interesting experiments with genetic algorithms, but we don't always understand how the results work, or how to make them stable and repeatable.
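To make the genetic-algorithm point concrete, here is one of those experiments in miniature. This is a toy sketch with a made-up string-matching fitness function, not anything from actual research: random mutation plus selection pressure, and the result was never written by anyone.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # hypothetical fitness target
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Score = number of positions that match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=200, generations=1000):
    population = ["".join(random.choice(CHARS) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return gen, population[0]
        # Keep the fittest half; refill with mutated copies of survivors.
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return generations, population[0]

gen, best = evolve()
print(f"generation {gen}: {best}")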
For me the scary thing about AI is not human-level intelligence, or even superhuman intelligence. It is that with AI we may create an intelligence so alien to us that we have trouble relating to it. I wonder if we will even recognize it as intelligent initially.
Re:Well (Score:2, Interesting)
I've been actively working in the field for the past few years and I don't think he's incredibly off the mark. Google, for instance, has some pretty advanced tech in production and lots more in development. The 'new AI' (statistical machine learning and large-scale, distributed data mining) is getting pretty advanced and scary.
Re:AI is always (Score:5, Interesting)
Nope, not following instructions. I think all of those were based on machine learning.
I guess Google's car is following instructions too, like "drive me to New York", but most would still count that as AI.
Just because 'most' would count something as AI doesn't make it so, nor does it make it relevant. The fears raised on articles like this are based on the development of what we would term "sentient AI".
And frankly speaking calling what is out there right now "machine learning" is a joke. It's akin to scuffing your wool socks on the carpet to produce a static shock and then lumping that into the same category as advanced electrical engineering.
Cold fusion in your pocket, warp drives, antigravity vehicles (aka 'flying cars'), planetary-scale terraforming, and genetic/medical engineering that will turn us into undying superbeings are all "right around the corner". These types of alarmist articles are pure pigshit. These discussions need to be had, but not as a matter of alarmist 'news' articles; that is the role science fiction fulfills... and it does a far better job of it.
Re:Well (Score:0, Interesting)
I would say things are different, but speed doesn't mean responsiveness.
20 years ago (1994), I was using an SGI Indigo or SGI Indy with a 3D card. Not much by today's standards, but it functioned flawlessly with a decent UI, especially with FrameMaker, and a Trinitron display had a much faster response rate and better colors than almost any LCD made. I was running 1280 x 1024, but most laptop displays are less than that.
Code quality? 20 years ago, the absolute worst coders were stoned CS students who would be on probation one semester and gone the next. Now, with el cheapo H-1Bs working for peanuts, even the worst code I saw from those students is leaps and bounds over the code quality I see getting churned out by the lowest-bid workers and a lot of the offshore dev houses. 20 years ago, we actually had release-quality code, not an early beta called "version 1.0". There were no "preview releases" back then. Hell, there were no public betas. Betas were kept confidential, because the public's first contact with your code would be release-quality code, and shitty code quality would be the end of your company.
20 years ago, support didn't suck. Got a copy of WordPerfect? You could call their 800 number and get help. Free. For the lifetime of the version. Now, if you want tech support, you pay up the ass for it.
20 years ago, the worst DRM was hardware dongles and copy-protected diskettes. Dongles got emulated, diskettes got patched. Games didn't suck, so one $49.99 game would mean $250 worth of content, even before factoring in inflation, because nobody gutted the core release to sell the pieces back via DLC.
Actually, the worst DRM I saw back then was a game that had a dongle and a serial number mechanism... if the game thought it was tampered with, it used a series of capacitors and a cascade array to fry the parallel port it was plugged into. However, said game company went under fairly quickly.
Of course, 20 years ago, there were no trolls, shills, people wanting to wage their country's prejudices on others, nor spammers to the degree present now. NSF owned the Internet and banned all commercial traffic. Spam, and you got disconnected, no ifs, ands, or buts. A troll would get their USENET access, and maybe their entire net access, yanked. This never happens these days unless it is a copyright violation. If a site didn't police its users, its EFNet IRC access was K-lined swiftly. Same with USENET: a misbehaving site would have its feed pulled.
Compared to then, what do we have now that's better?
1: More useless content. Lots more cat pictures.
2: Better search engines. Archie and having to be specific in searches has yielded to "how do I do this" type of queries.
3: "Free" music. I'm sure people are happy that all their favorite bands are downloaded, but there are no new bands to replace them. There will not be a Freddie Mercury or groups like Pink Floyd, Nine Inch Nails, or other items. What you listen to in the mainstream is now dictated word for word, and note by note by corporate drones. The same formula for songs is repeated over and over again. Thanks to piracy, a vibrant, expressive form of art is completely dead, with only predigested stuff available now, or amateur hacks with their ironic beards and acoustic guitars crooning about their cat because everyone else is doing exactly that.
4: Backups. All machines of any importance had an 8mm or 4mm tape drive. Now, nobody gives a shit about backups, or they store them with an offsite provider of unknown security and reliability. An IT person back then would laugh in your face if you told them you were storing backups on drive arrays and not moving critical data to archival media.
5: Host-based security. The network was an issue, but security was based around hosts. With the advent of Windows, now one router or one firewall controls access... and when it gets hacked, the entire company is vulnerable. 20 years ago, an intruder would have to hack individual machines one by one.
Re:AI is always "right around the corner". (Score:5, Interesting)
Researchers once thought chess made a good proxy for intelligence. Not every smart person is good at chess, but it seemed every good chess player was also smart. They worked for decades to make chess programs that could beat good chess players. When that started happening, it was obvious that the programs had no general intelligence at all. They were good for chess, but had to be reprogrammed even for very similar games like checkers. When the ultimate triumph of beating the world chess champ happened, it was more of the same. No real intelligence, just faster hardware and refinements to the search algorithm.
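For anyone who hasn't looked under the hood: the "search algorithm" being refined all those years is essentially minimax with alpha-beta pruning. A minimal sketch follows; the state interface (legal_moves, apply, evaluate, is_terminal) is hypothetical, and a real engine pairs this loop with a hand-tuned evaluation function and decades of move-ordering tricks.

```python
# Negamax with alpha-beta pruning, the skeleton of classical chess engines.
# The `state` object here is a hypothetical game interface; evaluate()
# returns a static score from the perspective of the side to move.

def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    best = float("-inf")
    for move in state.legal_moves():
        # Negamax trick: negate the child's score for the opposing side.
        score = -alphabeta(state.apply(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break          # prune: the opponent will never allow this line
    return best
```

Nothing in that loop generalizes: point it at checkers and the evaluation function, move generator, and tuning all have to be rebuilt, which is the reprogramming problem described above.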
The conclusion is that chess is not a good measure of intelligence after all. We don't have a good grasp of what intelligence really is, let alone how exactly to measure it. IQ tests have all kinds of problems, not least that the typical IQ test is very narrow. Maybe wealth or number of children or friends could correlate with intelligence, but there are lots of problems with that too. Is it smart to have wealth beyond one's present and future needs?
Re:AI is always "right around the corner". (Score:4, Interesting)
It's also rather hard to design a test which doesn't require "general knowledge" or which isn't "ethnocentric" in some way.
Re:Now that's incentive (Score:3, Interesting)
Louis Del Monte estimates that...
Who?
I don't like this kind of reasoning. Science should never be about authority.
With that said, the article doesn't appear to have any credible arguments, just the kind of contrived timeline you are familiar with from bad science fiction with Jean-Claude Van Damme in the lead.
Re:AI is always "right around the corner". (Score:4, Interesting)
Translation is like predicting the weather. If you want to do an okay job of predicting the weather, predict either the same as this day last year or the same as yesterday. That will get you something like 60-70% success. Modelling local pressure systems will get you another 5-10% fairly easily. Getting from 80% correct to 90% is insanely hard.
For machine translation, building a database of 3-grams or 4-grams and just doing simple pattern matching (which is what Google Translate does) gets you 70% accuracy quite easily (between Romance languages, anyway; it really sucks for Japanese or Russian, for example). Extending the n-gram size, however, quickly hits diminishing returns. Your accuracy gains depend on the corpus, and by the time you reach an n-gram size where you're really accurate, you effectively need a human to have already translated each sentence.
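For the skeptical, here is that pattern-matching approach in miniature. The phrase table is made up for illustration; a real system learns millions of scored n-gram pairs from an aligned corpus rather than three hand-written entries.

```python
# Toy phrase-table translation: greedy longest-match n-gram lookup.
# The table and sentence are hypothetical illustrations.

PHRASE_TABLE = {
    ("je", "voudrais"): "i would like",
    ("une", "tasse", "de"): "a cup of",
    ("café",): "coffee",
}

def translate(tokens, max_n=3):
    out, i = [], 0
    while i < len(tokens):
        # Greedily match the longest known n-gram starting at position i.
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            gram = tuple(tokens[i:i + n])
            if gram in PHRASE_TABLE:
                out.append(PHRASE_TABLE[gram])
                i += n
                break
        else:
            out.append(tokens[i])   # unknown word: pass through untouched
            i += 1
    return " ".join(out)

print(translate("je voudrais une tasse de café".split()))
# -> "i would like a cup of coffee"
```

The diminishing returns show up as table size: by the time your n-grams are long enough to be reliably accurate, the table is effectively a list of whole sentences a human already translated, which is the point above.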
Machine-aided translation can give huge increases in productivity. Completely computerised translation has already got most of the low-hanging fruit and will have a very difficult job of getting to the level of a moderately competent bilingual human.
Re:AI is always (Score:5, Interesting)
If you think a self-driving car is an AI, then you know nothing about intelligence.
A self-driving car is about as smart as a worker ant. It can move around obstacles, it can move heavy loads (like a fat arse). It has taken 50 years for computers to replicate an ant, and to do it we need 100,000 times the power. Oh sure, the self-driving car follows GPS instead of scent trails, but no self-driving car can follow a trail that doesn't exist.
Re:AI is always (Score:3, Interesting)
And how long did evolution take to make an ant? How long from there to a human?
Re:AI is always (Score:4, Interesting)
Google's car has been programmed to know how to drive. It cannot learn how to fly. It cannot learn how to build a new copy of itself. It cannot learn to bake a loaf of bread.
It is in no way AI.
Re:AI is always (Score:5, Interesting)
It's not going to change its mind halfway to New York and go somewhere else.
Right - it's not like direction-finding devices can't find construction and route you around it.
Until a machine can come up with an idea of its own, it's not intelligent.
You've just invalidated at least half of the human race.