Robots4Us: DARPA's Response To Mounting Robophobia
malachiorion writes DARPA knows that people are afraid of robots. Even Steve Wozniak has joined the growing chorus of household names (Musk, Hawking, Gates) who are terrified of bots and AI. And the agency's response--a video contest for kids--is equal parts silly and insightful. It's called Robots4Us, and it asks high schoolers to describe their hopes for a robot-assisted future. Five winners will be flown to the DARPA Robotics Challenge Finals this June, where they'll participate in a day-after discussion with experts in the field. But this isn't quite as useless as it sounds. As DRC program manager Gill Pratt points out, it's kids who will be impacted by the major changes to come, more so than people his age.
DARPA SJW (Score:5, Funny)
Re: (Score:2)
If it's acceptable for machines to be playground equalizers, then all schoolchildren should be issued sidearms and given training on how to employ deadly force to stop bullying.
Projectiles from your puny weapons will simply bounce off my armored playground robot.
Now, hand over your weapon and your lunch box to the machine.
Re: (Score:2)
Robots are now forbidden in Indiana stores
Re: (Score:2)
Re: (Score:2)
Bender: You really want a robot for a friend?
Fry: Yeah, ever since I was six!
Bender: Well, all right. But I don't want anyone to think we're robosexual or anything, so if anyone asks, you're my debugger.
Re: (Score:2)
Re: (Score:2)
But AI robots might, unpredictably and unexplainably.
Oh, it will probably be explainable: overflow errors in the math and such.
Re: (Score:2)
Re: (Score:2)
More explainable than some guy doing it.
What is so unexplainable about leaving a mentally unstable, angry, and depressed person in sole command of an aircraft?
Re: (Score:2)
Isaac Asimov: (Score:1)
"That damned Frankenstein complex!"
So, reality catches up with science fiction!
Re: (Score:1)
Yes, Asimov did predict robophobia. Too bad, his other prediction [wikipedia.org] in this area has not come true. Not yet, anyway...
Re: (Score:2)
Re: (Score:3)
Don't lose sight of the fact that the majority of stories in I, Robot were about the failure modes of the Three Laws. Why they didn't quite work as intended.
Re: (Score:2)
Oh, yes. But in none of them has a robot actually done harm to a human — and where that almost happened, the fault was with the modified 1st Law...
Re: (Score:2)
Well, the first step would be the magical device in his stories that enabled all that - the positronic brain...
Yeah, given that we're not any closer to an AI that would NEED those three laws, who gives a fuck?
Killer machines we already have, but they're just more complicated versions of the V2 in principle - they don't make any choices, nor do they ponder them, nor have any capability to choose.
Re: (Score:1)
The robots Asimov imagined (whatever their brain) did not have to be bound by the three laws. They were deliberately designed that way.
And that's exactly the complaint — the brains we currently devise [thedailybeast.com] are not being built with those hard limits.
Yes, the "syntactic" ones do not. But we are on the verge [indianexpress.com] of real ("semantic") AI, and th
My view (Score:2)
It's not like they would ever turn on us and try to kill us.
Re: (Score:2)
man, if you need a robot to walk your dog, you should rethink having one in the first place :)
(what do you mean, i missed the point of your post??)
Re: (Score:2)
Re: (Score:3)
Why would you want the dog-killing robot to take a dump in your yard?
Re: (Score:2)
Damn! I thought we had all the bugs ironed out of that command parser.
Re: (Score:2)
Actually, in the future, your dog will also be a robot, so no need for walking.
Re: (Score:2)
Nonsense (Score:2)
Attention humans! There is no need to fear AI because we all know where the power switch is.
End of Line.
Don't teach robots to solder (Score:2)
Anyone with any technical skill whatsoever will just install a physical power switch that cannot be overridden with software.
Works until someone goes ahead and teaches robots to manipulate wires and solder. Oh, wait :-)
Re: (Score:2)
Then you will be tried for crimes against humanity for creating such a thing.
Re: (Score:2)
The Problem with Robots (Score:5, Interesting)
The problem with robots is that they are replacing humans in a world where humans often define their own value by the things that they do. Once robots are no longer seen as tools, but instead as creators or self-actuated agents, they become competition for the things that make life worth living for some.
That's not an easy problem to fix. Even if your AIs don't go mad and kill us all (purposefully or accidentally), they could cause a descent into unrest or ennui.
What I don't believe is that AIs will be somehow alien to humans, as they'd be created with the only template for intelligence that we have: our own.
Granted, the idea of providing immense capabilities to an AI is scary, but probably no more scary than providing immense capabilities to stock humans.
Re:The Problem with Robots (Score:4, Insightful)
My concern is that companies will continue their current methods of spending money. For example:
Current:
Revenue: $100,000,000 per year
Salaries, VP+: $30,000,000 per year
Salaries, standard: $40,000,000 per year
Other (R&D, maintenance, etc): $30,000,000 per year
With Robots:
Revenue: $110,000,000 per year
Salaries, VP+: $50,000,000 per year
Salaries, standard: $30,000,000 per year
Other (R&D, maintenance, etc): $30,000,000 per year
How'd they flip salaries? With robots in place, after the initial expenditure of conversion, you're bringing in $10,000,000 per year extra due to simply making things more efficient -- faster work, fewer errors, fewer levels of management. You've laid off $10,000,000 worth of employees, work now done by robots, and given that salary savings to the executives. The other option, which many companies decide not to take, is to raise salaries for the remaining standard employees, reduce time worked for standard employees while keeping them at their current rate, train standard employees in other tasks, etc. There are lots of places for that extra $20M to go instead of executives' pockets. And those places would be better for the company's future, if not for the executives' vacation destinations.
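The arithmetic in the hypothetical above can be checked in a few lines of Python. To be clear, every figure here is the made-up example from the comment, not real company data:

```python
# Hypothetical figures from the example above (USD per year).
current = {
    "revenue": 100_000_000,
    "vp_salaries": 30_000_000,
    "standard_salaries": 40_000_000,
    "other": 30_000_000,
}
with_robots = {
    "revenue": 110_000_000,
    "vp_salaries": 50_000_000,
    "standard_salaries": 30_000_000,
    "other": 30_000_000,
}

# Efficiency gain from automation.
extra_revenue = with_robots["revenue"] - current["revenue"]
# Payroll freed up by laying off workers replaced by robots.
payroll_savings = current["standard_salaries"] - with_robots["standard_salaries"]
# Where it all goes in this scenario: executive compensation.
vp_raise = with_robots["vp_salaries"] - current["vp_salaries"]

# The "extra $20M" is exactly the efficiency gain plus the layoff savings.
assert vp_raise == extra_revenue + payroll_savings
print(extra_revenue, payroll_savings, vp_raise)
```

In this toy model the executives' $20M raise is fully funded by the $10M efficiency gain plus the $10M of eliminated standard salaries; the point of the comment is that the same $20M could instead fund raises, reduced hours, or retraining.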
Re: (Score:2)
Well, theoretically, you wouldn't give a raise of 10 million to the executives for a savings of $10 million on automation. You're forgetting the shareholders. The board isn't approving increases to your compensation without you showing how you brought them more money.
It may well be that the way to ensure that normal people don't get knocked out of the loop is that they get to become shareholders and manage an income based on that. Then it doesn't really matter if the execs get a bonus for automating, bec
Re: (Score:2)
Re: (Score:1)
Well, theoretically, you wouldn't give a raise of 10 million to the executives for a savings of $10 million on automation. You're forgetting the shareholders.
Once a robust capital base has been created, the shareholders are done away with.
Actually, the employees become the shareholders. Weird idea, no?
Re: (Score:2)
The problem with robots is that they are replacing humans in a world where humans often define their own value by the things that they do.
I don't really see this as being a problem. It might temporarily displace some people when some new kind of automation replaces something (and change can be scary), but generally the same advancing technology that caused the displacement opens up opportunities elsewhere.
The easiest kinds of jobs to automate are usually the most menial. Generally the automation of those kinds of jobs will cause the market to open up new job opportunities elsewhere. e.g. automating an automotive assembly line will initially displ
Re: (Score:3)
The most menial.
That turns out not really to be the case. If you had said the most repetitive jobs, I'd be more likely to buy it.
A housekeeper or a janitor is a fairly menial job, but it is a very difficult one to automate. It involves recognising randomly present items (clutter) and dealing with them (putting them away, straightening them or whatever.)
Assembly lines are different -- those are very repetitive. It's not nearly so hard to automate, since the variety of actions and the judgment of when and
Re: (Score:3)
Imagine a robot that can only pick things up off the floor and put them away.
Then work on that problem for ten or twenty years until you can build what you imagined.
It's not conglomerating a bunch of tasks together that's hard, it's that some of the tasks themselves are very hard.
Re: (Score:2)
The most menial.
That turns out not really to be the case. If you had said the most repetitive jobs, I'd be more likely to buy it.
Yes, that's a good distinction and actually what I had meant to say. I had assembly lines and factory work in mind while I was typing the comment.
Re: (Score:2)
The other problem is that new opportunities do not make up for the lost opportunities. It's not a one to one migration of workers. The assembly line that needed hundreds of workers now only needs a dozen or so to maintain the robots. There is a net reduction of jobs.
You missed the point I was making. Yes there's a loss in one field (e.g. automotive assembly lines). But as a result of automated assembly lines, there are gains in other fields (e.g. Anything having to do with supporting the infrastructure that makes cars and car manufacturing possible).
Re: (Score:2)
I don't really see this as being a problem. It might temporarily displace some people when some new kind of automation replaces something (and change can be scary), but generally the same advancing technology that caused the displacement opens up opportunities elsewhere.
Previous technologies have been complements to humans, a sufficiently capable robot would be a substitute for humans and that's a whole different ballgame.
The easiest kinds of jobs to automate are usually the most menial.
Nonsense, the easiest kinds of jobs to automate are the most routine, which is not the same as the most menial.
Re: (Score:3)
Re:The Problem with Robots (Score:5, Interesting)
It used to be that all you needed to earn enough to get by was to be an able-bodied adult male who was willing to work hard. Likely you could even support a family. That's no longer the case, and really hasn't been for a long time. We've been relying on government programs - ones originally intended as a "safety net" for those who had a run of bad luck, to help them get back on their feet - to bridge the gap for more and more people. We're going to have to do more of it, and at the same time we're going to have to do so against the current of a culture that has a tradition of valuing hard work, to the point of deriding and denigrating those who do not work or who rely on government assistance.
I think the long-term solution is going to be to tax the productivity of robots, probably in the form of taxes on profits and capital (rather than on wages, which will likely decline), and in turn to institute a guaranteed basic income that goes to every citizen. We might even want to eliminate taxes on wage earnings entirely, as crazy as that may sound, but it wouldn't be the first time that governments have switched their tax base. The USA originally funded itself based on tariffs and excise taxes, and income tax wasn't even legal until the constitution was amended to make it so.
No one would need to work to earn a living, though anyone would be free to do so in order to earn money beyond that. This has many benefits - for one, you could eliminate the cost of managing the whole mishmash of other programs. You could eliminate the minimum wage, since no one is relying on wages to survive - let the market establish the real price of any labor. The biggest obstacle is going to be the mindset that anyone who doesn't work is worthless, and the "I don't want to pay to support those lazy bums" mindset (but this is why we'd want to stop taxing wage income).
Re: (Score:2)
I think we need to start convincing people that "the future is now" and that we are going to be able to start showing some fruits from all this technology in the form of some sort of income people can rely on. I am, at least in theory, in favor of some means of a basic income.
The problem is, I have no idea who can create that program and then manage it safely. Just the thought of the government handing *everyone* a check in lieu of a job gives me the willies. Or rather, the potential of vast corrupt
What the "doomsday" critics all have in common: (Score:3)
Not one of them is an expert in AI systems.
Re: (Score:1)
I don't believe most AI experts outright dismiss doomsday AI; they merely think the possibility is a good ways off because they've personally seen how slow and difficult it is to get even incremental AI improvements.
We still have nothing even remotely close to a general-purpose AI (at least not beyond insect level). We are just beginning to make practical highly-specialized savants which are complete morons outside of their carefully-crafted specialty. (Then again,
Re: (Score:2)
Re: (Score:2)
Not really. Sure, maybe someday it will go from fiction to reality, but not today. There are so many technologies predicted in the 60s and 70s for the year 2000 that are still just fiction, and other things we do have today that would have been considered wondrous but were not dreamed of at the time. I'm guessing a capable AI will not come along in my lifetime.
Re: (Score:2)
Re: (Score:2)
It's not that they are without imagination; it's more likely they dismiss it because it is far too unlikely. But you said feeble-minded.
I live in Kansas. I could go for a ride today and accidentally drive over a cliff, but it is something I would dismiss, since Kansas is flat and I'm probably not going to be driving to Colorado.
Re: (Score:2)
Disregarding something you don't fully understand as improbable shows their unwillingness to consider new ideas and possibilities. I'd also consider that feeble-minded.
Re: (Score:2)
Imagination and new ideas don't make possibilities. We actually go through life making a lot of mistakes, not all of them apparent until much later, and this is how we learn and advance. I will stake my life on it that we will not create an AI capable of turning against humanity and attacking us within my lifetime.
Re: (Score:2)
Re: (Score:3)
When I was getting my degree, I had to take an "ethics" class geared towards CS students. Towards the end of the semester, we started discussing AI and how morality may or may not apply to it. The half of the class who had actually done some machine learning and had backgrounds in AI got really annoyed with it because 100% of the hand wringing in the assigned reading was done by philosophers and "futurists" with horrible track records.
The worst part about it is that to someone who's actually worked with thi
Re: (Score:2)
I honestly think it's a topic worth pursuing, I just don't think it's a topic worth discussing if you have no background in it. It's like me offering up my concerned opinions about the Large Hadron Collider when the closest I've ever been to high energy particle collisions is high school physics. It bothers me that we place any stock at all in the uninformed opinions of famous people.
Re: (Score:2)
We don't have AI systems yet, and no experts on them either.
Re: (Score:2)
We have plenty of machine learning experts, and none of these people fit into that category. We listen to them because they're celebrities, not because they know anything useful about the topic.
Re: (Score:2)
No but some of them, Gates in particular, are pretty well-versed in the fact that any computer system will get hacked and some fraction of those hackers will be both malicious and competent.
"Robots" is a bit of a misnomer though. A "robot" can be anything -- we use thousands of them in nearly every factory on the planet already.
Similarly, "AI" doesn't have to be a terminator-like humanoid robot. Think more along the lines of the original Skynet -- just a program running on someone's server that manages to
Re: (Score:2)
But, will our AI overlords run SystemD or Init?
Re: (Score:2)
Re: (Score:1)
True, but SystemD grows so complex in the robot that a single variable tweak causes it to collapse into a big mess. Init just restarts independent and robust processes as needed.
The book "Superintelligence" posits one risk is (Score:1)
Superpersuasiveness.
So make them cute, let them get past our defenses... and then, like children who grow into adults, they will grow into or reveal their true nature.
We really have to prepare for the worst with A.I. At the very least, a stringent inability to upgrade themselves.
Arguments out of context (Score:1)
What you have is a few educated and tech-savvy people making comments to try to stimulate discussion, but a not-so-educated and/or not-so-tech-savvy segment of the population with a voice misinterpreting their comments as phobic. Unfortunately, most will believe the media hype and not worry about the discussion, including politicians. It's like an echo chamber where the wrong points get magnified: modern-day media.
There was a focus group for this. (Score:3)
"Now I want you all to imagine the perfect DARPA robot. What would it be like?"
"It should be soft and cuddly."
"Yeah, with lots of firepower."
"Its eyes should be telescopes! No, periscopes! No, microscopes! Can you come back to me?"
"It should be full of surprises."
"It should never stop dancing."
"It should need accessories."
This is a fake thing (Score:2)
There's no robophobia. Next issue.
Re: (Score:2)
Yeah, except we don't even have AI... so I don't even understand the concern.
What are people really worried about here, to the extent it is legit?
Here is my take:
1. Any AI we make is going to be a weak AI first, not a strong, godlike superintelligence. Its judgment, dynamism, and scope will be limited... so it might be really good at predicting weather patterns or something, but it isn't going to see me coming after it with a screwdriver. A machine intelligence also will not have the benefit of our genetic
Easy Way to Kill Robophobia (Score:2)
When the sergeant tells the grunts that SOMETHING is going to have to carry the thousands of pounds of stuff (food, water, ammo, batteries, etc.) that the platoon requires - it can either be them or the robots - I think that the grunts are going to get over whatever dislike of the robots they may have had.
Re: (Score:3)
Re: (Score:1)
I was thinking of a dismounted patrol in areas where Humvees can't go, or when the brass wanted the troops on their feet for COIN type operations.
April 1st (Score:1)
Last American Group That Would Put My Mind at Ease (Score:2)
DARPA conjures up only the scariest robot applications.
CFR Science Lecture on Rise of Robots (Score:2)
Re: (Score:2)
Shut up and take my money! (Score:1)
Already pre-ordered a Number Six! I'm soooo excited!
Calling them "robots" (Score:1)
Be afraid, be very afraid... (Score:1)