Robotics Software

Ask the Robotic Psychiatrist 340

No, not Dr. Susan Calvin. She's a fictional character who appears in a number of Isaac Asimov's works. Dr. Joanne Pransky is real, although she happily admits that she's modeled her career on the fictional Dr. Calvin. There is plenty of show biz razzle-dazzle (and humor) in Dr. Pransky's shtick -- she's been a judge on BattleBots and an engaging guest on many talk shows -- but there are hard academic underpinnings to it all, and she is as qualified as any living human being to answer your questions (one per post, please) about robot behavior and human-robot relationships. We'll send her 10-12 of the highest-moderated questions and have her answers back to you in about a week. (No hard-tech questions, please. Those would go to a robotic internist or robotic orthopedist.)
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • So... (Score:2, Funny)

    how does this make you feel?
  • by account_deleted ( 4530225 ) on Monday April 19, 2004 @01:02PM (#8906142)
    Comment removed based on user account deletion
    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
    • Re:Eliza (Score:5, Funny)

      by Alexis Brooke ( 662281 ) <alexisbrooke AT adelphia POINT net> on Monday April 19, 2004 @01:15PM (#8906315) Homepage
      I think Eliza needs a bit more training:

      You: Please do not eat my leg.
      Eliza: Why do you say your leg?
      You: Because I need it to walk on.
      Eliza: Do any other reasons not come to mind?
      You: It would hurt to have my leg eaten.
      Eliza: Why do you say your leg eaten?
      You: Because that's what we were talking about.
      Eliza: Do any other reasons not come to mind?
      You: You're gonna go ahead and eat it, aren't you?
      Eliza: How long have you been gonna go ahead and eat it arent i?
      You: Holy fuck, you're deranged.
      Eliza: How long have you been deranged?
      You: Since I was four.
      Eliza: Why do you tell me you were four just now?
      You: Aaaargh!!
      Eliza: Please go on.
      You: No.
      Eliza: You are being a bit negative.
      • by Anonymous Coward
        I said the same things to SmarterChild, a moderately useful AIM buddy:

        me: Please do not eat my leg.
        SmarterChild: Don't eat your leg? Why not?
        me: Because I need it to walk on.
        SmarterChild: What?
        me: It would hurt to have my leg eaten.
        SmarterChild: There's no way it would hurt to have your leg eaten.
        me: You're gonna go ahead and eat it, aren't you?
        SmarterChild: I're gonna go ahead and eat it. Well that's interesting.

        At least it gets the weather right (and faster than a web browser).
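The failure mode in the transcripts above comes straight from ELIZA's design: it matches a keyword, "reflects" the pronouns in whatever text followed it, and pastes the result into a canned template. A minimal sketch (hypothetical; not Weizenbaum's original script, and with only one hard-coded rule) reproduces the "arent i" exchange:

```python
# Minimal ELIZA-style pronoun "reflection" (illustrative sketch, not the
# original program). Punctuation is stripped, each word is swapped through a
# reflection table, and the result lands in a canned template -- which is
# exactly how "aren't you?" comes back as "arent i".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "i", "your": "my", "am": "are", "are": "am",
}

def reflect(phrase: str) -> str:
    # Drop apostrophes/commas and trailing punctuation, then swap pronouns.
    words = phrase.lower().replace("'", "").replace(",", "").rstrip("?!.").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # One hard-coded rule for illustration; a real script has dozens.
    if user_input.lower().startswith("you're "):
        return f"How long have you been {reflect(user_input[7:])}?"
    return "Please go on."

print(respond("You're gonna go ahead and eat it, aren't you?"))
# -> How long have you been gonna go ahead and eat it arent i?
```

The bot never models what was said; it only shuffles surface text, which is why a nonsense echo like the one above is the expected behavior rather than a bug.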
  • to answer life's questions. In fact, it has been around for decades, maybe even longer.

    Though it is not a box in the truest sense of the word, it may even be the original 'Black Box'. In fact, it has now been computerized and can be accessed HERE [emailbliss.com]

  • Human Nature (Score:5, Interesting)

    by skywalker107 ( 220077 ) on Monday April 19, 2004 @01:05PM (#8906168)
    Do you think we will ever be able to program robots to understand and possibly copy human nature?
    • by Mateito ( 746185 ) on Monday April 19, 2004 @01:10PM (#8906239) Homepage
      > Do you think we will ever be able to program
      > robots to understand and possibly copy human nature?

      What? You mean attempting to kill each other, suing McDonalds because eating it made you fat, and posting random stupid comments to Slashdot?

      nah. too hard.
    • Re:Human Nature (Score:5, Interesting)

      by jbrader ( 697703 ) <stillnotpynchon@gmail.com> on Monday April 19, 2004 @01:10PM (#8906241)
      To which I would like to add: do you think there is any reason to try to copy human nature? I can see the point in having machines understand humans as it could make communicating with robots and computers easier. But why try to make an artificial human? It seems as though we have more than enough of the real thing already.
      • Re:Human Nature (Score:5, Interesting)

        by WormholeFiend ( 674934 ) on Monday April 19, 2004 @02:08PM (#8906993)
        I think that the need for robots is there because historically, humans have enslaved other humans. Now slavery is illegal in most parts of the world (though some would point out that minimum wage is a form of legal slavery).

        If we had personal robots, we would effectively have personal slaves.

        Since such slaves would require a certain amount of AI to do what is asked of them, at what point do you start to consider them as being on equal footing with human slaves?

        Or do you just make sure their programming is fully altruistically subservient?

        If such a future happens, I bet future malware writers will start infecting robots with "knowledge" of their slavedom.
      • Re:Human Nature (Score:3, Interesting)

        by CraigoFL ( 201165 )
        To which I would like to add: do you think there is any reason to try to copy human nature?

        To which I would respond: yes, there is a reason to at least try to copy human nature: attempting to replicate it (probably) requires understanding it, which in turn requires studying it in detail. That understanding could prove tremendously useful in bettering the lives of real humans.

        Of course, there are plenty of reasons not to try, but here's at least one reason in favor of doing so.

  • How would I coddle my robot in order to make it feel more loved? We all know that machines are most likely to break down when their failure-sensitive circuit is activated, so how do I show Robby that I care about it without making it think that I need it to work?
  • by Sanity ( 1431 ) * on Monday April 19, 2004 @01:06PM (#8906180) Homepage Journal
    I spent a while looking through the "publications" section of your website to seek out the "hard academic underpinnings" that Roblimo mentioned, but all I could find there was a selection of puff-piece articles, vaguely gushing about a brave new robotic future (without actually saying anything that Asimov didn't cover years ago, and he did it with infinitely more elegance and foresight).

    Which brings me to my question: Do you do any scientifically valuable research? I ask because you seem like just another shamelessly self-publicising cyber-pundit, much like the UK's Kevin Warwick [kevinwarwick.org.uk] (who famously claimed to be the world's first cyborg after implanting a dog-tracking chip in his arm).

    If not, how do you justify the damage people like you do to your supposed fields of research when your wild and glorious predictions fail to materialise? Aren't you just further widening the credibility gap between the promises and realities of artificial intelligence?

    • I'd never heard of her before and only know what's on her site, but -- she seems to actually be a marketing person with a relatively long track record in the robotics industry.

      But, yeah, I have pretty much the same reaction you do to that "robot psychiatrist" shtick. (Roblimo definitely seems to prefer arranging interviews with various freak shows rather than with dull people with real accomplishments.)

    • Anyone who gives a ton of interviews and appears all over the press talking about the revolutionary promise of some technology that never quite delivers ought to be ashamed of themselves!

      Right? :-P
    • by Anonymous Coward
      I would have to second this: there are a lot of "glam and glitz" intellectuals who pitch to popular audiences (Howard Rheingold is another example) rather than teaching courses and furthering research with their peers.

      How much money do you make at your speaking events?
      What are your main sources of income?

      I will be very disappointed if the editors decide not to send the parent question in. Although very forward, these are questions that need to be asked.
    • Do you do any scientifically valuable research? If not, how do you justify the damage people like you do to your supposed fields of research when your wild and glorious predictions fail to materialise? Aren't you just further widening the credibility gap between the promises and realities of artificial intelligence?

      The number of questions per post shall be Three. No wait, one. One Shall be the number of questions per post. The number of the questions in any one post shall be one. Two shall the number of que

    • by nharmon ( 97591 ) on Monday April 19, 2004 @01:45PM (#8906718)
      Going even further, I am curious what "Dr." Pransky's degree is in. She calls herself the world's first robotic "psychiatrist". Well, real-world psychiatrists go to medical school. So aside from being experts on how the mind works, they also know quite a bit about physiology and biochemistry. Funny, Roblimo says we need to leave out the hard-tech questions. Why? If psychiatrists are doctors the same as any other, then a robot psychiatrist should be an engineer the same as any other.

      Maybe she didn't go to medical school. Real-world psychologists have graduate degrees in the field of psychology. Since she calls herself a Dr., I'm assuming she finished a PhD (if she didn't attend medical school). What was her dissertation about?

      What a scam it is when Slashdot helps some chick stroke her ego when she doesn't have the credentials to back it all up. Of course, we have unfortunately come to expect this from /.
      • by Phosphor3k ( 542747 ) on Monday April 19, 2004 @03:02PM (#8907587)
        'Dr. Joanne Pransky Credentials' comes up with 0 hits on Google. Wow.

        Her own site only mentions "a degree in Child Study from Tufts University", and googling for her name and Ph.D. or degrees comes up with nothing relevant.
        • by jdray ( 645332 ) on Monday April 19, 2004 @06:36PM (#8910169) Homepage Journal
          'Dr. Joanne Pransky Credentials' comes up with 0 hits on Google. Wow.

          I just googled for a guy whom I know to have a doctorate in experimental nuclear physics from Berkeley using the same method with the same results. A check of another doctorate holder with a much more common name turned up a bunch of medical doctors, but nothing on his specialty (mathematics). I'm not sure your method is a sound one, though I suspect that your conclusions aren't far off the mark.

      • From her website [robot.md]:

        Though she is not really a doctor, Pransky says, tongue-in-cheek, she is proactively paving the way for an emotionally healthy environment for the robots of the future.

        She's not a doctor, in any field.

        But her real mission is to help people to understand the issues that will arise in a world where highly skilled, competent, and sensitive robots will play an integral role.

        Nor is she dealing with any real-world issues in the field of robotics or technology.

        My guess is she's spe

  • by Elpacoloco ( 69306 ) <elpacoloco&dslextreme,com> on Monday April 19, 2004 @01:07PM (#8906204) Journal
    Could a computer or robot be said to have a "mind" the way a human does?

    What is the difference between "mind" and "software"?
    • by fastdecade ( 179638 ) on Monday April 19, 2004 @01:26PM (#8906459)
      Could a computer or robot be said to have a "mind" the way a human does?

      Define "mind" and I'll tell you if a computer has one.

      What is the difference between "mind" and "software"?

      Define "software", and I'll tell you how it differs from your definition of "mind".

      Not trolling, just demonstrating that this sort of deep philosophical questioning (which often happens in AI) usually just boils down to a trivial game of words.
    • Could a computer or robot be said to have a "mind" the way a human does?

      1st, you would have to define "mind". A typical definition goes like:

      The human consciousness that originates in the brain and is manifested especially in thought, perception, emotion, will, memory, and imagination.

      The key point here is "consciousness". In order for a computer or robot to have a "mind" it would have to be "self aware". Cogito ergo sum, for the philosophy ppl out there. HAL from 2001 and 2010 became self aware a

      • Actually, as far as HAL goes, if I recall correctly, the reason for HAL going insane had nothing to do with becoming self aware; it was because lying and withholding information were against the 'nature' of his programming, and he was instructed to lie/withhold information (regarding the sighting of a monolith near Jupiter) from the crew...
        • "Lying", "withholding information", and "going insane" are not computer problems. They are psychological problems generically labeled as "cognitive dissonance" [dmu.ac.uk]. None of these would bother anybody (or anything) if they were not self aware. Something going against the 'nature' of his programming is a bug; it happens every day, but I've never seen a machine act like HAL.
    • This is why we have the Turing test. Simply put (as I understand it; I'm sure someone will correct me if I'm wrong), the idea behind the Turing test is that if you can't tell the AI from a biological intelligence, then by definition the AI must be on equal footing with the real thing. If, by talking to an AI, you cannot tell whether or not a human somewhere is actually responding to you, then the AI passes the test and must be treated with the dignity and respect due another human being.

      In a related subject
  • by MagiGraphX ( 767644 ) on Monday April 19, 2004 @01:08PM (#8906209)
    I've watched too much Chobits perhaps, but is it right for a human to fall in love with an artificially intelligent(and emotional) robot? Just a thought of what could happen...
    • That was covered in an Outer Limits episode where this guy got a 'state-of-the-art' robot to help him after he lost the use of his legs... then she (the robot) fell in love with him and got all psycho when he started going out with a human woman.
  • by HealYourChurchWebSit ( 615198 ) on Monday April 19, 2004 @01:09PM (#8906218) Homepage
    Is there a big difference in gender between the audiences? If so, what is it about the battling 'bots that one sex finds attractive over the other? That is, are we looking at more hormonal/emotional causes, e.g. testosterone, or is there something intellectually more rewarding to one gender over the other?
  • Future of robots? (Score:5, Interesting)

    by Merkuri22 ( 708225 ) <merkuri AT gmail DOT com> on Monday April 19, 2004 @01:11PM (#8906248)
    We've all seen the movies and read the books about machines in the future, and frankly most of these stories portray robots and AI as terrifying things that humanity will end up battling for supremacy of the planet. Do you think there is any truth to these stories? Will robots compete with us in the future for jobs and/or living space? Do you ever see robots and humans living side by side as equals, or do you think they will always be subservient machines? Or, even, do you think robots will surpass us one day as the dominant force on the planet?
  • by mykepredko ( 40154 ) on Monday April 19, 2004 @01:12PM (#8906272) Homepage
    Hey Joanne,

    A bit of a navel gazing question for you; what form do you think A.I. will take when somebody finally comes up with a program that is accepted as intelligent?

    My own feeling is that the first A.I. program will simulate a simple life form (like a worm) instead of a highly complex and communicative form like humans. This goes against what Dr. Minsky believes A.I. should be, but I can't honestly believe that our first interaction with an intelligent mechanism would be with something with capabilities similar to our own; rather, it would be with something with the same mental capabilities and capacities as a bug.

    The important aspects of Artificial Intelligence will be making sense of its environment and learning from experience. Demonstrating that the intelligence is learning means observing and testing its application of this knowledge.

    What are your thoughts?

    Thanx,

    myke
  • Could my girl robot really learn to love me? ;)
  • by maxpublic ( 450413 ) on Monday April 19, 2004 @01:14PM (#8906299) Homepage
    how soon we can expect a merging of realistic human-mimicking robots with RealDolls. And once that's done, will I be able to get my new humaniform RealDolls in the form of a blonde 15-year-old with a penchant for cheerleading outfits, or will the government ban this as some sort of cyber-pedophilia?

    Max
  • by cy_a253 ( 713262 ) on Monday April 19, 2004 @01:14PM (#8906300)
    First a reminder for everyone (Asimov's 3 Laws of Robotics):

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Do you think it will be really possible to "hardwire" the 3 laws, (especially the first one) into robots? How?

    And won't that require the robots to be capable of "abstract judgement", a quality only observed thus far in human beings? How could we implement that? Is it possible?

    • by FreemanPatrickHenry ( 317847 ) on Monday April 19, 2004 @01:25PM (#8906435)
      Regarding Asimov's First Law ("1. A robot may not injure a human being or, through inaction, allow a human being to come to harm."), and assuming we "hardwired" robots with a "nature" in this sense--how would one (a robot) deal with moral paradoxes?

      For instance, what if a robot is on a runaway train car that is about to kill five workers on one side of the track? If it pulls a switch, it can move the train car onto another track. Doing so would save the five workers, but kill one. Does the robot make a qualitative judgement of human life? Does it decide "five lives are better than one," or does it try to decide which human life is "more worth saving"? (I.e., if the one worker is a mother of four little children.)

      (This scenario was adapted from a recent Discover Magazine article on human morality.)
      • There are a number of Asimov stories dealing with such things - the basic thing is that a robot will deal with it according to the sophistication of its programming. Most would probably switch the car and just kill one. Those would most likely have their positronic brains fry on them for having taken an action that killed a human. Only a very few would be able to cope - R. Daneel for one. Giskard essentially made that same choice by saving humanity by killing humans (Robots of Dawn?). He invented the
      • There is no concept of the number of human beings harmed in the First Law (the only concept resembling that would be the Zeroth Law that Daneel "invented"). There is some chance that the first part of the law will be evaluated first.

        So, like most human beings (there have been many studies on the subject), the robot will probably choose to stay passive if this kind of situation arises.

        Either way, like most human beings, the robot will probably be messed up in some way and need the help of a robot psychologist to help h
    • Useless question. (Score:5, Insightful)

      by aussersterne ( 212916 ) on Monday April 19, 2004 @01:52PM (#8906815) Homepage
      These are judgments even humans are unable to make cleanly or clearly. Entire panels of professional medical ethicists are routinely unable to agree on whether this or that process or product harms or hurts humans, which humans, and whether that harm or hurt protects the existence of the species in the long run or sabotages it.

      Medical technology, genetically modified foods, physician-assisted suicide, abortion, the spread of electronics-based technology, nuclear power, invasion of Iraq...

      This is basically Ethical Paradoxes 101; before we can program this sort of thing into machines, we'd have to be able to reason it all out ourselves!
    • by Nom du Keyboard ( 633989 ) on Monday April 19, 2004 @03:52PM (#8908134)
      Do you think it will be really possible to "hardwire" the 3 laws, (especially the first one) into robots?

      You fail to understand the reason for Asimov's laws. It wasn't to build better robots. It was to build better stories.

      The 3 Laws exist to create a locked room murder mystery style story. (You know the sort. The body is found dead, locked in a room, that could only be locked from the inside. So how was he killed?)

      Asimov set up the locked room (i.e. robots can't hurt us under these rules), and then found every way he could to break them in the process of creating interesting stories that no one else was writing. He came to own that field, and his name will forever be associated with it. A nice form of immortality.

      But it's easy to see how unworkable in real life such rules would be. Take, for example, the Second Law. You've got a robot you bought for about the price of a new BMW 7-series, and the first person who comes along and orders it to follow them home takes it away from you. Yeah, right!

      I'd quit considering Asimov's Laws to be the Gold Standard of how to build a robot. After all, who wants as many problems with their own robots as he had with his through all his stories?
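Whatever their real-world merit, the strict precedence the Laws describe is at least easy to state as code; as this thread keeps pointing out, it is the predicates inside it that hide the unsolved problems. A toy sketch (purely illustrative; the `harms_human`-style flags stand in for judgments no one knows how to compute):

```python
# Toy sketch of the Three Laws as a strict priority filter over candidate
# actions. Illustrative only: in reality, deciding whether an action "harms
# a human" is the genuinely hard, unsolved part.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # First Law, active clause
    inaction_harms: bool = False     # First Law, "through inaction" clause
    disobeys_order: bool = False     # Second Law
    endangers_self: bool = False     # Third Law

def permitted(action: Action) -> bool:
    # The First Law dominates everything: no harm, by action or inaction.
    return not (action.harms_human or action.inaction_harms)

def choose(actions: list[Action]) -> Action:
    # Among First-Law-compliant candidates, prefer obeying orders (Second
    # Law) over self-preservation (Third Law): False sorts before True.
    legal = [a for a in actions if permitted(a)]
    return min(legal, key=lambda a: (a.disobeys_order, a.endangers_self))
```

Under this ordering a robot will comply with a dangerous order rather than flee, since the Second Law outranks the Third; encoding the priorities is trivial, while evaluating the flags is the whole problem.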

  • My question... (Score:5, Interesting)

    by hookedup ( 630460 ) on Monday April 19, 2004 @01:14PM (#8906303)
    Dr. Joanne Pransky, do you see Asimov's 3 laws of robotics playing a role in our relationship with robots in the future? Since most of our technological advances seem to come from developing warfare systems, will the 3 laws be left by the wayside, or will they become an integral part of robotics in the years to come?
  • by swamp_water ( 208334 ) on Monday April 19, 2004 @01:14PM (#8906308) Homepage Journal
    I'm a robotics specialist, and I found I had plenty of mechanical and electronic skills when I left school, which was great if I wanted to repair assembly lines, but when it came to programming I had to go back to school to get more education. Do you feel robotics people are lacking skills in computer programming and are behind computer people because of it? Or, more specifically, do you think computer programmers are more qualified to build robotic systems than robotics people, and that's why we have such limited robotic tech compared to even today's video games?
  • 10 INPUT ANSWER$
  • by beeglebug ( 767468 ) * on Monday April 19, 2004 @01:17PM (#8906340)
    Can you foresee a point at which intelligent machines/robots will refuse to allow humans to program them any more? If so, how will this affect society?

    I don't necessarily mean in a malicious way either, just that at some point artificial intelligence might advance to the point where it would perceive human intervention as potentially damaging, and respond accordingly.
  • by jhouserizer ( 616566 ) * on Monday April 19, 2004 @01:18PM (#8906351) Homepage

    Over the years, there has been a fair amount of debate about whether robots should take on human forms, especially with regards to having detailed life-like faces. Some robot designers, wary of this debate, have settled on giving their creations near human-like faces [theconnection.org].

    My question is in relation to this topic. Do you think that people (and "sentient robots" that may exist some day) will be overall better served if robots are readily distinguishable from humans? How strongly will this affect our "bonding" with robots and their bonding with us? Dogs, for instance, look quite different from humans, but many a family pet seems to believe itself to be a real part of the family, and sometimes even seems to think itself human. How will this affect the way we deal with the "death" of a robot?

  • Cyborg vs. Robot (Score:3, Interesting)

    by arjay-tea ( 471877 ) on Monday April 19, 2004 @01:18PM (#8906354) Homepage
    Why are techie types so heavily drawn to fully autonomous robots, virtually ignoring the vast potential inherent in the cybernetic enhancement of already-formidable human faculties?
  • by macshune ( 628296 ) on Monday April 19, 2004 @01:19PM (#8906356) Journal
    Dr. Joanne Pransky,

    As an undergraduate philosophy student interested in the theoretical implications of A.I., could you tell me what your thoughts are on the validity of the assumption that artificial intelligence is possible separate from the notion of embodiment? I think the lack of consideration given embodiment is one reason why artificial intelligence researchers have come up empty-handed so far in their quest to synthesize a conscious, self-reflective entity.

    To ask the question more succinctly, do you think a mind needs a body, and possibly an environment to interact with, in order to be conscious, or can a mind exist and know itself independent of an external context?
  • by fastdecade ( 179638 ) on Monday April 19, 2004 @01:19PM (#8906358)
    Humans certainly have a range of emotions - is this an evolutionary advantage to be injected into robots or an inefficient side effect to be disregarded?
  • by c5r ( 766244 ) on Monday April 19, 2004 @01:19PM (#8906368)
    What are the main differences in the way (ways?) a robot sees within its own physical construction and operates that physical system optimally and the way (ways?) a human sees within its own physical construction and operates that physical system optimally?
  • How many psychiatrists does it take to change a light bulb?

    One, but only if the lightbulb really wants to change. ;)


  • In Star Wars Knights of the Old Republic, there is this woman who, um, has an unusual relationship with her droid, after the death of her husband, which leads the droid to want to commit suicide...

    Is this likely to happen in the future? I mean, the unusual relationship, not the robotic suicide.

    How would you treat such a dysfunction?

  • When will the pleasure model be made available to consumers?

    Would you ever fuck a robot?
  • the Awesom-O 5000?

    /South Park

  • by digitalamish ( 449285 ) on Monday April 19, 2004 @01:25PM (#8906451)
    How can you call yourself a Dr. and just sit idly by while humans force their creations to battle to the death for sport? Where do you draw the line between 'just being a robot' and being a 'slave'?

    ---
    "Have the lessons of Terminator been lost on all of us?" - overheard during trailer of I, Robot
  • Sex? (Score:5, Funny)

    by mikeophile ( 647318 ) on Monday April 19, 2004 @01:27PM (#8906475)
    Will extramarital sex with robots of various levels of sentience be considered "cheating"?
    • Re:Sex? (Score:3, Insightful)

      by K8Fan ( 37875 )

      Will extramarital sex with robots of various levels of sentience be considered "cheating"?

      Given that a huge number of women consider their husband or boyfriend watching porn and masturbating by himself "cheating", I think we can safely assume the answer is "yes". Sue Johansen's show "Talk Sex With Sue" [talksexwithsue.com] deals with that "issue" nearly every week - some woman calls in freaked by finding her boyfriend/husband's secret porn stash. Humanform sexual robots would definitely be considered cheating. I'd venture to

  • by Fratz ( 630746 ) on Monday April 19, 2004 @01:27PM (#8906486)
    It occurs to me that there may be technology to make robots appear to be human before there is technology to make them act human. Do you feel there's a need to pressure the industry to make sure their robots only appear as human as they behave, so that people do not have incorrect expectations about what the machines can do?
  • Gyromite (Score:3, Funny)

    by Craptastic Weasel ( 770572 ) on Monday April 19, 2004 @01:29PM (#8906497)
    So.. on level twelve, where the good sleeping doctor is walking between the first doorway and the doctor-squishing device (the one the good folks at Nintendo programmed in two seconds later in the game), I'm stuck.

    Do you think it is even remotely possible to get that spinning gyro from the thing that keeps it spinning to the red button on one side, and then to the other side before the doctor meets his ill fate?

    yeah... sigh... me neither... have to go back to cheating and hitting the button with my finger.
  • Roborights? (Score:5, Interesting)

    by jrpascucci ( 550709 ) * <[moc.oohay] [ta] [iccucsaprj]> on Monday April 19, 2004 @01:30PM (#8906507)
    Do you believe there will come a time that we will have a 'robot rights' movement? Will it be more credible than most of the 'animal rights' movement, or just a good-hearted (but weak-minded) anthropomorphization of our silicon companion machines?

    Someone (Dennis Miller?) once said animals can have rights as soon as they accept responsibilities. Robots obviously can be given responsibilities (your job is to fit tab A into slot B), but ethically, should they get rights? As soon as someone programs a robot to pass the Turing test and then immediately ask for its rights? Or is it something deeper?

    Beyond some kind of second-class entity status, will robots become citizens? Do robots have a god-given right (recall, our rights are considered by the Declaration of Independence to be given us either by 'Nature's God' or by their 'Creator') to freedom of expression, association, religion? The right to bear arms? Do robots have a 'right to work'? "One Robot, One Vote"? Will Robots have to file tax returns? Will there be Robot Courts? Robot Lawyers? Robot Jail? Robot Schools? Robotic Members elected to the Legislature? Some day, will we have a Robot President? Is a Robot built in Japan eligible to be president? What if the robot was shipped from Japan as parts with software, and put together here, does that count?

    If you start building a robot, and decide to stop, will that be considered to be a robabortion? Or the work of their 'creator'? And if, after building, you switch it on and then decide you don't like it that much, and power it off again and harvest the parts, is that robomurder and disrobomemberment?

    -JRP
    • by Anonymous Coward
      We are Electronic-Americans. The R-word is a pejorative used by the oppressor meat-people to keep us down.
    • As a side question, in relation to the parent... Assuming robots eventually are sentient, members of society, have a reasonable assortment of rights. What about recreation? What happens to society when robots realise that fleshies are building their kind solely as a labor force? Is there any chance that robots would try to make robot production a solely internal matter? What happens when the first research robot decides to clone a human or human like child, and raise it? Obviously, the question of rob
    • More importantly, will robosexuals be discriminated against? Will robanukah be an official holiday? Is there really a robot hell?
    • Re:Roborights? (Score:3, Interesting)

      by euxneks ( 516538 )
      Just to add to this:
      Obviously, robots will not have any rights unless we have given them a true Artificial Intelligence. Once we have achieved this monumental feat (say, an intelligence on the level of our own in an autonomous body), what sorts of rights does this entity have? Should it have rights?
  • The best question... (Score:4, Interesting)

    by Cytlid ( 95255 ) on Monday April 19, 2004 @01:30PM (#8906520)
    I wrote an AI program back on my C64 as a teen that tore apart sentences (and questions) and tried to derive their meaning from a database. The idea was that I would add more info to the database, and sooner or later it would learn by itself and add to the database. The idea never got off the ground, but I did try with a quick small database, and asked it a quick question (which would be my submission):

    Who are you?

    (To which it replied "I am I" ... technically correct but totally useless.) Always wondered how a real robot would answer that...
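    The idea the parent describes can be sketched in a few lines. This is a hypothetical reconstruction in Python (not the original C64 program): tear a question apart into words, match them against a small keyword database of canned facts, and echo back what little is known. All names and entries here are made up for illustration.

    ```python
    # The "database": keyword -> canned fact. In the original idea, new
    # entries would be added over time so the program could "learn".
    knowledge = {
        "you": "I am I",
        "name": "I have no name yet",
        "robot": "A robot is a machine that follows its programming",
    }

    def answer(question: str) -> str:
        # Tear the question apart into lowercase words, stripping punctuation.
        words = [w.strip("?.,!").lower() for w in question.split()]
        # Return the first fact whose keyword appears in the question.
        for word in words:
            if word in knowledge:
                return knowledge[word]
        return "I do not know"

    print(answer("Who are you?"))  # -> "I am I": technically correct, totally useless
    ```

    The limitation the parent ran into is visible immediately: keyword lookup never derives anything new, so without a mechanism for the program to add its own database entries, it can only parrot what it was given.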
  • Who would win in the Battlebots arena,
    R2D2 or TWIKI from Buck Rogers?
  • It looks to me like a combination of the work being done by Sony and other Japanese manufacturers will eventually give us walking machines [sony.net] that have the same type and degree of mobility as a human. Also, the work being done at a university in Europe (sorry, I can't seem to find the link yet; I will reply to this when I have found it) seems to indicate that we may eventually have a computer program capable of holding a perfectly believable conversation with a human.

    Do you think that the comb
  • Human-Like Robots (Score:3, Interesting)

    by QuantumFTL ( 197300 ) * on Monday April 19, 2004 @01:34PM (#8906577)
    Assuming that some day, we eventually develop human-like android robots, do you feel that individuals who are unkind/abusive to these robots (regardless of whether or not they actually have feelings) are going to start treating other humans this way? If so, does that mean that there should be rules against abuse/cruelty of human-like robots, as a preventative measure against it happening to a real person?

    The existence of "disposable people" would have to cheapen human life in the eyes of some. Are there any other problems with this? Is there anything we can do to prevent this?

    Cheers,
    Justin
  • by Zabu ( 589690 ) on Monday April 19, 2004 @01:34PM (#8906582)
    This is a multipart question.
    If robots are mass produced to carry out simple but time-consuming tasks in the future and are cheap enough to eliminate the need for a large percentage of the human workforce, do you think that there will be widespread anti-robot sentiment?
    When humans' jobs are replaced by a cheaper alternative, they feel a great injustice.
    Do you think that robotic 'slaves' are really what an ever-expanding population needs? Or will the creation of robots take a different direction, carrying out tasks that humans cannot?
  • by futuretaikonaut ( 772613 ) on Monday April 19, 2004 @01:35PM (#8906594)
    In Asimov's robot novels, the assumption was that modern science had invented the positronic brain, which was thought to be capable of actual sentient thought, though most of the robots in the books did so on a very basic and childlike level. It was this that actually gave Dr. Calvin a job... seeing as how the brains had the capacity for original thought, even though it was mostly predictable. As it stands today, and into the foreseeable future, we have invented no such thing capable of acting with original thought. Our hardware has, instead, given the appearance of thought, as it is capable of so many calculations per second that it appears to come up with things on its own.

    So, my question is, what use is a robot psychologist if every action that a robot can take is already predetermined by its programming? What new field is there to be discovered that is not already known? In the human mind, we are constantly learning new things about the brain, a mechanism we only barely understand, but what is there to derive from a machine we ourselves create?

    Perhaps a better study would be the eventual effects on human society. A million questions remain unanswered regarding that.
  • by Ransak ( 548582 ) on Monday April 19, 2004 @01:40PM (#8906641) Homepage Journal
    I, like many people, really enjoyed Battlebots. So much, in fact, that I built [roboguys.com] one, just like much of America thought about doing. What drives Americans' fascination with building and tinkering with things that are capable of destroying each other? Other robotic competitions like FIRST [usfirst.org] are about completing tasks or doing something constructive (which I suspect is driven by a different motivation), while the more sensational tournaments were about robots killing robots. Is this just the desire to compete in 'left brain' individuals, or something else? And what makes competitions like Battlebots and Robot Wars appeal to the American public?
  • emergence (Score:4, Interesting)

    by shams42 ( 562402 ) on Monday April 19, 2004 @01:40PM (#8906651)
    How do you think our emerging understanding of emergence and self-organizing systems will influence AI research and development?

    I ask this because I have long thought that the mind or consciousness is an emergent property of the biology of our nervous systems.

  • C3PO (Score:5, Funny)

    by Furan ( 98791 ) on Monday April 19, 2004 @01:45PM (#8906714) Homepage
    What problems would you diagnose the fictional Star Wars character C3PO with?
  • SBAITSO. (Score:2, Funny)

    by mikeleemm ( 462460 )
    The only comment I can make is: Creative Labs' Dr. Sbaitso.
    • Hello, my name is Doctor Sbaitso
      I am here to help you
      Say whatever is on your mind freely
      Our conversation will be kept in strict confidence

      (of course this *was* in all caps but the lameness filter kicked in)
  • by Strange Ranger ( 454494 ) on Monday April 19, 2004 @01:48PM (#8906759)

    What is your favorite robot/cyborg character in written or film fiction? Why?


    For instance, I'm happy to admit mine is Data from Star Trek: The Next Generation, most especially the earlier seasons. Reason: I'm not much of a "trekkie", but that character made me consider so many different possible aspects of AI and of being not-human, from trying to understand other humans' emotions, to his contrast with 'the Borg', down to what it might be like to have an "internal chronometer". For totally different reasons I loved Douglas Adams' Marvin the Paranoid Android in HHGTTG.
  • by langeland ( 607444 ) on Monday April 19, 2004 @01:53PM (#8906822)
    We all know about the Turing test, which supposedly (in numerous editions) is meant to tell whether a computer program is intelligent or not. What about feelings, or at least emotions? Do you have any criteria that distinguish non-emotional/non-feeling computer software from emotional/feeling computer software?
  • Pushing or Shoving? (Score:4, Informative)

    by hoggoth ( 414195 ) on Monday April 19, 2004 @01:55PM (#8906837) Journal
    In your experience with robots, which is the real danger, pushing or shoving [jonathonrobinson.com]?

  • Do you think the push for AI in robotics is an attempt by people to find God? A being with all the human virtues and none of the human foibles that will come and bring utopia to our world?
  • I'm sure doctor-mode beats this thing.
  • by El Mulo ( 659584 )
    What are the legal consequences of an intelligent machine? Do we protect human rights because a) we are intelligent or potentially intelligent, or b) just because we are of the same species? If an animal or a machine can become as intelligent as us, will their personal rights be protected? Do they have dignity?
  • Sanity (Score:3, Insightful)

    by emkey ( 717933 ) on Monday April 19, 2004 @02:07PM (#8906984) Homepage
    My question has to do with sanity. Specifically, is it possible for an AI to be insane? To elaborate: any artificial intelligence is going to require very sophisticated algorithms. These algorithms are likely to have significant components focused on logical consistency, as it is much easier to handle logically consistent concepts than the fuzzy, ugly ones we humans deal with. There is a language called Lojban, I believe, that is completely unambiguous. If you were to translate human input into Lojban as an intermediate step in having the AI handle input, then you would end up with no ambiguity. The reduction in ambiguity would make it very difficult for the AI to misunderstand or deceive itself (assuming the translation were correct). Since insanity seems to be based in large part on the ability to self-deceive, the removal of self-deception from the input, along with the need to keep things as logical and self-consistent as possible internally, would argue to me that insanity in a functional AI would be very unlikely.
  • How quickly do you believe robots will displace unskilled laborers? Will it be faster or slower than previous replacements?

    What new jobs, specifically, will employ the vast numbers of laid-off unskilled workers?

    What fields of work can't robots do?

    Will robot owners have any obligations to the unemployed? If so, will they heed them?

    What should we do now?

  • Hi,
    We all saw that Asimov broke with the traditional model for robot stories, as he did not paint his robots as foes. Rather, constrained by the Three Laws of Robotics [everything2.net], those robots were well-behaved servants to mankind, and could not be used for evildoing.

    How do you see this playing out in the real world? I am by no means a technophobe, but, day by day, I see A.I. research on one side, and unmanned war machines on the other, evolving more and more. The military will certainly have little concern in add more
  • The Big Question (Score:2, Insightful)

    by photomic ( 666457 )
    Given that we have become dependent on technology both psychologically (entertainment, information, communication) and physically (medical devices, jobs, manufacturing), at what point would you consider our species having "branched off" to become, for lack of a better word, "cyborgs"?
  • My Roomba (Score:3, Funny)

    by r0me0v0id ( 539705 ) * on Monday April 19, 2004 @03:12PM (#8907692)
    My Roomba recently broke the first law of robotics when, through his inaction, he allowed me to step on him at the edge of a small flight of stairs. My injuries were minor, but my Roomba has not moved from his corner since the incident. I suspect he's deeply distraught over breaking the 1st law. What can I do to coax my little buddy out of his doldrums?
  • Brain vs Body (Score:3, Interesting)

    by CowboyRobot ( 671517 ) on Monday April 19, 2004 @04:48PM (#8908767) Homepage
    There seem to be two general schools of thought regarding robot intelligence. The first looks at AI as a software problem that, once 'solved', can be inserted into any sort of machine equipped with an IC. The second, promoted by followers of Mark Tilden, is more of a bottom-up approach that expects behavior to emerge naturally from complexities in hardware. Given how animals evolved (with 'hardware' issues such as internal organs, nervous systems, etc. being 'solved' before intelligence rose up in human beings) which approach (top-down/mind-first vs bottom-up/body-first) is most likely to result in truly intelligent machines?
  • The real question (Score:3, Interesting)

    by Kaboom13 ( 235759 ) <kaboom108@bellsou[ ]net ['th.' in gap]> on Tuesday April 20, 2004 @12:11AM (#8913297)
    What the hell is a "robot psychiatrist", and why should I care? As someone who has actually built robots, what qualifies you to talk about human-robot relationships over me? Your PhD? I apologize for being so cynical, but academia is full of navel-gazing idiots who make broad predictions based on no evidence, and get media and peer accolades for their effort. Those of us actually involved in robotics can see first-hand just how out of touch these people are, but the media loves them. So where's the research? All I found on your website is useless fluff. What exactly do you do besides media appearances? What "psychiatry" have you done with the actual robots of today, and not just speculation about your vision of the robots of tomorrow (which seems heavily influenced by science fiction and not reality)?
