Robotics Science

South Korea to Build Robot Theme Parks

coondoggie writes "South Korean officials said today that they hope to build two robot theme parks for $1.6 billion by 2013. The parks will feature a number of attractions that let visitors interact with robots and test new products. 'The two cities will be developed as meccas for the country's robot industry, while having amusement park areas, exhibition halls and stadiums where robots can compete in various events,' the ministry said. The theme parks are not a big surprise, because South Korea loves its robots. Earlier this year the South Korean government said it was drawing up a code of ethics to prevent human abuse of robots — and vice versa."
  • by Dorceon ( 928997 ) on Wednesday November 14, 2007 @06:00AM (#21347533)
    The Westworld poster had the original, which was, "Where nothing can possibly go worng..." with the letters of worng falling out of line.
  • by NetSettler ( 460623 ) <kent-slashdot@nhplace.com> on Wednesday November 14, 2007 @06:08AM (#21347561) Homepage Journal

    What we currently call "machine intelligence" is not quite up to the intelligence level of a cockroach.

    Indeed, the ethics requirements should be on the makers of the robots, not on the robots. Even very stupid (i.e., lacking in any semblance or even attempt at artificial intelligence) computer programs can have ethical issues--transmitting or storing inappropriate information, computing faulty values, or giving bad advice are simple examples.

    And fanciful notions of the unique nature of positronic brains aside, the set of things you can program for robots is pretty much the same as the set of things you can program for other computers, only the peripherals are different. And like their less animated counterparts, most robot ethical issues, for now, are things that need to be handled at design, development, and debugging time... not at runtime. And most responsibility for problems needs to be traced back to there.

    The actual area where we're likely to see problems won't be in the robots themselves; it will be in our propensity to give up our judgment to computers. Computer viruses were largely enabled not by the people who wrote them but by the people who ran them--programs didn't originally start on their own on a computer; you had to launch them manually. But people got tired of that. They didn't like pressing buttons that said "Show me the picture in this email message" or "Run the installation program on this disk," and they wanted it done for them. That desire to yield responsibility for judgment to a mindless computer is what got us into trouble, not the computer's desire to do us harm.

    The first car to run over a pedestrian while parking it won't have done so because the robot was too eager to drive before it had been properly trained. It will be because the robot was too stupid to know it isn't just a toaster (see The Measure of a Man [wikipedia.org]), coupled with the fact that some programmer was too eager to show off his toy, or perhaps because some park guest was too willing to try untested technology, or because some quality assurance person was too afraid to hold up the opening of the park, or because some politician thought it was cool to talk of computer ethics instead of human ethics.

    Ethics and laziness don't go well together. And we're a pretty lazy lot, we humans. I'd rank the probability that any lawmakers anywhere will ever require that robots not be built until they have ethics built in as so close to 0% as to be indistinguishable from it. People with cool toys to show off in the marketplace are not going to stand for that kind of thing.

  • Re:Robot Ethics? (Score:2, Interesting)

    by somasynth ( 1088691 ) on Wednesday November 14, 2007 @10:41AM (#21348995)
    Makes sense, but I didn't suggest the AI will necessarily enjoy their lives; they will likely feel what cannot be described in human terms--perhaps a liking of sorts, or not. What I did suggest was that subjectivity is derived from the processes that allow the being to function rather than being a function in itself, and it isn't limited to meat.
