Should Bots Be Required To Tell You That They're Not Human? (buzzfeednews.com)

"BuzzFeed has this story about proposals to make social media bots identify themselves as fake people," writes an anonymous Slashdot reader. "[It's] based on a paper by a law professor and a fellow researcher." From the report: General concerns about the ethical implications of misleading people with convincingly humanlike bots, as well as specific concerns about the extensive use of bots in the 2016 election, have led many to call for rules regulating the manner in which bots interact with the world. "An AI system must clearly disclose that it is not human," the president of the Allen Institute for Artificial Intelligence, hardly a Luddite, argued in the New York Times. Legislators in California and elsewhere have taken up such calls. SB-1001, a bill that comfortably passed the California Senate, would effectively require bots to disclose that they are not people in many settings. Sen. Dianne Feinstein has introduced a similar bill for consideration in the United States Senate.

In our essay, we outline several principles for regulating bot speech. Free from the formal limits of the First Amendment, online platforms such as Twitter and Facebook have more leeway to regulate automated misbehavior. These platforms may be better positioned to address bots' unique and systematic impacts. Browser extensions, platform settings, and other tools could be used to filter or minimize undesirable bot speech more effectively and without requiring government intervention that could potentially run afoul of the First Amendment. A better role for government might be to hold platforms accountable for doing too little to address legitimate societal concerns over automated speech. [A]ny regulatory effort to domesticate the problem of bots must be sensitive to free speech concerns and justified in reference to the harms bots present. Blanket calls for bot disclosure to date lack the subtlety needed to address bot speech effectively without raising the specter of censorship.


Comments Filter:
  • Feel free to rephrase as “Should bots be programmed in a manner which might lead people to assume they are human?” if that gives you the answer you’d prefer.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Don't forget: To what stupid degree does this have to be enforced? When I load google.com, does it have to say at the bottom: "This page was generated and served to you by a script, sometimes referred to as a bot."

      When I pull up to a traffic light, does it have to blast in an annoying tone to all drivers: "This traffic intersection is controlled by a bot."

If for some bizarre reason I decide to step into a bank teller's office, will they have to announce over the loudspeakers: "Your bank balance is kept track of...

    • by arth1 ( 260657 )

      I prefer to rephrase it "should deception be illegal?"

  • Five bucks says the Supreme Court will rule that bots, like corporations, are people.
    • LOL - the new SCOTUS after Trump's appointment will be more likely to rule that humans aren't people...

    • by clovis ( 4684 )

      Five bucks says the Supreme Court will rule that bots, like corporations, are people.

The Supreme Court did not rule that corporations are people. What it said was that corporations have the same rights as other organizations: if the law allows unions, charities, 501(c) groups, and other such organizations to fund or engage in issue advocacy, then corporations have the same right to do so.

Anyway, I would not take your bet, because I think you predict correctly. I can see how SCOTUS might find a way to rule that although bots are not people, they are acting as agents for people, and rule that bots have free sp

  • Comment removed based on user account deletion
    • Ummm.. I can write an AI that tricks you into thinking it's a person, for a reasonablish amount of time.

      The Turning Test is trivial to pass on a short enough timeline, especially if (A) I get to control where and when the conversation occurs, (B) That conversation is usually formulaic anyway (arranging an appointment), and (C) The human in the loop doesn't care that much (e.g. the person taking the appointment at the salon wants off the phone).

      • by Whibla ( 210729 )

        The Turning (sic) Test is trivial to pass on a short enough timeline ...

        The current day interpretation of the Turing Test might be trivial enough to pass, but...

        ... especially if (A) I get to control where and when the conversation occurs, (B) That conversation is usually formulaic anyway (arranging an appointment), and (C) The human in the loop doesn't care that much (e.g. the person taking the appointment at the salon wants off the phone).

        ...that's not actually the Turing Test.

        I can write an AI that tricks you into thinking it's a person, for a reasonablish amount of time.

        While this might depend on your definition of 'reasonable' I'm going to disagree: No, you really can't. You might be able to write an AI that tricks me into thinking, after a minute or so, that it's a person I have no desire to converse with - but at that point I'm no longer conversing with it.

        I'm not sure I see 'the win'.

        Now, I'm sorry to come across all pedantic, but with a broad

        • You aren't the average person. How many people do not even reach average? I'm sure there are some talented people on this site that most certainly could create a bot that would pass for human in many fixed roles. Taking orders for food, insurance quotes, loan quotes, low level tech support and many more if I gave it some thought.

Most "bots" shouldn't have to mention it, because it's not important. If someone can think of a specific scenario where they feel the bot should inform them, then maybe that should be

I agree that a minute is probably too long to expect an AI to trick you. But the context of this was "should bots (by which the author means Google Duplex) be forced to identify as such when they call you," which is a real-life interaction that probably lasts 30 seconds with a bored assistant, or a similarly transaction-based exposure. The OP said "well, if I cannot tell the difference, we should just treat them as humans".

I also should have been more precise in stating, "I can write AI that tricks the vast majo

Yeah, I don't think we're at the point where we have to worry about AI rights. We'll have to cross that bridge, but it's not even built yet. The main issues now are: 1) bots are used to make unpopular ideas seem popular (bandwagon persuasion); 2) bots can disseminate false information without even the trivial disincentive of tying the information to a real person; 3) bots are highly efficient tools to incite anger. I think people have a right to know when they're c
    • by anegg ( 1390659 )

      People are independent, conscious and self-aware entities. At this point in time and in the evolution of Artificial Intelligence, we have not yet created synthetic entities that are independent, conscious, and self-aware. Until such point that we do, we should not refer to "bots" or so-called "AI"-based mechanisms in any manner that conveys the impression that these are independent, conscious, and self-aware entities - it only serves to confuse the issues being discussed.

      In contexts where people expect t

  • What if the bot doesn't know it's not human?

Apparently this idea has already been examined, to a small extent, in a 1974 novel, as mentioned on Wikipedia:
    https://en.wikipedia.org/wiki/... [wikipedia.org]

    • What if the bot doesn't know it's not human?

"Is this testing whether I'm a replicant or a lesbian, Mr. Deckard?"

  • by msauve ( 701917 ) on Friday July 27, 2018 @06:12PM (#57021574)
    It's not a violation of speech rights to outlaw fraud, deception, or dishonesty. Simply require that bots honestly answer the question "Are you a human/person?"

    Or, instead, just ask it "Why does the Porridge Bird lay his egg in the air? [wikipedia.org]"
    • by EvilSS ( 557649 )
      For that to work you also have to require humans to do the same.
But it is a violation of free speech to mandate that I write code that says something specific.

        If I write a virus that causes damage, then that damage is on me. But if I write a virus and I do not release it, but simply show people; then that is not a crime.

Based on that thought, you cannot force me to write code to make a bot identify itself. At best, you would have to write a law preventing anyone from deceiving a person. Best of luck with that. You can't write a law that forces one group to adhere to some

        • by msauve ( 701917 )
          "But it is a violation of free speech to mandate I write code specifically saying something."

          In exactly the same way it is to require food to be labeled with ingredients and nutritional info, warnings put on cigarettes, specific APR info on loan offers, country of origin markings on goods, alcohol percentages on liquor bottles, a license plate be displayed on your car, etc.
    • by mysidia ( 191772 )

      The bot can reply "Of course I am a person. (simulating an annoyed voice) Are YOU a person?"

In this case lying is 1st Amendment protected speech. Deception can only be outlawed in areas where the free speech protection has been weakened, in that a limitation is necessary to stop harm to the public, and the required criteria for fraud are not met here (the bot is not part of a good or service being exchanged, so there is no relaxation of the 1st Amendment protections), so even fraud could not be claimed --

      • Reminds me of those chat bots forever ago. They very obviously would be a bot, and I'd always just test it by saying: "Are you a bot?" Or whatever, anything saying bot. And the response was always stuff like: "BOT? BOT?! WHAT THE HECK IS A BOT!"
    • "Why does the Porridge Bird lay his egg in the air?"

      He must have migrated Back From The Shadows Again.

    • by AmiMoJo ( 196126 )

      I dislike bots because they can't understand or help me. If these new bots are so good I can't tell it's not a human, that's a good thing.

      My worry about bots having to identify themselves is that it's helpful to people trying to force humans to identify things like their religion, political views or the nature of their genitalia. It's a wedge that legitimises prioritizing one person's comfort over the privacy of others.

      • by anegg ( 1390659 )
There is no "self" to which the phrase "bots having to identify themselves" would apply. What we currently call "bots" are merely mechanisms that simulate behaviors, not conscious and self-aware entities that have a "self". If people (currently understood to be entities of the species homo sapiens) have a reasonable expectation that they are communicating with another person (entity of the species homo sapiens), but are instead communicating with a mechanism, it is useful to require that this disti
  • I might also entertain the idea of them being licensed/registered so everyone knows who owns it.

    Of course the real problem is how are you going to enforce any of this when you can't really detect it?
  • ...if we want them to act on our behalf as personal assistants.

    The minute we make it mandatory for bots to announce themselves, every business in existence will create "no automated bots" policies (just like every major website does now) and simply auto disconnect any calls from bots, at which point all the bots become useless.

    So no, we shouldn't force bots to announce themselves (unless we don't want bots at all).

    • The reason websites don't want bots is because it leads to abuse. Robocalls are already aggravating as fuck. I don't want them to improve.

And I never understood the appeal of "AI as a personal assistant".

  • Seriously. What do bots do? They help you make reservations. They frustrate you going through tech support. They help you jack off thinking some hottie is on the other end. They feed you fake news that anyone with half a brain would know is fake.

    Oops, there we have it. They spread fake news via ZuckerFuck, and the users of ZuckerFuck don't have half a brain.
  • Just give all bots the first initial of R. Like R. Daneel Olivaw.
lol People don't answer calls much today; this is just another reason not to answer a call. BTW, every call I get from a business has been by a bot. Looks like the industry is looking to fire call center people and get rid of them to maximize profits for Wall Street, if ya ask me.
  • Why is this even a question?

  • Should human beings be required to tell that they *ARE* human?
  • me: "Are you human?"

the other end: "No Sir, I'm 'Agens 251a', an instance of ServiceBot Ultra 2024 by AlphaBot Services, provided for your technology questions by 1and1 hosting. How may I help you?"

    me: "Oh, thank god, I finally got a bot. I've been trying to explain to clueless humans that me using Linux has nothing to do with your mailservers being unreachable for 20 minutes now."

bot: "I feel your pain, sir. Don't worry, I come at a bulk deal; by next year, we'll be phasing out humans entirely. An

  • by petes_PoV ( 912422 ) on Saturday July 28, 2018 @05:56AM (#57022984)
    When talking to a remote voice on the end of a phone at a call centre, it makes little difference whether that voice belongs to a person or a machine. They both behave "robotically" - working their way through a script and only having a certain number of pre-approved responses to questions.

    And further, when we take the next step of having the 'bot on my phone handling all incoming calls and making outgoing ones to call centres, it would make the process much slicker when it is 'bot-to-'bot.

As for being required to inform people they aren't human: I would also like human callers to demonstrate that they have more skills and abilities than a bot. If they can't, then what is the point of them?

  • It's great that we're trying to get ahead of this issue but I'm not sure there's a 'right' answer to this question.

    And if there is a right answer I'd be inclined to say it's "No".

    In my mind part of the difficulty in thinking about artificial intelligence, and, more generally, artificial consciousness, is what happens at the boundary - that crossover point between an 'object' and a 'sentience'. We can own objects, we can't own people. Objects don't have rights, creatures and people do - albeit rights that we

  • ...should be required to tell us if they're not human.

  • See:

    https://ericjoyner.com/works/recaptcha/

    Or just google "eric joyner recaptcha" for images, should be first image, top left.

  • People seem to have no problem venting their frustrations on call center staff. At least with a bot, no feelings are hurt. Sucks to work a call center ... or any kind of one-on-one customer interaction, just ask any poor starbucks employee.
