AI IBM Robotics

New IBM Robot Holds Its Own In a Debate With a Human (nbcnews.com) 260

PolygamousRanchKid shares a report: The human brain may be the ultimate supercomputer, but artificial intelligence is catching up so fast that it can now hold a substantive debate with a human, according to audience feedback. IBM's Project Debater made its public debut in San Francisco Monday afternoon, where it squared off against Noa Ovadia, the 2016 Israeli debate champion, and in a second debate, Dan Zafrir, a nationally renowned debater in Israel. The AI is the latest grand challenge from IBM, which previously created Deep Blue, the technology that beat chess champion Garry Kasparov, and Watson, which bested humans on the game show Jeopardy.

In its first public outing, Project Debater turned out to be a formidable opponent, scanning the hundreds of millions of newspaper and journal articles in its memory to quickly synthesize an argument on a topic and position it was assigned on the spot. "Project Debater could be the ultimate fact-based sounding board without the bias that often comes from humans," said Arvind Krishna, director of IBM Research. An audience survey taken before and after each debate found that Project Debater better enriched the audience's knowledge as it argued in favor of subsidies for space exploration and in favor of telemedicine, but that the human debaters did a better job delivering their speeches.

The AI isn't trained on topics -- it's trained on the art of debate. For the most part, Project Debater spoke in natural language, choosing the same words and sentence structures as a native English speaker. It even dropped the odd joke, but with the expected robotic delivery. IBM's engineers know the AI isn't perfect. Just like humans, it makes mistakes and at times, repeats itself. However, the company believes it could have a broad impact in the future as people now have to be more skeptical as they sort out fact and fiction. "Project Debater must adapt to human rationale and propose lines of argument that people can follow," Krishna said in a blog post. "In debate, AI must learn to navigate our messy, unstructured human world as it is -- not by using a pre-defined set of rules, as in a board game."

This discussion has been archived. No new comments can be posted.


  • ...when PEOPLE start talking and arguing with themselves like this, we start to consider medicating them quickly.
    • Comment removed based on user account deletion
      • Robot vs. Trump (Score:2, Insightful)

        ... or elect them to be the most powerful person in the world.

        Actually, I would love to see a debate between this robot and Trump to see how it handles made-up facts, illogical assertions, etc. Somehow logic and reason do not seem to work with him; he goes after emotion and feelings. Since I suspect that logic and facts are the only tools available to the program, I suspect it would lose horribly - but regardless, it would still be an interesting debate to see.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          I don't think there would be a winner in a Trump vs. machine debate. Trump's "wins" in debates come from undercutting arguments with completely fabricated bullshit, and if that doesn't work he'll start throwing insults to throw the opponent off their game. That tack just won't work with a machine, because it wouldn't have the emotional base to get upset enough to lose its cool.

          I too would like to see this debate. The machine *MIGHT* release the blue smoke from the never ending stream of made-up day-dr

    • by jellomizer ( 103300 ) on Tuesday June 19, 2018 @12:03PM (#56810352)

      It is a sad state that we equate debate skills with leadership skills.
      Debates are something you win or lose, not an open discussion to grow and learn. You can win a debate on a false idea or a lie against someone who has the truth and data on their side but lacks the debate skills to convince a neutral party. Often the best and most well-thought-out idea is far more complex than what can be stated in quick sound bites.

      Presidential debates over the past few generations have not been very productive. Most of us have already made up our minds about who we are voting for; most will just vote for whoever has an R or D party affiliation, regardless of their stance. So the debaters' neutral audience is just a tiny fraction of the population. And for the most part, viewers are trying to read non-verbal cues (such as Nixon sweating) or catch someone losing their temper. The topics up for debate are not relevant, as we already have a good idea where each candidate stands.

      • by Matheus ( 586080 )

        I think you made your own counter-argument: the most important aspect of leadership is getting people to follow you (willingly, in the best case). If your debate skills are weak, then others will be able to sway your flock away from where you are trying to lead them. It doesn't matter how "right" you are if some counter-leader can undermine your position.

        Presidential debates are a poor reference point for the usefulness of debate. They are really a different beast, although in history certain debates have had

      • I always thought of debate skills as one of those things we teach kids so they can recognize all the stupid tricks people are trying to use on them.
  • by 110010001000 ( 697113 ) on Tuesday June 19, 2018 @10:05AM (#56809532) Homepage Journal
    Completely fake. The topics were prearranged; yes, they were "assigned on the spot," but from a predetermined list. IBM is desperately trying to sell their AI snake oil. If AI worked, why not have it solve REAL problems that people will pay for, rather than parlor tricks like playing Go, Chess, and other games?
    • by Tablizer ( 95088 ) on Tuesday June 19, 2018 @10:48AM (#56809886) Journal

      Completely fake. The topics were prearranged; yes, they were "assigned on the spot," but from a predetermined list. IBM is desperately trying to sell their AI snake oil. If AI worked, why not have it solve REAL problems that people will pay for, rather than parlor tricks like playing Go, Chess, and other games?

      Orange-bot Translation: "Fake bot, totally rigged. Crooked cheaters knew question list ahead of time. IBM is total snake oil, believe me! If it really were smart, it would do something important, like build a wall and make evil Canada pay for it. Chess is for low-energy losers; audience snores. Total Zee's, so sad."

    • If I hadn't already commented on this, I'd mod you up.
    • Completely fake. The topics were prearranged, and yes they were "assigned on the spot" but there was a predetermined list.

      Cite? Do you know this or are you just guessing?

    • Think of how many call center employees just answer simple questions out of a database. That's what this is for. Parts of India and the Philippines are genuinely worried about the job losses.
      • by gweihir ( 88907 )

        Indeed. It is basically for building cheaper (not smarter) expert systems with a natural language interface. As soon as you go off-script, this technology gets completely lost.

    • Reading mammograms is a real problem, and AI is doing somewhat better than most radiologists now. IBM was largely responsible for this development, I believe.

      I think you're missing the point of the exercise. We don't need artificial debaters. But debating requires better understanding of natural language than ordering pizza. To successfully rebut an argument, you need to understand its logic. I don't doubt that there's room for improvement, but it's a non-trivial step.
      • by gweihir ( 88907 )

        But debating requires better understanding of natural language than ordering pizza. To successfully rebut an argument, you need to understand its logic. I don't doubt that there's room for improvement, but it's a non-trivial step.

        You misinterpret what you saw. This demo just shows that "debating" one of a set of previously known topics does not require intelligence. It also casts some real doubt on this type of competition and indicates that the "debates" done there are not actually debates in the sense most people understand the word. This machine can extract logic, but it cannot understand it. It is basically ordering pizza on a larger scale, but with just as much understanding of the nature of the action, i.e. none.

    • by gweihir ( 88907 )

      Does not surprise me one bit. Just, say, training the machine on 10 predetermined topics is probably much less than 10x harder than training it for one topic. And again, there is nothing "general" here at all; the whole thing is a clever fake. Unfortunately, most people are not really using what they have in general intelligence, so many, many will fall for this trick. That can nicely be seen in some of the statements here.

      Incidentally, somebody high up in the Watson project told me not long ago when asked about actual inte

  • Hmm (Score:5, Funny)

    by cascadingstylesheet ( 140919 ) on Tuesday June 19, 2018 @10:06AM (#56809538) Journal

    So, it scans human-generated content, and then builds a plausible sounding argument to support whatever position you give it.

    This thing is going to cause a lot of unemployment in politics.

    • Re:Hmm (Score:5, Interesting)

      by PolygamousRanchKid ( 1290638 ) on Tuesday June 19, 2018 @11:16AM (#56810068)

      So, it scans human-generated content, and then builds a plausible sounding argument to support whatever position you give it.

      In this way, it works like a "lawyer", and not like a "scientist":

      “there are two ways to get at the truth: the way of the scientist and the way of the lawyer. Scientists gather evidence, look for regularities, form theories explaining their observations, and test them. Attorneys begin with a conclusion they want to convince others of and then seek evidence that supports it, while also attempting to discredit evidence that doesn’t.” Leonard Mlodinow, Subliminal: How Your Unconscious Mind Rules Your Behavior

      It sounds like IBM has created a lawyer, not a scientist.

      • Two things: 1. A lawyer, 2. A "training dummy" for debates or other educational pursuits.

        If applied to other non-critical, non-life-threatening, low-risk uses, it can potentially supplant humans in many capacities, such as an I.T. help desk. It need not be 100% accurate, just as accurate as an average human. Being able to fashion a reasonable argument or set of instructions from a trusted dataset is a very useful thing. Though it does take away some of the low-risk experience opportunities to develop sk
        • Re: Hmm (Score:4, Interesting)

          by Immerman ( 2627577 ) on Tuesday June 19, 2018 @12:29PM (#56810548)

          To extend the usefulness of the "training dummy" - how many board meetings, etc. could benefit from a participant with the ability to translate real-time access to the majority of the relevant data into coherent arguments? If you could set the thing in "Devil's Advocate" mode (i.e. argue against anything proposed) you could potentially kill a lot of bad ideas very early in their formation, and steer more plausible ideas past many potential pitfalls. Heck, get two of them arguing against each other in "for and against" mode to potentially cut to the heart of a lot of issues, especially if they can integrate input from human debaters on the fly. Heck, just interjecting "That statement does not appear to have any supporting evidence" would go a long way.

      • by gweihir ( 88907 )

        Nice one, new to me. It also describes why things are so fucked up when you take into account how many lawyers are in politics.

    • by gweihir ( 88907 )

      Indeed. Politics is not about being right, it is about sounding right. That, apparently, is a skill that does not need intelligence, just training.

  • cool project (Score:5, Insightful)

    by phantomfive ( 622387 ) on Tuesday June 19, 2018 @10:06AM (#56809548) Journal
    This is a cool project, but the article is utterly useless without a transcript.
  • If this AI truly uses real facts in a debate it would be wonderful. One thing most "debaters" these days seem to despise is actual facts. They get in the way of an emotional argument, something I (sadly) see as most prevalent in the SJW crowd. They have nice-sounding ideas that appeal to emotion but do not stand up in the face of factual examination.

    This is also going to derail politicians in a big way, especially if it sticks to facts. Politicians hate facts. They bank on their constituents not knowin

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Politicians hate facts

      I disagree. Politicians and journalists (I used to work as the latter) love facts. Facts are a dime a dozen. Studies churn out all kinds of facts all the time, and they can be thrown together and framed for any angle you wish to argue.

    • This is also going to derail politicians in a big way, especially if it sticks to facts. Politicians hate facts.

      Politicians love facts. Just about any policy has been justified with science and facts: tariffs, free trade, eugenics, forced sterilizations, segregation, integration, low taxes, high taxes, etc. Oh, sure, sometimes politicians get facts wrong, but that's more sloppiness than inability to find facts that support them. The usual error is in the application of the facts, not the facts themselves.

      • Just about any policy has been justified with science and facts

        I think you're more in disagreement with me on what constitutes a fact rather than politicians liking or disliking them. For example, tariffs cannot be logically argued as economically beneficial so long as the other side can enact counter-tariffs which are equally damaging. No such argument can be made because there is no evidence -- hence no facts -- to support such an argument.

        What's really going on here is a misrepresentation of facts, a purposeful twisting of data or omission of data to the contrary

        • What's really going on here is a misrepresentation of facts, a purposeful twisting of data or omission of data to the contrary in order to support an otherwise-insupportable argument.

          No, what is really going on is that you start with the premise that "if policy X can be shown to be 'beneficial', then government is justified in forcing people to comply with policy X". That's an authoritarian and collectivist mindset that I reject.

          For example, tariffs cannot be logically argued as economically beneficial so l

    • Re: (Score:3, Informative)

      by Tablizer ( 95088 )

      (Warning: political rant ahead) Sorry, but I don't find conservatives particularly logical either. The Kansas tax-cut experiment showed that tax cuts can hurt the budget far more than any economic benefits make up for. Spinners claimed the unemployment rate dropped because of the tax cuts, but it was dropping for the nation in general. Suckers fell for that argument. Tax cuts are their dogma; the position is not based on empirical observation.

    • If this AI truly uses real facts in a debate it would be wonderful. One thing most "debaters" these days seem to despise is actual facts. They get in the way of an emotional argument, something I (sadly) see as most prevalent in the SJW crowd.

      Funny, and not unexpected - the poster making the strongest emotional argument and the weakest factual argument is that guy who claims it's the Other Guy who eschews facts for emotion.

      This is also going to derail politicians in a big way, especially if it stick

    • As this article suggests, properly picking your facts can help you shape people's opinions toward whatever you think is valuable.
      Here is an example from personal experience. I knew a woman who was employed as a news editor at a local TV station in New Hampshire.
      She was also a Roman Catholic.
      Her boss gave her a set of instructions:
      1) If the AP (that is, the Associated Press, where they get most of their articles from) gives you a report about a public school teacher, a rabbi or a Protestant minister, it is not

      • Also, if you have ever been trained in debate, or been to a debate competition, you have to realize the "art of debating" is knowing how to present your point better than your opponent, even if you disagree with it. Often the side of the debate is picked randomly, so there is no reason to believe the human debater agreed with what they were defending. That makes it harder for a human, but not for a computer.

        • by gweihir ( 88907 )

          I did that back in school and completely blew the other side out of the water with a claim that was obviously false. The whole thing was arranged by our pretty good Ethics teacher. I did learn that a golden tongue does not make for truth coming from it. Sadly, most of those present were just left confused and did not understand that lesson at all. Today I understand that the idea that a convincing-sounding argument could be completely false was just too alien to them to be something they could under

      • by gweihir ( 88907 )

        Indeed. Science can have some bias in the strength of proof required ("extraordinary claims require extraordinary proof"), but that is it. This is basically a denial-of-service protection mechanism, where an attack in which some group just throws ridiculous claim after ridiculous claim, thereby preventing verification, is mitigated. In the absence of facts, science must give a strong "we do not know". In the presence of facts, it must use all of them, check them for consistency, and check whether more are needed.

  • by ooloorie ( 4394035 ) on Tuesday June 19, 2018 @10:20AM (#56809668)

    The "AI debater" mainly seems to search for possibly relevant statements in a large library and then inject them into the debate. Throwing factoids at each other is clearly how debates take place these days and how many "decision makers" operate.

    But that isn't how debates ought to take place. Debates should start with premises and mutually agreed facts and then reach conclusions via reason and logic.

    • >> that isn't how debates ought to take place

      I disagree. This was clearly an even match between two master debaters.
    • But that isn't how debates ought to take place. Debates should start with premises and mutually agreed facts and then reach conclusions via reason and logic.

      You can't run politics that way. That outcome means that someone has to indicate that they were wrong, mistaken, or incorrect in some belief of theirs. Once you start pulling on that thread and admit that your ideology may have been flawed in some way, you might have to question the rest of it as well and that's butting heads with your own deeply held beliefs. It's the same reason that there are a lot of religious people who would cling to young earth creationism even if god descended from heaven and told t

      • You can't run politics that way.

        Sure you can: having those kinds of debate is perfectly possible in a nation with limited government and enumerated powers.

        I'd be all for someone making communist island and laissez faire island and seeing what happens over the course of several decades, but the logistics of doing it make it practically impossible.

        We've done that and the result is always the same: the laissez-faire island does really well, entrepreneurs flee from socialist island to laissez-faire island, and

    • by Kjella ( 173770 ) on Tuesday June 19, 2018 @11:29AM (#56810138) Homepage

      But that isn't how debates ought to take place. Debates should start with premises and mutually agreed facts and then reach conclusions via reason and logic.

      First of all, the world is full of complex systems where you can't directly link cause and effect, of predictions of the future, of other people's actions and reactions, and so on - things that can't be proven like a science experiment. Even when we agree on the facts, we disagree on the significance and meaning of the facts, or even on the overall model or ideology that they fit into. A question like "Are Trump's import tariffs good for the American economy?" could probably fill volumes of economic journals without a definitive answer in sight. Even in retrospect 10 years from now, they'll still be arguing how much it actually mattered and how much would have happened anyway, with certainly a lot of guesswork on the alternatives - so ending in conclusions is wildly optimistic. And that's when they don't have a self-interest in disagreeing with it.

      Most public and political debates aren't actual debates, they're more like elevator pitches. You get two minutes in the spotlight to tell people who have no clue about the topic why your idea is great and their idea sucks. They will do the same. What you're looking for is buried deep down in committees, reports, propositions and whatnot where people decide that maybe goods of type X and not Y should be included or the rate should be 25% and not 22%. That's the kind of debate you take when you're preaching to the choir or have an expert group or something. When you're pitching to the general public the goal is simply to convince them that you're the person they should follow.

      • A question like "Are Trump's import tariffs good for the American economy?" could probably fill volumes of economic journals without a definitive answer in sight.

        See, and that question starts from the wrong premise that government should do "what is good for the American economy".

        Even when we agree on the facts, we disagree on the significance and meaning of the facts or even the overall model or ideology that they fit into

        Which is precisely because we should have a debate on models and ideology, not "facts

    • by gweihir ( 88907 )

      As most "debates" these days are not about finding truth, but about "winning", the whole thing has gotten utterly corrupted.

  • by Nkwe ( 604125 ) on Tuesday June 19, 2018 @10:25AM (#56809708)

    In its first public outing, Project Debater turned out to be a formidable opponent, scanning the hundreds of millions of newspaper and journal articles in its memory to quickly synthesize an argument on a topic and position it was assigned on the spot. "Project Debater could be the ultimate fact-based sounding board without the bias that often comes from humans," said Arvind Krishna

    If the data it uses to "argue" comes from human sources, it has a human bias.

    That being said, it is cool technology and it demonstrates how bad human debate can be. If you can win an argument without actually knowing what you are talking about (which you can), it demonstrates the (lack of) value debate can have; it also underscores the lack of real value in the level of political discourse that we have today. We spend a lot of time arguing over things we don't really know about.

    • by Aristos Mazer ( 181252 ) on Tuesday June 19, 2018 @10:39AM (#56809812)
      Please define "actually knowing." The machine appears to have sifted through information, extracted bits relevant to the topic, and then presented arguments supporting its position. At some level, it does know its topic. What it lacks is a value judgement of whether it cares about this position or not. That value judgement seems to me to be a critical part of calling it sentient, but it does seem to know the topic. In many ways, the machine knew more about the topic than the human it was debating given the amount of data that it had absorbed and organized internally into information.
      • > Please define "actually knowing."

        Being aware of the concepts and/or experiencing it directly.

        Knowing, the state of having knowledge, comes about in 2 ways:

        - Intellectual Knowledge
        - Experiential Knowledge

        Examples of each:

        * I can know 1+1=2 by understanding the concept of numbers, the number line, and the addition operator. Once I understand the concept I can define '+' for 2D numbers such as complex numbers, or for even more advanced concepts like images, for audio, etc.

        * The ability to see means we kn
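
        The point about extending '+' once the underlying concept is understood can be sketched in code. Here is a toy illustration (the Vec2 class is made up for this example), assuming Python:

        ```python
        class Vec2:
            """A toy 2-D value with its own notion of '+'."""
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __add__(self, other):
                # Component-wise addition, analogous to how '+' is
                # defined for complex numbers once the concept is grasped.
                return Vec2(self.x + other.x, self.y + other.y)

            def __repr__(self):
                return f"Vec2({self.x}, {self.y})"

        # 1 + 1 = 2 for plain integers; the same concept, once understood,
        # extends '+' to pairs of numbers:
        print(Vec2(1, 0) + Vec2(1, 2))  # → Vec2(2, 2)
        ```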

        • Agreed. How can a machine differentiate between a priori and a posteriori knowledge? Won't it just consider everything it absorbs as "fact"?

          • Obviously it has the ability to discriminate information to some degree, or it would be formulating arguments against itself: it would have read a paper arguing pro and another arguing con and then accepted both as facts. So it has at least some ability to discriminate data. How does it evaluate data sources? The articles thus far do not go into detail there.
        • What constitutes experiential knowledge on the value of space subsidies? If this debate were "does refusing painkillers during childbirth build character?" then I'd agree the computer's knowledge was gapped (permanently). But when the topic is in the intellectual domain, as both these topics were, then I think it is fair to call the machine "knowledgeable." As for formulating new ideas, I'm very curious where that joke came from and whether that was parrot
    • That being said, it is cool technology and it demonstrates how bad human debate can be. If you can win an argument without actually knowing what you are talking about (which you can), it demonstrates the (lack of) value debate can have; it also underscores the lack of real value in the level of political discourse that we have today. We spend a lot of time arguing over things we don't really know about.

      Agreed, though I'd leave out the "today" part. We didn't invent debate ...

  • by PPH ( 736903 ) on Tuesday June 19, 2018 @10:25AM (#56809716)

    No, it didn't.

    Yes, it did.

    No ....

  • I mean, a computer simulation of Congress cannot be described with any terms that include "intelligence" can it?

  • Lame (Score:5, Funny)

    by Major Blud ( 789630 ) on Tuesday June 19, 2018 @10:56AM (#56809936) Homepage

    Let me know when the computer can win a Slashdot debate. There's no way it could cope with this sort of argument:

    Computer: "AI has made great improvements in its cognitive ability."
    Anonymous Coward: "Yeah, WELL FUCK YOU!!!!"

    AC wins every time.

  • Maybe they'll target politicians next..

    Imagine how powerful augmented lobbyists could become.
  • Selecting Response.....

    "Your Mother is so fat she smokes Turkeys...."

  • by Headw1nd ( 829599 ) on Tuesday June 19, 2018 @11:20AM (#56810082)

    So IBM is claiming this can be used as a fact-based sounding board, but if it is looking through published work, how does it know that what the system is repeating is actually fact? I realize that humans have this same issue, but if you are going to present your device as a paragon of factual information, then I would expect a rigorous system of validation to be part of it.

    I will say that being able to build this type of language structure in a way that is at least passable is quite an achievement in and of itself. I have the feeling "holding its own" is an overstatement, but it was apparently not ridiculous.

  • by Rick Schumann ( 4662797 ) on Tuesday June 19, 2018 @11:27AM (#56810124) Journal
    Not a rhetorical question. Like all the pseudo-intelligent software being trotted out the last several years, it does not know how to 'think'. Try asking it "What are you doing right now, and why are you doing it?" and let's see what it says. All this software is doing is sifting and sorting information, and arranging it into statements, and it doesn't matter if the statements it's making are in response to statements made by the human debater, the machine does not understand what it's doing, just like it doesn't understand anything at all; there's no mind in there, it doesn't 'think', it just processes information, and it's not relevant so far as I'm concerned that it happens to do that in a sophisticated and remarkable way. Not impressed, it's just another dog-and-pony-show to placate investors and stockholders.
    • I fail to see what differentiates that from a large number of humans.
      • by gweihir ( 88907 )

        We are finding not that machines are intelligent, but that many humans are not, or at least choose not to use what they have.

    • by gweihir ( 88907 )

      Well, as there is absolutely nothing intelligent coming out of "AI" research at this time (no, not even at the theory stage, if you require some plausibility), this is all they can do. They promised great things, and at least their scientists knew they could not deliver. So they fake it, and a lot of people just fall for it.

      The thing is that while most humans supposedly have general intelligence, few choose to use it. (A figure of 10-15% independent thinkers pops up in actual experience, for example in advanced t

  • by sacrilicious ( 316896 ) <qbgfynfu.opt@recursor.net> on Tuesday June 19, 2018 @11:41AM (#56810204) Homepage

    IBM's engineers know the AI isn't perfect.

    I detest writing like the above. People trot out the "I know I/it/whatever isn't perfect" lead-in all the time, and I dislike it because it's seductively-packaged idiocy... it costs the speaker nothing (who would, or even could, argue that xyz IS in fact perfect?), while then paving the way for them to follow with an equally vapid statement that does nothing to inform.

    I was recently trying to assess whether buying expensive retainers for my son's post-braces teeth would be worthwhile, and asked his orthodontist what the success/stability rate with them was. She replied, "Well, we can't guarantee perfection, of course, but most people like them." Which cornered me into "being rude" by explaining to her that the fact that the outcomes weren't "perfect" was not informative or helpful; do they work in 80% of patients? 99? 40? THAT information is helpful.

  • I'd love to see how it works against a political opponent, where only opinions and fist-shaking matter, and facts are actively discouraged.

  • by Dasher42 ( 514179 ) on Tuesday June 19, 2018 @12:20PM (#56810478)

    This platform is not a presence in a debate as we think of it. It has no inherent values, no physical experience, and it doesn't have anything at stake other than the demo itself. It is not free of bias, rather, it sorts through the information and biases humans supply it with.

    That last part is why it's still super-valuable, in my opinion. In my time as a politically engaged citizen of the USA, there have been times when the big media across the political spectrum has lagged for almost a year in reporting information that could be found, with very good corroborating evidence, in major media across the globe. Regarding the Iraq war, major stories seemed to break in the summer or fall after the invasion, but they were only breaking stories in the USA. They'd been reported on extensively before the invasion in Europe and in Lebanon. I'd been reading that media and cross-comparing, detecting US corporate bias, noticing what was either left out or buried in the footnotes, and becoming aware of the biases and motives they implied.

    One really stunning example: word came out that the BBC and CNN were both carrying "full" transcripts of Hans Blix's testimony to the UN about the efficacy of UN inspectors to verify Iraqi compliance with denuclearization, but that CNN had omitted major parts of a "full" transcript. I did the homework. I downloaded both transcripts and broke out my tools as a Linux guy and analyst, and did the diff. What amounted to two large paragraphs on my screen right in the middle of the testimony were the omitted parts. They were the most detailed and convincing parts of Hans Blix's testimony, and the most relevant to a public that had a right to informed participation about whether the nation should start a pre-emptive war. That a "liberal" institution doctored verified and significant news in favor of a pro-war stance was really, really damning.
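
    That kind of transcript comparison is easy to reproduce. A minimal sketch using Python's standard difflib, assuming the two "full" transcripts have been saved as plain-text files (the filenames are hypothetical):

    ```python
    import difflib

    def transcript_diff(path_a, path_b):
        """Return a unified diff of two plain-text transcript files,
        so omitted passages show up as lines prefixed with '-'."""
        with open(path_a) as fa, open(path_b) as fb:
            a_lines = fa.readlines()
            b_lines = fb.readlines()
        return "".join(difflib.unified_diff(
            a_lines, b_lines, fromfile=path_a, tofile=path_b))

    # Lines present only in the first file (e.g. a bbc_blix.txt) appear
    # with a leading '-'; lines only in the second with a leading '+'.
    ```

    The same check can of course be done with the command-line `diff -u` on any Linux box, which is presumably what the original comparison amounted to.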

    That wasn't the end, that story goes on.

    Point is, as a human, cross-comparing many diverse pieces of information and journalism has definitely brought to light not just the story, but how some actors are trying to manipulate the story. We need an equitable, fairly administered system to make this sort of analysis available to the public. It needs to detect discrepancies and focus the public on where it can validate and verify something into being closer to fully true. It needs to be broad-based enough not to be itself a prop for those looking to use it for propaganda.

    I'm all for using this AI in ways that might help critical thought prevail.

  • by Deb-fanboy ( 959444 ) on Tuesday June 19, 2018 @12:25PM (#56810520)

    scanning the hundreds of millions of newspaper and journal articles

    So, for example, discussing the news in the UK, where most of the newspapers have a mainstream bias, this poor AI will just parrot the same old rubbish you can read in papers such as the Telegraph, Times and Guardian, or worse the Mail, Express or the Sun. Also it is wrong to associate what is in those papers with facts. All these papers bend and distort, over report or omit in order to fit their agenda. Rubbish in, rubbish out.

  • by Innominandum ( 453982 ) on Tuesday June 19, 2018 @02:01PM (#56811088)

    Guys, this is really silly. As Gödel has already demonstrated, it is impossible for a machine to meet the criteria of consciousness. "Artificial intelligence" is a chimerical idea and is not possible.

    "Imitative intelligence" would be more accurate. A machine may be able to hold a facade of "intelligence," but any semblance of intelligence has been derived from its creators.

    The claim that the machine "synthesized an argument" is misleading. Machines are not capable of a priori reasoning. The machine simply sorted information, giving the appearance of a synthesized argument. The author projected this activity of synthesizing an argument onto the machine, but that is not what happened.

    Then the author of the article made the incredible claim that the machine does not have bias, but just the same, they fed it a junk-food diet of newspaper articles & mental garbage.

    This article is propaganda.

    They're trying to persuade you to believe that machines can be intelligent, that machines will soon be just as capable as or more capable than men at thinking, and that human mental faculties are mechanical. Perhaps the hope is that the general populace will eventually fall under a large "appeal to authority, or argumentum ad verecundiam" umbrella and give up critical thinking altogether. This is already happening to people in STEM, who have largely ignored philosophy and evidently cannot think rightly.

  • Does trying to debate with ELIZA piss it off?

  • New IBM Robot Holds Its Own In a Debate With a Human

    "No it didn't."

    "Yes, I did."

    "No, you didn't."

    "Yes I did!"
