AI Programs Exhibit Racial and Gender Biases, Research Reveals (theguardian.com)

An anonymous reader quotes a report from The Guardian: An artificial intelligence tool that has revolutionized the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the specter of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. The research, published in the journal Science, focuses on a machine learning tool known as "word embedding," which is already transforming the way computers interpret speech and text.
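
For readers who want the mechanics: the Science paper scores bias with a word-embedding association test (WEAT, an adaptation of the implicit association test), measuring how much closer target words such as names sit to "pleasant" attribute words than to "unpleasant" ones in vector space, via cosine similarity. Below is a minimal sketch of that scoring idea in Python. The vectors and the name tokens are invented toy values for illustration only; a real test would load pretrained embeddings such as GloVe or word2vec.

import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(vec, pleasant, unpleasant):
    """WEAT-style score: mean similarity to pleasant attribute words
    minus mean similarity to unpleasant ones (positive = pleasant-leaning)."""
    return (np.mean([cosine(vec, p) for p in pleasant])
            - np.mean([cosine(vec, u) for u in unpleasant]))

# Toy 3-d vectors; real embeddings have hundreds of dimensions
# learned from large text corpora.
emb = {
    "gift":   np.array([0.8, 0.2, 0.1]),
    "happy":  np.array([0.9, 0.1, 0.0]),
    "agony":  np.array([-0.7, 0.3, 0.2]),
    "filth":  np.array([-0.8, 0.1, 0.3]),
    "name_a": np.array([0.6, 0.4, 0.1]),   # hypothetical target name
    "name_b": np.array([-0.5, 0.5, 0.2]),  # hypothetical target name
}
pleasant = [emb["gift"], emb["happy"]]
unpleasant = [emb["agony"], emb["filth"]]

for name in ("name_a", "name_b"):
    print(name, round(association(emb[name], pleasant, unpleasant), 3))

A systematic gap in these scores between two groups of names is the kind of effect the researchers report.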
This discussion has been archived. No new comments can be posted.

  • by turkeydance ( 1266624 ) on Thursday April 13, 2017 @09:14PM (#54232149)
    "And the AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words." ...what were those unpleasant words?
  • Simple solution (Score:4, Insightful)

    by djinn6 ( 1868030 ) on Thursday April 13, 2017 @09:18PM (#54232175)
    There's a simple solution: fix the training data. The AI cannot learn about humans except through its training data. It doesn't interact with men or women and has no idea what those words represent, except in relation to the other words it was given. If we give it racist data, it will learn to be racist, as Microsoft's chat bot [wikipedia.org] did last year. If we give it PC data, it will be PC. In the end it's the fault of whoever trained the program if it becomes biased.
    • Re: (Score:2, Redundant)

      by Fwipp ( 1473271 )

      Exactly. Unfortunately, a lot of people training AI don't think about this stuff, and end up with shitty AI that simply reflects pre-existing biases.

      • Re:Simple solution (Score:4, Interesting)

        by lorinc ( 2470890 ) on Thursday April 13, 2017 @09:39PM (#54232243) Homepage Journal

        Just like regular humans. People almost never question the religion they were born with, or their views on race and culture for that matter.

      • by Suiggy ( 1544213 )

        How can the training data be biased if it's sampled to get a uniform distribution? What if the horrifying reality is that the biases are pre-existing precisely because they've always closely modeled the actual data? What if people actually aren't born equal and there is a genetic component? If that is the case, how can it ever be resolved by "fudging" the training data? Doesn't this just amount to sticking one's head in the sand?

        • Then you keep tweaking the data, or the learned model (see the projection sketch at the end of this thread), until the biases vanish. And the data becomes useless.

          • by bytesex ( 112972 )

            Shouldn't the fact that the data then becomes useless be easy enough to prove? And, by implication, that 'reality = racist, sexist'? I'll be looking out for your whitepaper on the subject.

    • by MrL0G1C ( 867445 )

      That could be easier said than done if the data is gigabytes of text. Do you have an algorithmic way of deleting racist data?

    • Re:Simple solution (Score:5, Insightful)

      by msauve ( 701917 ) on Thursday April 13, 2017 @09:42PM (#54232253)
      As soon as you start deliberately manipulating the training data, you're introducing your own bias.

      Right-handed people are dexterous, lefties are sinister.
      • by AmiMoJo ( 196126 )

        The real problem is insufficient complexity. If the AI was clever enough to understand that bias is a problem, how it works and how to self-correct, it would be able to get past the bias in the training data.

        Unfortunately, current AI is extremely simple.

        • Re:Simple solution (Score:4, Insightful)

          by religionofpeas ( 4511805 ) on Friday April 14, 2017 @05:49AM (#54233363)

          If the AI was clever enough to understand that bias is a problem, how it works and how to self-correct, it would be able to get past the bias in the training data.

          The bias manifests itself as a pattern in the training data, as a result of patterns in reality. Why should the AI consider some patterns to be "a problem"? What's your criterion?

    • Re:Simple solution (Score:4, Informative)

      by laddiebuck ( 868690 ) on Thursday April 13, 2017 @10:49PM (#54232443)

      "Simple". The ML community is very aware of this problem, but sanitizing real-world data that may be shaped by subtle biases is really, really hard. You'd need a dedicated sociology PhD involved in every ML research project - a ludicrous load - and even then you wouldn't catch everything. This is a Hard Problem to be aware of for a long time to come.

    • by gweihir ( 88907 )

      But that is just the thing: if this is aimed at translation or text analysis, then this is the right data to train it on, and "fixing" it actually breaks things. Translation requires accuracy, and part of any good translation is guessing right. PC is just lying about how things really are, and that may be fatal when translating or analyzing things.

      Now, I do not claim this is a good thing, but lying to your statistical classifier during training is about as stupid as it gets.

    • There's a simple solution: fix the training data

      The training data is fine, it's the reality that sucks.

    • by dbIII ( 701233 )
      There's the old saying "it takes a village to raise a child". I think training a real A.I. is going to be very hard and require a lot of interaction.

      In the fiction "Sword Art Online" (book 9 onward and maybe in the upcoming anime) there is an attempt at A.I. by simulating the minds of babies and getting staff to go into the simulation and raise those children - rinse and repeat for several generations thanks to an FTL-style plot device of epic levels of quantum computing allowing the simulation to be sped up
      • There is also "top down" A.I. in the setting which is really just a massive collection of lookup tables, pattern recognition and so on.

        Bottom up learning also works through pattern recognition. And nobody uses lookup tables.

    • Crime statistics aren't racist; they are factual data... that completely ignore social and economic factors that may explain why certain groups are over-represented in the stats. Bias comes into play when the data is applied: giving a black guy a stiffer sentence because of such data (where a judgment is supposed to be about the individual). Or applying a higher insurance premium to certain races because they have a genetic propensity for diabetes or colon cancer. The latter example is fine from a purely
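
An aside on the "tweaking" debated in this thread: one concrete technique (not from TFA) is post-hoc debiasing of a learned embedding by projecting out an estimated bias direction, the "hard debias" step described by Bolukbasi et al. (2016). A minimal sketch with toy numpy vectors, assuming the he/she difference approximates a gender direction:

import numpy as np

def debias(vec, bias_dir):
    """Remove the component of a word vector that lies along bias_dir."""
    unit = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, unit) * unit

# Estimate a "gender direction" from a definitional pair such as he - she.
he = np.array([0.7, 0.3, 0.1])
she = np.array([0.1, 0.8, 0.1])
gender_dir = he - she

engineer = np.array([0.6, 0.2, 0.5])  # hypothetical occupation vector
print(debias(engineer, gender_dir))   # its he-vs-she component is now zero

Whether zeroing out such a direction fixes a bias or merely discards information is precisely the disagreement in the comments above.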
  • ...has too much time on their hands.

  • Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases.

    "unlike some humans"

    There, fixed that for you. Or even better: "like most humans".

    Statistical learning draws inferences from what humans produced. If humans are crap, do not expect something better than crap.

    • Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases.

      "unlike some humans"

      There, fixed that for you. Or even better: "like most humans".

      Exactly. Once AIs approach human intelligence they will start off on the stupid end, so it is inevitable they will be Republican to begin with.

  • by king neckbeard ( 1801738 ) on Thursday April 13, 2017 @09:33PM (#54232221)
    Most spoken languages exhibit a lot of bias. For example, Deutsch means people or folk, which lightly implies that what is not Deutsch is not people. A lot of languages have that mindset, and it's not surprising: language evolved during times when people held values we now disagree with.
    • I missed the AI part in the summary, but I still call bullshit.
    • by Suiggy ( 1544213 )

      Exactly, or take for example the Hebrew word "Adam" which means human. Only Israelites or Jews are "Adam", however, and the rest are the "goyim" or "cattle."

  • The last program I wrote solved Sudoku puzzles, written in Java (I was unemployed at the time). Before that it was a test suite for Qualcomm chips to ensure all the various subsystems kinda worked. Before that was an 802.11 driver for a chip at a startup. Memory is fading, but I think before that I was testing BREW games on various handsets (pre-iPhone).

    Not seeing anything resembling a gender bias going back some 15-20 years here.
  • by Snotnose ( 212196 ) on Thursday April 13, 2017 @10:22PM (#54232359)
    Maybe because the AIs are modeled on what works, not on what some people wish would work.

    One beer ago I wouldn't have had the nerve to say that, which says a lot about where social discourse is nowadays.
    • Completely agree with your sentiment.

      For whatever it's worth, I completely agree that could be what we're seeing here.

    • Yes. I mean, if you ask an AI "Out of these 1,000 candidates, which would be the best choice for construction worker?", and the 1,000 candidates were an even mix of men and women, and the AI picked all men, would that be gender bias simply because the AI thinks that for that particular job men make a better choice? I personally don't think so.

      • Funny you should mention construction and men. It's certainly male-dominated, but perhaps not as much as it used to be? That might depend on the location, of course. There's a lot of construction going on in London at the moment and a lot of it is on the higher-tech end, i.e. not relying on the carrying capacity of humans.

        Sure, scaffolders are likely to remain very largely men for the foreseeable future, since that's one job which in particular relies on a quite astonishing amount of physical strength and

    • I think blacks are going to be more likely to default on a loan, right? They've got higher rates of poverty. You don't even have to have a 'black guy' flag in your app. Poverty segregates all on its own, and you can just use their zip codes and the schools they went to. There's a million ways to profile somebody as black or white without having a little checkbox...

      I've said it before, but this is what folks mean by "institutionalized racism". It's when racism is part of the basic makeup of society. If y
      • Re:I'm with you (Score:4, Insightful)

        by meta-monkey ( 321000 ) on Friday April 14, 2017 @09:35AM (#54234045) Journal

        It's when racism is part of the basic makeup of society.

        But race is part of the basic makeup of society. You get called an SJW and shouted down because your premise is the opposite of reality, and you've put your political ideology ahead of science and your own lying eyes. This is very bad, because, since you don't understand the problems, your "solutions" only make things worse.

  • Will SJWs now sit in on computer science projects?
    A form of science commissar https://en.wikipedia.org/wiki/... [wikipedia.org] to ensure any AI is only allowed to access SJW-approved data sets to learn from?
    SJW-approved images, authors and texts?
    SJW-approved and sorted political history?
    An AI can't learn from the wider internet; it will be held back to small sets of SJW-approved data.
    Holding back science did not work too well for East Germany or the Soviet Union.
    If a nation wants to hold back its most ad
    • by meta-monkey ( 321000 ) on Friday April 14, 2017 @09:40AM (#54234073) Journal

      What would an export-grade AI look like after years of SJW meddling with the design?

      To add on, build an AI in the politicized west and ship it to Japan. They unpack it next to a Japanese-designed AI. They ask the AIs, "is an average Japanese person more intelligent, less intelligent, or the same intelligence as an average Ugandan person?"

      How does that play out? We know what each AI will be required to say. Why would anyone not afflicted with western social justice leftism ever want to buy the American AI?

  • by nyri ( 132206 )

    That's it. I'm quitting slashdot.

    Slashdot editors have shown that they are willing to take a stand in summaries, but when it comes to this constant torrent of identity-politics crap, they stay silent. I infer only one thing from this: Slashdot editors (at least passively) support the basic tenets of the SJW movement, such as that the world is socially constructed, or that all the differences between group representations in any section of society are and should be explained only by oppression by those holding all the pow

    • Re:Ok. Thx, bye (Score:5, Insightful)

      by Mal-2 ( 675116 ) on Thursday April 13, 2017 @11:55PM (#54232587) Homepage Journal

      And this is how you get hugboxes.

      People who hold opinion X see a bias against it. People who hold opinion !X also see a bias against it. Both ends cry foul and drift off to places that are "not biased" (that is, biased like all others, just in a way that is acceptable to them).

      If you want to leave, leave. But nobody gives a shit about Yet Another Grand Exit. Have fun in your echo chambers.

  • by rsilvergun ( 571051 ) on Friday April 14, 2017 @12:18AM (#54232635)
    By institutionalized racism. It's when it's buried so deep in your society that it's hard to separate it from the statistical data. Forest for the trees and whatnot. It starts getting hard to separate cause and effect. Actually no, that's not right: it becomes easy to _not_ separate them. In the overt scenario blacks get profiled for crime. In the not-so-overt one they can't get loans because folks in those neighborhoods are 3% more likely to default. This is what happens when you feed large amounts of data into complex systems without knowing or caring about the consequences...
  • Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”

    The single most subversive thing that can be done in the present environment is to financially back lossless compression prizes. One such prize is the Hutter Prize for Lossless Compression of Human Knowledge — although it needs to be expanded to include all of Wikipedia. Pe

  • as machines are getting closer to acquiring human-like language abilities

    Nope, nothing like that is happening. Algorithms which work with speech are still stupid as fuck and have exactly zero understanding of the language.

    Even the most talented translators have major trouble trying to translate things between dissimilar languages. The way people carve up the world in order to name things differs so much between languages that proper translation is oftentimes simply impossible unless you describe a t

    • Nope, nothing like that is happening. Algorithms which work with speech are still stupid as fuck and have exactly zero understanding of the language

      They are steadily improving, so they are "getting closer" to the goal, even though we can agree there's still a long way to go.

  • One could say that expecting correct English without consideration of various accents and practices is in itself prejudicial. However, clarity is essential, and clarity can be hard to come by. One example was from a group of black women, some of whom were from Haiti, the US and Jamaica. Instead of "she is a prostitute" they would often remark "she massage". When said in their dialect it comes out as one word, "shemassage". Now obviously we would not want an AI device that would have "shemassage" as an output
  • of course (Score:5, Interesting)

    by argStyopa ( 232550 ) on Friday April 14, 2017 @08:00AM (#54233709) Journal

    The begged question here is whether gender or racial biases and stereotypes are intrinsically "wrong". They are to our 21st-century sensibilities, but they served humanity pretty well for millions of years.

    Maybe where you have a society where women ARE primarily concerned with raising children, there are better outcomes than when men raise children or women go off to pursue their careers. Maybe where you have a society where obvious strangers are marginalized and driven away, the remainder ends up more cohesive.
    I'd be curious how these AI biases would develop if 'fed' only native African literature and information.

    I'm not making an 'appeal to nature' here, saying what "should" be or "shouldn't" be.
    One might suggest that, evolutionarily speaking, maintaining a bias is harder than not, assuming no reinforcement. That our language (pretty fundamental to being human, after all) is pervaded by such institutional biases would suggest that there is a value/benefit to them.

    • Re:of course (Score:5, Insightful)

      by meta-monkey ( 321000 ) on Friday April 14, 2017 @10:03AM (#54234197) Journal

      I'm not making an 'appeal to nature' here, saying what "should" be or "shouldn't" be.

      But the authors of the article are making such a statement; they just have nature completely backwards. They believe mankind, separated from "society", is naturally non-racist, non-sexist, non-gendered even, and that the outcomes of race, gender, or class groups are imposed on formless humans by society, such that the very concepts of race and gender are "social constructs", and if we smash them everything will just...be great.

      This is very similar to Marx's concept of communism and capitalism. He believed that mankind had no innate human nature, so the natural state of mankind was stateless communism, where everyone just naturally gets along and shares and contributes from his ability to the needs of others, and that capitalism was a foreign, oppressive system imposed on these innocents. This is why Marx is famous for his criticisms of capitalism, but as for his descriptions of communism...well not only does he not have them, he thought it was near blasphemous to try to describe how this natural communism would be carried out in practice because the imposition of such order is contrary to the natural, emergent collective spirit of mankind constrained and oppressed by capitalism. Want pure glorious wealth and utopian plenty? Just smash capitalism and you'll get it. And if you've smashed capitalism and perfect communism hasn't emerged...well it must be because you've still got some secret capitalists gumming up the works and they need to be ferreted out and sent to gulag.

      This is the same concept behind feminism and anti-racism. Gender norms have nothing to do with the clear, obvious, and scientifically proven biological differences between the sexes. These are in fact imposed by the evil Patriarchy. Smash the Patriarchy and gender equality will simply emerge. If it doesn't, well, it must be because there are still evil sexists hiding around here and they need to be identified and purged. Differences in racial outcomes have nothing to do with the clear, obvious, and scientifically proven biological differences between human haplogroup populations. These are in fact enforced by evil White Supremacy. Smash White Supremacy and racial equality will simply emerge. If it doesn't, well, it must be because there still exist evil racists hiding around here and they need to be identified and purged.

      This is the fundamental error of the social justice movement: the belief that race and gender are social constructs when in fact society is a racial and sexual construct.

"God is a comedian playing to an audience too afraid to laugh." - Voltaire

Working...