AI Programs Exhibit Racial and Gender Biases, Research Reveals (theguardian.com)
An anonymous reader quotes a report from The Guardian: An artificial intelligence tool that has revolutionized the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the specter of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals. Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. The research, published in the journal Science, focuses on a machine learning tool known as "word embedding," which is already transforming the way computers interpret speech and text.
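For readers wondering what a "word embedding" bias measurement actually looks like, here is a minimal sketch of a WEAT-style association test in the spirit of the paper. The tiny random vectors are placeholders, not real embeddings; a real test would load pretrained vectors such as GloVe or word2vec, where the reported gender associations show up.

```python
# Minimal sketch of a WEAT-style association test on word vectors.
# The hand-rolled random vectors below are placeholders, not real
# embeddings; with real GloVe/word2vec vectors, "programmer" tends to
# associate male and "nurse" female -- the effect the study reports.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # How much more strongly w associates with attribute set A than with B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

rng = np.random.default_rng(0)
vec = {w: rng.normal(size=50) for w in
       ["programmer", "nurse", "he", "him", "she", "her"]}

male, female = [vec["he"], vec["him"]], [vec["she"], vec["her"]]
for target in ("programmer", "nurse"):
    print(target, round(association(vec[target], male, female), 3))
```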
from the biased report... (Score:5, Interesting)
Re:from the biased report... (Score:5, Insightful)
"Murder", "rape", "robbery", "incarceration"... just a guess.
Re: (Score:2)
Probably a statistically accurate representation of how the mainstream media report things, i.e. it is not the AI tool that has a bias here.
The effect of training an AI on propaganda is that it is then trained on propaganda. Why is this even news? It is obvious.
Simple solution (Score:4, Insightful)
Re: (Score:2, Redundant)
Exactly. Unfortunately, a lot of people training AI don't think about this stuff, and end up with shitty AI that simply reflects pre-existing biases.
Re:Simple solution (Score:4, Interesting)
Just like for regular humans. People almost never question the religion they were born into, or their views on race and culture for that matter.
Re: (Score:3)
How can the training data be biased if it's sampled to get a uniform distribution? What if the horrifying reality is actually that the biases are pre-existing precisely because they've always closely modeled the actual data? What if people actually aren't born equal and there is a genetic component? If that is the case, how can that ever be resolved by "fudging" the training data? Doesn't this just amount to sticking one's head in the sand?
Re: (Score:2)
Then you keep tweaking the data until the biases vanish. And the data becomes useless.
Re: (Score:2)
Shouldn't the fact that the data then becomes useless be easy enough to prove? And by implication, that 'reality = racist, sexist'? I'll be looking out for your whitepaper on the subject.
Re: (Score:2)
Oh, but this is the whole point of outrage. The average, perfectly objective statistical sample is racist.
Re: (Score:2)
Could be easier said than done if the data is gigabytes of text. You have an algorithmic way of deleting racist data?
Delete it all (Score:2)
You'll probably only have about a 0.0001% false positive rate.
Re: (Score:2)
You have an algorithmic way of deleting racist data?
Train another AI based on a group of SJWs. If the SJW AI screams at something, that thing is racist.
Do the same for every politically powerful group in the US.
Re:Simple solution (Score:5, Informative)
Simple. 'decolonize' it.
https://www.youtube.com/watch?... [youtube.com]
SocJus taken to its 'logical' conclusion. Reality is bigotry.
Re:Simple solution (Score:5, Insightful)
Right-handed people are dexterous, lefties are sinister.
Re: (Score:2)
The real problem is insufficient complexity. If the AI was clever enough to understand that bias is a problem, how it works and how to self-correct, it would be able to get past the bias in the training data.
Unfortunately, current AI is extremely simple.
Re:Simple solution (Score:4, Insightful)
If the AI was clever enough to understand that bias is a problem, how it works and how to self-correct, it would be able to get past the bias in the training data.
The bias manifests itself as a pattern in the training data, as a result of patterns in reality. Why should the AI consider some patterns to be "a problem"? What's your criterion?
Re:Simple solution (Score:4, Informative)
"Simple". The ML community is very aware of this problem, but sanitizing real-world data that may be shaped by subtle biases is really, really hard. You'd need a dedicated sociology PhD involved in every ML research project - a ludicrous load - and even then you wouldn't catch everything. This is a Hard Problem to be aware of for a long time to come.
Re:Simple solution (Score:4, Insightful)
You'd need a dedicated sociology PhD involved in every ML research project
This just reminds me of the political officers in the Red Army, who accompanied each unit to make sure everyone was a good communist.
Re: (Score:2)
But that is just the thing: If this is aimed at translation or text analysis, then this is the right data to train it on, and "fixing" it actually breaks things. Translation requires accuracy, and part of any good translation is guessing right. PC is just lying about how things really are, and that may be fatal when translating or analyzing things.
Now, I do not claim this is a good thing, but lying to your statistical classifier during training is about as stupid as it gets.
Re: (Score:2)
There's a simple solution: fix the training data
The training data is fine, it's the reality that sucks.
Re: (Score:2)
In the fiction "Sword Art Online" (book 9 onward and maybe in the upcoming anime) there is an attempt at A.I. by simulating the minds of babies and getting staff to go into the simulation and raise those children - rinse and repeat for several generations thanks to an FTL-style plot device of epic levels of quantum computing allowing the simulation to be sped up
Re: (Score:2)
There is also "top down" A.I. in the setting which is really just a massive collection of lookup tables, pattern recognition and so on.
Bottom up learning also works through pattern recognition. And nobody uses lookup tables.
Not a problem with AI (Score:2)
Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases.
"unlike some humans"
There, fixed that for you. Or even better: "like most humans".
Statistical learning draws inferences based on what humans produced. If humans are crap, do not expect something better than crap.
Re: (Score:2)
Joanna Bryson, a computer scientist at the University of Bath and a co-author, warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases.
"unlike some humans"
There, fixed that for you. Or even better: "like most humans".
Exactly. AIs, once they approach human intelligence, will start off on the stupid end, so it is inevitable they will be Republican to begin with.
Human language is pretty biased. (Score:4, Informative)
Re: (Score:2)
Exactly, or take for example the Hebrew word "Adam" which means human. Only Israelites or Jews are "Adam", however, and the rest are the "goyim" or "cattle."
Yeah, no (Score:2)
Not seeing anything resembling a gender bias going back some 15-20 years here.
I'm gonna get so nailed for this :( (Score:5, Insightful)
One beer ago I wouldn't have had the nerve to say that, which says a lot about where social discourse is nowadays.
Re: (Score:2)
Completely agree with your sentiment.
For whatever it's worth, I completely agree that could be what we're seeing here.
Re: (Score:2)
Yes, I mean if you ask an AI "Out of these 1,000 candidates which would be the best choice for construction worker" and the 1,000 candidates were an even mix of men and women and the AI picked all men would that be gender bias simply because the AI thinks that for that particular job men make a better choice? I personally don't think so.
Re: (Score:2)
Funny you should mention construction and men. It's certainly male dominated, but not as much as it used to be perhaps? That might depend on the location of course. There's a lot of construction going on in London at the moment and a lot of it is on the higher tech end, i.e. not relying on the carrying capacity of humans.
Sure, scaffolders are likely to remain overwhelmingly male for the foreseeable future, since that's one job which in particular relies on a quite astonishing amount of physical strength and
I'm with you (Score:2)
I've said it before, but this is what folks mean by "institutionalized racism". It's when racism is part of the basic makeup of society. If y
Re:I'm with you (Score:4, Insightful)
It's when racism is part of the basic makeup of society.
But race is part of the basic makeup of society. You get called an SJW and shouted down because your premise is the opposite of reality, and you've put your political ideology ahead of science and your own lying eyes. This is very bad, because you don't understand the problems, so your "solutions" only make things worse.
Re: (Score:3)
nobody's ever demonstrated real differences between the races outside the societal context,
This is the complete opposite of reality. Genetics is a real thing, and yes, many, many studies [wordpress.com] have been done showing the biological differences between different human ethnic groups (shorthand collated into "races" for simplicity of reference). This is real and this is science, and there is no excuse for still parroting the radical egalitarian ideology. That is a political ideology with no basis in fact. How do you possibly arrive at the concept that somehow humans left Africa 50,000 years ago and then, a
Re: (Score:2)
I'm drunk enough to post the truth.
No, you're drunk enough to post what you believe is the truth. Big difference.
Re: (Score:2)
Looks like we have a new APK in the making. Welcome, I look forward to years of entertainment here!
SJW to sit in on computer science? (Score:2, Insightful)
A form of science commissar https://en.wikipedia.org/wiki/... [wikipedia.org] to ensure any AI is only allowed to access SJW approved data sets to learn from?
SJW approved images, authors and texts?
SJW approved and sorted political history?
An AI can't learn from the wider internet; it will be held back to small sets of SJW-approved data.
Holding back science did not really work too well for East Germany or the Soviet Union.
If the a nation wants to hold back its most ad
Re:SJW to sit in on computer science? (Score:5, Insightful)
What would an export grade AI look like after years of SJW meddling with the design?
To add on, build an AI in the politicized west and ship it to Japan. They unpack it next to a Japanese-designed AI. They ask the AIs, "is an average Japanese person more intelligent, less intelligent, or the same intelligence as an average Ugandan person?"
How does that play out? We know what each AI will be required to say. Why would anyone not afflicted with western social justice leftism ever want to buy the American AI?
Ok. Thx, bye (Score:2, Funny)
That's it. I'm quitting slashdot.
Slashdot editors have shown that they are willing to take a stand in summaries, but when it comes to this constant torrent of identity politics crap, they stay silent. I infer only one thing from this: Slashdot editors (at least passively) support the basic tenets of the SJW movement, such as that the world is socially constructed, or that all the differences between group representations in any section of society are and should be only explained by oppression by those holding all the pow
Re:Ok. Thx, bye (Score:5, Insightful)
And this is how you get hugboxes.
People who hold opinion X see a bias against it. People who hold opinion !X also see a bias against it. Both ends cry foul and drift off to places that are "not biased" (that is, biased like all others, just in a way that is acceptable to them).
If you want to leave, leave. But nobody gives a shit about Yet Another Grand Exit. Have fun in your echo chambers.
This is what folks mean (Score:5, Insightful)
Re: (Score:3)
Why don't we profile whites for crime, since a lot of white folks commit crimes?
On average, white people commit fewer crimes, so it would be stupid to profile them because they are white. You could still profile them based on other criteria, for instance the make of car they drive, or the neighborhood they live in.
Re: (Score:2)
Now, say a white kid steals a candy and a black kid steals a candy. A decent but racist cop sees candy stealing as a thing kids do, so he will give a stern talk about rights and wrongs and will make the kid apologize. Unless the kid is black, in which case he is already a lost cause
Obviously, the punishment for committing a crime should not depend on the background of the perp. This is something that can be fixed.
Now, why are whites not profiled as “drug abusers”?
Maybe because they aren't causing problems?
Re: (Score:3)
This is not something that can be easily fixed. If the members of a jury believe that blacks are more likely to commit crimes because more blacks are convicted of crimes, they'll tend to convict blacks more easily than whites, all without conscious intent. Perhaps whites tend to get better defense attorneys, because they're perceived as less likely to have committed a cri
Universal algorithmic IQ test (Score:2)
Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: "The world is biased, the historical data is biased, hence it is not surprising that we receive biased results."
The single most subversive thing that can be done in the present environment is to financially back lossless compression prizes. One such prize is the Hutter Prize for Lossless Compression of Human Knowledge — although it needs to be expanded to include all of Wikipedia. Pe
A little bit of BS in the article (Score:2)
Nope, nothing like that is happening. Algorithms which work with speech are still stupid as fuck and have exactly zero understanding of the language.
Even the most talented translators have major troubles trying to translate things between dissimilar languages. The way people categorize things in the world in order to name them differs so much between languages that proper translation is oftentimes simply impossible unless you describe a t
Re: (Score:2)
Nope, nothing like that is happening. Algorithms which work with speech are still stupid as fuck and have exactly zero understanding of the language
They are steadily improving, so they are "getting closer" to the goal, even though we can agree there's still a long way to go.
of course (Score:5, Interesting)
...the begged question is that gender or racial bias and stereotypes are intrinsically "wrong". They are to our 21st-century sensibilities, but they served humanity pretty well for millions of years.
Maybe where you have a society where women ARE primarily concerned with raising children, there are better outcomes than when men raise children or women go off to pursue their careers. Maybe where you have a society where obvious strangers are marginalized and driven away, the remainder ends up more cohesive.
I'd be curious how these AI biases would develop if 'fed' only native African literature and information.
I'm not making an 'appeal to nature' here, saying what "should" be or "shouldn't" be.
One might suggest that, evolutionarily speaking, maintaining a bias is harder than not maintaining one, assuming no reinforcement. That our language (pretty fundamental to being human, after all) is pervaded by such institutional biases would suggest that there is a value/benefit to them.
Re:of course (Score:5, Insightful)
I'm not making an 'appeal to nature' here, saying what "should" be or "shouldn't" be.
But the authors of the article are making such a statement, they just have nature completely backwards. They believe mankind, separated from "society", is naturally non-racist, non-sexist, non-gendered even, and that the outcomes of race, gender, or class groups are imposed on the formless humans by society, to where the concepts themselves of race and gender are "social constructs", and if we smash them everything will just...be great.
This is very similar to Marx's concept of communism and capitalism. He believed that mankind had no innate human nature, so the natural state of mankind was stateless communism, where everyone just naturally gets along and shares and contributes from his ability to the needs of others, and that capitalism was a foreign, oppressive system imposed on these innocents. This is why Marx is famous for his criticisms of capitalism, but as for his descriptions of communism...well not only does he not have them, he thought it was near blasphemous to try to describe how this natural communism would be carried out in practice because the imposition of such order is contrary to the natural, emergent collective spirit of mankind constrained and oppressed by capitalism. Want pure glorious wealth and utopian plenty? Just smash capitalism and you'll get it. And if you've smashed capitalism and perfect communism hasn't emerged...well it must be because you've still got some secret capitalists gumming up the works and they need to be ferreted out and sent to gulag.
This is the same concept behind feminism and anti-racism. Gender norms have nothing to do with the clear, obvious, and scientifically proven biological differences between the sexes. These are in fact imposed by the evil Patriarchy. Smash the Patriarchy and gender equality will simply emerge. If it doesn't, well, it must be because there's still evil sexists hiding around here and they need to be identified and purged. Difference in racial outcomes have nothing to do with the clear, obvious, and scientifically proven biological differences between human haplogroup populations. These are in fact enforced by evil White Supremacy. Smash White Supremacy and racial equality will simply emerge. If it doesn't, well, it must be because there still exist evil racists hiding around here and they need to be identified and purged.
This is the fundamental error of the social justice movement: the belief that race and gender are social constructs when in fact society is a racial and sexual construct.
Re: (Score:2, Insightful)
Racists are quite hard to squash.
Especially when they adopt a social justice discourse, still judging everyone by their skin color but having a nice "those are the nice guys" written over the darker portion of their 1930s skin-color ruler.
Re: (Score:2)
Racism cannot be squashed, same as stupidity cannot be. And the two are connected. It is one of the ways stupid people make themselves feel better about themselves. Now, realistically, when it comes to actually understanding things, something like 80% or so of the population is stupid. And while only a part of them go for racism, the others just use the same mechanism on other characteristics of themselves they think make them superior. Breeding, geographic aspects, age, gender, what they eat, etc. The lis
Re: (Score:2)
Racism is not just about hating certain people for their skin color, but about categorizing people in general by their skin color.
It's a very intellectually lazy way to deal with the world, which many people sadly do.
Re: (Score:2)
It's a very intellectually lazy way to deal with the world, which many people sadly do.
They do it because it works. I'm sure you also take a lot of shortcuts in judging people. If someone cuts in line in front of you in the store, do you always have the same reaction, or is your reaction different depending on whether the person is wearing a biker jacket or a sundress?
Re: (Score:2)
For fight-or-flight reactions it's quite OK to judge by appearance, but it's wiser to judge by "does it look like a local threat?" rather than "what's the skin color?", which in several cases might be the same, but not always.
Let's say you will quickly judge someone who looks like a yakuza guy over some nerdy Japanese guy, even with both being of the same "race".
Re: (Score:2)
For fight-or-flight reactions it's quite OK to judge by appearance, but it's wiser to judge by "does it look like a local threat?" rather than "what's the skin color?",
But often, people aren't judged solely on skin color. Two middle-aged black guys in expensive suits walking into a convenience store at night will make a completely different impression on people than a couple of loud kids in sweatpants. People often accuse police officers of stopping a guy "just because he's black", but in reality, they look at dozens of clues, of which skin color is just one.
Bias worth mentioning (Score:2)
I'm pretty sure you never heard of this thing:
"Women are wonderful" effect [wikipedia.org]
"Gender bias" sounds a bit ironic, with that in mind.
Re: (Score:3)
Yep.
But it does it twice: once in the event itself and again in the way it is reported.
Or rather... (Score:5, Interesting)
AIs could incorporate existing biases.
Say you train an AI that will accept or reject loan applications by giving it a stack of previous loans. If the human loan officers were biased against minorities—rejecting otherwise acceptable applications—that AI may end up doing the same. This bias is much easier to detect in human behavior, but less so with an AI, which can't explain why it made any particular choice or even what its criteria are.
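To make that concrete, here is a hedged sketch with entirely synthetic data: the historical approvals carry a penalty against a minority flag, and a model trained on those decisions learns to reproduce the penalty. Every feature and number below is invented for illustration.

```python
# Toy "loan officer" trained on biased historical decisions (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
income = rng.normal(50, 15, n)      # legitimate creditworthiness signal
minority = rng.integers(0, 2, n)    # 1 = minority applicant

# Past human decisions: approve on income, minus a biased group penalty.
approved = (income - 10 * minority + rng.normal(0, 5, n)) > 45

model = LogisticRegression().fit(np.column_stack([income, minority]), approved)
# The learned coefficient on the minority flag is negative: the model
# has absorbed its teachers' bias.
print("coefficients (income, minority):", model.coef_[0])
```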
Re: Or rather... (Score:3, Insightful)
Uhh you totally ignore the FACT that making loans to minorities is inherently more risky.
It isn't that they are "bad". They are more likely to be poor, to have less stability, and to default.
Whatever the reasons for that are, it does not change the truth: Making loans to people of color is more risky. The AI would be operating correctly if its parameters were tuned to make it the most successful loan AI.
Facts are not racist. You, however, are racist for ignoring facts based on the color of someone's skin.
Re: (Score:3)
However, even if the factors that make minorities more risky are already accounted for, an AI may be biased against them because the training data contained a correlation between race and perceived risk.
Re: Or rather... (Score:4, Insightful)
If the correlation was merely *perceived* as you say, then this is correct.
But the risk usually is real.
You won't find scientific sources for these claims because in the current climate such research is career suicide for the researcher, but that doesn't change the reality. And you can't expect an AI system to ignore the elephant in the room.
Re: (Score:3)
Yup. Statistically speaking, a black person earning $200k/year is more likely to
- end up in prison for drug trade.
- get shot by a cop (even if they are entirely innocent)
- sue the bank for discrimination when the bank collects overdue debt
- get involved in dangerous activities like political protests (anti-discrimination), be arrested or injured as a result.
Simply put, all things being equal, choosing the white applicant is safer - a more economically sound decision. And yes, it sucks, it's unfair, it's not the black perso
Re: (Score:2, Insightful)
They are more likely to be poor, to have less stability, and to default.
Economic circumstance is not inherent, is it? So the risk isn't inherent either, is it? You got confused, and then a bunch of confused people modded you up because what you said fit their prejudices and lazy correlation=causation thinking.
And of course the article is about computers making the exact same mistake—and the people getting m
Re: (Score:2, Insightful)
Economic circumstance is not inherent is it?
It could be. Your personality influences the circumstances you live in.
and lazy correlation=causation thinking.
No, there doesn't need to be causation. Just having a correlation is enough grounds for bias.
Re: Or rather... (Score:5, Insightful)
Probably an incorrect use of "inherent"; the intended common meaning is "pervasive".
It's not "inherent" as in "nothing ever can change that, it's an irrevocable part". It's prevalent. Take an individual and you may find a fantastic person. Take an average over the population, though, and you see "the average is bad." It is. Don't deny it - the correlation is strong, and while correlation is not causation, in risk assessment correlation is sufficient to deliver accurate results.
I'm not going into detail what social, political, economical and genetic factors may or may not contribute to the correlation. It's a can of worms no professional dares to approach fully objectively. But the correlation between racial and economic status is a fact, and correlation between economic status and risk is a fact. So why would a machine learning device ignore a strong factual trend, just because its existence is offensive?
Re: (Score:2, Insightful)
So why would a machine learning device ignore a strong factual trend, just because its existence is offensive?
Because SJW's want us to live in a society where anything offensive -- regardless of whether it's hard, provable, objective fact -- must be stamped out. These are the same type of people who burned people at the stake for daring to claim the Earth wasn't the center of the universe, or the same ones who destroyed scientific careers of those who dared claim luminiferous aether wasn't a real thing, or who shunned aeronautics engineers who said the sound barrier could be broken, and so on and so forth. These
Re: Or rather... (Score:5, Interesting)
Making loans to people of color is more risky.
It depends on the color. Asian Americans have lower default rates than whites.
Re:Or rather... (Score:5, Insightful)
If you truly wanted to avoid racial or gender bias you would just remove that information from what you feed into the algorithm, at which point it can't a priori be biased against anyone because it can't even evaluate them based on those criteria. But let's suppose you do that and then look at the results after the fact, add that data back in and come to the startling conclusion that your AI is disproportionately rejecting candidates from some group. It can't possibly be because it knows they're a member of that group, but because that group happens to have worse outcomes.
If you stop to think about this, it's not too hard to come to a reasonable conclusion that if your AI that knows nothing about race is suggesting that black/white/latino borrowers are a higher risk, it's because they're a higher risk. Reality doesn't care about feelings or trying to make sure that outcomes are equal across groups, so we conclude that some group is a worse risk. It probably is the case that black borrowers are more likely to default, but it's not because they're black, but because blacks are typically less well off, so of course they're going to default on loans more often. In reality they probably shouldn't have (and maybe wouldn't have) received a loan, but some policy designed to make it easier for them to get approval caused it to happen. That doesn't make them a safer risk, it just lets some people feel better about the world.
If you want to check whether your AI is racist, find a group of loan applications that are for all intents the same, with the only difference being the race of the applicant, and see if you get different results based on race for that input set. My guess is that you probably wouldn't. Because if you've stripped out racial data as a category to train on, the algorithm wouldn't suddenly decide to discriminate based on it. Also, for some machine learning algorithms (e.g., anything like a decision tree) you can look at precisely how it evaluates a case, so a step where race==groupX ? reject : approve would become pretty apparent. That's not true for all algorithms, but just because it's an AI doesn't mean it's a black box that is beyond all human understanding.
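The decision-tree point is easy to demonstrate; a rough sketch with made-up features, where export_text dumps the learned rules so a split on a protected attribute would be plainly visible:

```python
# Inspecting a decision tree's learned rules (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 5_000
income = rng.normal(50, 15, n)
credit_score = rng.uniform(300, 850, n)
y = (income + credit_score / 10 + rng.normal(0, 5, n)) > 110

tree = DecisionTreeClassifier(max_depth=3).fit(
    np.column_stack([income, credit_score]), y)
print(export_text(tree, feature_names=["income", "credit_score"]))
# Every split is on income or credit_score; with race excluded from the
# features, no rule of the form "race == X -> reject" can appear.
# (Though, as noted further down, correlated proxies such as location
# can smuggle the information back in.)
```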
Self fulfilling prophecies... (Score:5, Insightful)
Therefore, they will continue to be uneducated, unemployed, without means to make a business and in general poorer and more likely to engage in a life of crime. All that nasty stuff that comes with poverty and lack of work, education and opportunities in general.
Ergo they will continue to be riskier and worse off than those in social groups with better evaluations. Rinse and repeat.
Re: (Score:2, Insightful)
Ergo they will continue to be riskier and worse off than those in social groups with better evaluations. Rinse and repeat.
There is an obvious solution you're ignoring: how about you loan them the money? Or if you lack sufficient funds, get a group of like-minded individuals together, form a banking institution specializing in loans to these people being rejected by traditional institutions, and see how it turns out.
What? You don't want to risk your own money on such a venture? You can't find others willing to risk theirs? You find your default rates are higher than your institution can sustain? Your business fails?
Funny
Re:Or rather... (Score:4, Informative)
Reality doesn't care about feelings or trying to make sure that outcomes are equal across groups, so we conclude that some group is a worse risk. It probably is
Except the latest interpretation of the Civil Rights Act by the courts is that Disparate Impact counts the same as direct discrimination. If your company adopts a policy that has a negative disparate impact on different groups, then it's deemed in violation of the law. So even if your AI is making a correct decision, it may be deemed racially biased by the courts, and your company may be required to modify its policies to compensate for the bias.
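For what it's worth, disparate impact is commonly screened with the EEOC's four-fifths rule: if the selection rate for the protected group falls below 80% of the reference group's rate, the policy draws scrutiny. A tiny illustrative check (the rates are invented):

```python
# Four-fifths rule check on selection rates; the numbers are made up.
def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    return rate_protected / rate_reference

ratio = disparate_impact_ratio(0.30, 0.50)   # e.g. 30% vs 50% approval
print(f"ratio = {ratio:.2f}",
      "-> potential disparate impact" if ratio < 0.8 else "-> passes")
```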
Re: (Score:2)
Fantastic comment. Regarding the end part about loan applications, you said:
Because if you've stripped out racial data as a category to train on, the algorithm wouldn't suddenly decide to discriminate based on it.
If the data included any location information it could very well exhibit racial bias as an unintended consequence. I live in a very racially divided area (there is literally a road that pretty much delineates the metro area by race - a continuing consequence of the past and wealth disparity).
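A hedged sketch of that proxy effect, again with synthetic data: race is withheld from the model, but a zip-code feature that is 90% aligned with it lets the historical bias leak back into the predictions.

```python
# Proxy leakage: the model never sees race, yet its approvals still
# split along racial lines via a correlated zip_code feature (synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
race = rng.integers(0, 2, n)                    # hidden from the model
zip_code = (race + (rng.random(n) < 0.1)) % 2   # 90% aligned with race
income = rng.normal(50, 15, n)

# Historical approvals carried a bias against race == 1.
approved = (income - 8 * race + rng.normal(0, 5, n)) > 45

model = LogisticRegression().fit(np.column_stack([income, zip_code]), approved)
pred = model.predict(np.column_stack([income, zip_code]))
for r in (0, 1):
    print(f"race={r} approval rate: {pred[race == r].mean():.2f}")
```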
Re: (Score:2)
debate is about whether race is a valid grounds on which to judge someone.
Isn't that exactly the point? If the AI isn't told about race, but still recommends "racist" outcomes, there's more going on than the race of the person. The AI isn't being racist, the race of the candidate is being ignored; the person is only being judged on valid grounds.
Re:Or rather... (Score:4, Interesting)
E.g.: Your AI makes a statement, "Women be like this, while men be like this." And you tell your AI, "No AI, bad."
So your AI rethinks it and comes up with another statement, "People with vaginas be like this, while people with penises be like this." And you tell your AI, "No AI, bad."
So your AI rethinks it and comes up with another statement, "People named Betty or Veronica be like this, while people named Archie or Jughead be like this." And you tell your AI, "No AI, bad."
So your AI rethinks it and comes up with another statement, "People who wear makeup be like this, while people who don't be like this." And you tell your AI, "No AI, bad."
Etc. You could do this forever and you still wouldn't catch them all, they'd just get more subtle.
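That whack-a-mole loop is easy to simulate: repeatedly censor whichever remaining feature correlates most with the protected attribute and retrain. In the synthetic sketch below (all features invented), the model keeps leaning on the next-best proxy until nothing is left.

```python
# Whack-a-mole censoring of proxies for a protected attribute (synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20_000
gender = rng.integers(0, 2, n)
# Made-up features, each a progressively noisier proxy for gender.
remaining = {name: gender + rng.normal(0, s, n)
             for name, s in [("name_f", 0.5), ("makeup", 1.0), ("height", 2.0)]}

while remaining:
    X = np.column_stack(list(remaining.values()))
    acc = LogisticRegression().fit(X, gender).score(X, gender)
    print(sorted(remaining), "accuracy:", round(acc, 3))
    # "No AI, bad": censor the strongest remaining proxy and retrain.
    worst = max(remaining,
                key=lambda k: abs(np.corrcoef(remaining[k], gender)[0, 1]))
    del remaining[worst]
```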
Re: Or rather... (Score:2, Funny)
What can't Nlggers stop looting in emergencies and politely forage for supplies like honest white people!
Re: (Score:2)
Researchers who go looking for bias are ridiculed, booed, and then fired.
If a clear racial bias appears in your research results, you'd better bury them deep and never show anyone, lest you're marked a Nazi.
Re: (Score:2)
If they would stay there, it would be fine, but they keep going to new places and declaring them safe spaces.
Re:Bias bias bias (Score:5, Insightful)
Well.... The idea is that if you can declare place X a "safe space" where free speech and microaggressions/uncomfortable messages are strictly prohibited, then the only thing you need to do next is get a process by which you can expand the size of X, until X encompasses the entire planet, and then your mission is accomplished.
Start with something simple, like a designated area... then expand to something, say, the size of a building, then the size of a college campus, then get someone to declare public areas in a city safe spaces, then get laws passed applying to places that are private venues but places of public accommodation, then get progressive judges to adopt the same rules for more private spaces, then work on getting a multi-state area, and finally take it to all 50 states.
Re:Bias bias bias (Score:5, Funny)
Herstory will prove you wrong.
Re: (Score:3, Insightful)
This reminds me of a similar news story from a while back about how "reality was racist" because a lot of studies found that a lot of so-called stereotypes were, in fact - *gasp* - true.
Rather than accept that maybe the people they call "racist" are in fact rational beings, the study authors called out reality itself as racist.
Re: Uh, no. (Score:3, Insightful)
Wasn't there some kerfuffle over Google image searches showing black kids in police mugshots vs white kids in college campuses? It turns out when you search the internet you find every bias under the sun. Whodathunkit?
AI learns by example. If you feed it biased data it learns the bias. I don't understand why we're surprised by this.
Re: (Score:2)
If you feed it biased data it learns the bias.
Exactly. And this is nothing new, by the way; the same worry was expressed around 20 years ago after a proposal to use expert systems to assist judges with sentencing. Though when I see how some people plan to use "big data" today, I can only conclude we haven't learned anything since then.
Re: (Score:2, Insightful)
The problem is when the bias exists in reality, not in perception or opinions.
The correlation between socioeconomic status and risk of defaulting on a loan is clear, and it would be silly to question it.
The disparity between the socioeconomic status of different races is not just a fact but a huge issue, one that is loudly announced in a voice full of outrage. This means a clear correlation here too.
So why, when you have "A implies B" and "B implies C" suddenly everyone starts looking for excuses to claim "A i
Re: (Score:3)
There's logical implication and then there's causation. "A implies B" and "B implies C" means "A implies C". But "A is characteristic of B" and "B correlates with C" doesn't tell us how the causation runs, or indeed what's the best way to estimate C. It may be that, holding socioeconomic class constant, there's no significant difference between blacks and whites defaulting, in which case race would be useful in a prediction only as a proxy for socioeconomic class.
If it's irrelevant, a data mining system might still
Re: (Score:2)
Link? Or did you just make up a study to support your position?
Re: (Score:3)
Bias does not mean what the authors think it means.
It means exactly what they think it means. Maybe you are confused? It starts out as data, but once it is learned and the AI acts on it, it is bias. It reinforces old cultural norms on new generations.
Re: (Score:2)
Bias: deviation of the expected value of a statistical estimate from the quantity it estimates.
You have a bag with 10 red and 20 blue marbles. You choose a marble and write it down, then return it to the bag.
But it's your subjective prejudice to expect the blue marbles will come up more frequently, right? Everyone knows they will show up in equal proportions, you racist!
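The marble arithmetic is easy to verify: the sample proportion is an unbiased estimator, so expecting blue about twice as often as red is accuracy, not prejudice. A tiny simulation:

```python
# Sampling with replacement from 10 red / 20 blue marbles; the observed
# blue fraction converges to the true 2/3, not to "equal proportions".
import random

bag = ["red"] * 10 + ["blue"] * 20
draws = [random.choice(bag) for _ in range(100_000)]
print("blue fraction:", draws.count("blue") / len(draws))  # ~0.667
```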
Re: (Score:2)
I also wish more people who read a Psychology 101 book also read a Statistics 101 book.
Then they might learn that bias is sometimes a fact of nature, completely apart from perception. Not all random processes follow the Gaussian curve, and if you estimate them correctly, that is not merely your subjective perception.
Re: (Score:2)
Some words should be more associated with one gender than the other. Body parts and clothing in particular come to mind.