A New Bill Would Force Companies To Check Their Algorithms For Bias (theverge.com) 183
An anonymous reader quotes a report from The Verge: U.S. lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems -- like facial recognition or ad targeting algorithms -- for bias. The Algorithmic Accountability Act is sponsored by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), with a House equivalent sponsored by Rep. Yvette Clarke (D-NY). If passed, it would ask the Federal Trade Commission to create rules for evaluating "highly sensitive" automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.
The Algorithmic Accountability Act is aimed at major companies with access to large amounts of information. It would apply to companies that make over $50 million per year, hold information on at least 1 million people or devices, or primarily act as data brokers that buy and sell consumer data. These companies would have to evaluate a broad range of algorithms -- including anything that affects consumers' legal rights, attempts to predict and analyze their behavior, involves large amounts of sensitive data, or "systematically monitors a large, publicly accessible physical place." That would theoretically cover a huge swath of the tech economy, and if a report turns up major risks of discrimination, privacy problems, or other issues, the company is supposed to address them in a timely manner.
What is bias? (Score:1, Interesting)
What is bias? Does "bias" mean "not a white male?"
In Asia, AI training data is almost exclusively Asian. That means results will skew Asian. Is that evidence of algorithmic bias? How would you go about determining that?
Re:What is bias? (Score:5, Insightful)
Re: (Score:3, Insightful)
Unfortunately in today's world, political correctness wins - and you'll be banned for stating the facts.
Re: Would /. be subjected to this regulation? (Score:2)
Does Slashdot have algorithmic moderation? If they can do that then why can't they support "smart" punctuation?
Re: (Score:2)
When the facts say that men are on average stronger and taller than women, are the facts wrong?
If an algorithm is hiring bricklayers, and all it knows is the gender of the applicant, it will pick men.
But feed the algorithm enough data: the weight, strength, job history, laying-rate test results, etc., and gender bias will be removed. It will pick the best applicants, who just happen to be men. (OK, all the applicants were men, but you get the idea.)
Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.
Re:What is bias? (Score:4, Insightful)
Interestingly enough in your example, even if you removed the actual gender from the data, you'd probably still have a 'biased' selection algorithm.
This came up in some other scandal where an algorithm *tried* not to be racist by excluding race and ended up still very biased in a law enforcement context. Note that the algorithm was seemingly bogus for other reasons, so it's not the best example, but even if it was working correctly it still probably would have been biased, and the bias would have been undeserved. Notably, they looked at arrest records of the parents as an indicator, and if a biased system caused their parents to be arrested, then the system would gladly extend that bias to a new generation.
Which all points to a key problem of playing 'whack-a-mole' with various endpoints where bias manifests when the bias problem is a bit more systemic. If a field is unfairly excluding minorities or women, then you don't just wave a wand at the employers; you have to look at education and cultural upbringing and accept that correcting a balance problem may be a generational problem. Also make sure the people you think are being slighted actually want this kind of help, rather than elevating the state of things they would rather do.
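For what it's worth, the "removed the feature but kept the bias" effect is easy to demonstrate. Here is a minimal Python sketch on synthetic data; the feature names and coefficients are invented for illustration, not taken from the scandal above:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
gender = rng.integers(0, 2, n)                    # withheld from the hiring model
strength = 1.2 * gender + rng.normal(0, 1, n)     # correlated with gender
job_history = 0.6 * gender + rng.normal(0, 1, n)  # also correlated
X = np.column_stack([strength, job_history])

# Probe: can the remaining "neutral" features predict the withheld attribute?
X_tr, X_te, g_tr, g_te = train_test_split(X, gender, random_state=0)
probe = LogisticRegression().fit(X_tr, g_tr)
print(f"gender recoverable from proxies with accuracy {probe.score(X_te, g_te):.2f}")
# Anything well above 0.5 means a model trained on these features can still
# discriminate by gender without ever seeing it.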
Re:What is bias? (Score:4, Insightful)
Assuming the algorithm is appropriately designed, it should only matter whether or not your parents being incarcerated is a good predictor of recidivism. If it isn't, the data would show that a bunch of black people who were arrested had children who didn't commit crimes, and that there were several black parents who were never arrested that did have children who committed crimes. I understand that someone could easily look at the data wrong and draw terrible conclusions (see the gender pay gap as one common example) based on bad reasoning, but that's another matter if we're assuming that the algorithm was properly designed.
I think you'd have a stronger claim with the argument that arresting parents over trivial matters or non-violent crimes eroded the family structure in many African American communities, which resulted in an increase in criminality. Studies that have explored this to that level of detail support such reasoning. It isn't that black people commit more crime because they are black, it's that poor people from single-parent families commit more crime, and there happen to be a disproportionate number of black people that fall into that group. It's not the only factor, but we'd significantly reduce the problem by decriminalizing drugs.
Re: (Score:2)
While I'm willing to believe the other parts of it, I wouldn't count out the problem of trying to fix biased systems by training algorithms. A machine learning strategy is just going to try its best to imitate the system it is being fed data about. Generally we lack straightforward means to 'adjust' such algorithms.
The situation you described is indeed what I was thinking of, and I did oversimplify, but the core remains: the algorithm cannot measure the absolute truth, only the historical record
Re: (Score:2)
Of course, evidence suggests that the arrest/conviction experience is unfair to people performing the same sorts of activities. So historical arrest/conviction/recidivism data is not the ideal basis for an algorithm. Just because someone got arrested/convicted doesn't mean they actually did it. Just because someone didn't get arrested/convicted doesn't mean they didn't do it.
Re: (Score:2)
Interestingly enough in your example, even if you removed the actual gender from the data, you'd probably still have a 'biased' selection algorithm.
That is not "bias" by gender though, but bias by relevant correlates, which is fair.
If Google hires more white and Asian male engineers, and few black females, it is not bias by race and gender, but bias by the intelligence and skills of the applicant pool, which is affected not just by skills but by the differing preferences of the demographics.
Any small difference in engineering aptitude between men and women is dwarfed by their different preferences and interests. No politically correct algorithm is going t
Re: (Score:2)
There may be different preferences, but to what extent are those interests engineered by culture in a way that disadvantages people? We drill into kids' heads that they *should* want something, and coincidentally, as a culture, we make life miserable for anyone pursuing that aspiration... There are of course two fixes: changing culture to truly make all avenues equally appealing, or fixing the problem where we under-compensate highly valuable responsibilities.
The problem is part of the advertised purpose of using algo
Re:What is bias? (Score:5, Insightful)
Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.
If that were true then it wouldn't matter. However, either by nature or nurture, color matters. If it didn't, Asians wouldn't be given penalties and other groups bonuses for college admissions. If we want to argue that race isn't important, and I don't think it is important, then we have to do away with the diversity quest and let it play out.
Re: (Score:2)
A black man has no such option - even if he was born upper class the assumption that he's poor trash trying to hide in a suit will follow him all his life.
Seriously? I find that hard to believe. But perhaps I'm biased since most of what I know about America comes from American TV, which is full of black characters in upper-middle-class roles.
But no. If someone who dresses and talks like, say, Obama, comes into your shop, nobody is going to be watching him closely in case he steals something.
If a white guy dresses and talks like a street criminal, he will be treated as such. Surely?
Re: (Score:2)
The situation in fictitious media is improving much faster than in reality - but we still have black men being arrested by cops for breaking and entering while entering their own homes in high-end neighborhoods. Being pulled over for driving expensive cars, etc. And of course the news tells a completely different story - a black man killed by cops over what should have been a minor issue, if even that, is almost always a "thug", whereas a white mass-shooter is a "disturbed individual".
I suspect your own country has similar problems, but you'd have to be either very observant, or rude enough to ask a black friend about it directly to really notice.
You yourself are only telling part of the story. Let's confront the elephant in the room: 11% of the people commit ~1/2 the murders. This is not bias - it's what creates bias. At some point one has to own one's own reputation instead of blaming others. Black people are mostly murdered by ... black people. Racism is not the factor that it's made out to be.
Re: (Score:2)
Alternately - poor people are mostly murdered by poor people.
How do the odds of an upper class black man being a murderer compare to those of an upper-class white man?
> This is not bias - it's what creates bias.
Very true. But as soon as you try to apply statistical information to individuals, that bias becomes unjustified discrimination. Especially in a situation like we currently have, where far more blatant historical racism forced most black people into poverty - which is well known to increase the probability of criminal behavior.
Re: (Score:2)
Very true. But as soon as you try to apply statistical information to individuals, that bias becomes unjustified discrimination.
Truer words were never spoken. This is why I'm against special rules for any group, regardless of race, gender, nationality, or any other way you want to segment people. The current policies assume two wrongs will somehow make history right.
Especially in a situation like we currently have, where far more blatant historical racism forced most black people into poverty - which is well known to increase the probability of criminal behavior.
Even accounting for economic status that demographic commits more crime than expected. Plus there is the Asian Privilege issue that remains unaccounted for in the historical bias narrative.
Almost all mass shootings in the U.S. are committed by white men - and yet, as a white man, I don't face automatic suspicion of being a mass shooter. I'm not forced to pay for another man's crimes simply because of the color of my skin. Why should a black man be?
Since you're here I can expect you have some awareness of relative frequency of ev
Re: (Score:3)
Even if it matters, it might not be fair. Let's say that a group of the population is on average poorer. Given that a person's information is somewhat noisy, a good machine learning algorithm that can determine that a person is a member of a poor group will give that person a bias towards poor. In other words, t
Re: (Score:2)
a good machine learning algorithm that can determine that a person is a member of a poor group will give that person a bias towards poor.
Yes, but only if you have insufficient data. If you have to choose between loaning your money to two people, and all you know is one is black and the other white, then of course you are much safer loaning it to the white guy!
But if you have their tax records and bank statements for the last ten years, the skin colour becomes irrelevant!
Re: (Score:2)
So how much information is enough? It might be more than people realize. A machine learning algorithm is based on correlation, so it will probably give the "bad" information some influence. And what happens for a person who doesn't have many records? I guess it's tough luck, since any priors start to have a bigger influence.
Exactly the opposite will happen (Score:2)
Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.
Exactly the opposite will happen. E.g., let's pick an example that won't rub POC the wrong way, since it disadvantages men: car insurance.
Let the AI know the gender and it will figure out that men are more likely to get into a crash.
Stop feeding gender to the AI and it will figure out that people named John are more likely to be causing trouble than people named Julie.
You will have to play a long cat-and-mouse game, cutting off information sources until the AI becomes useless.
Just think about the whole "discrimination" i
Re: (Score:2)
it disadvantages men: car insurance.
Let the AI know the gender and it will figure out that men are more likely to get into a crash.
Stop feeding gender to the AI and it will figure out that people named John are more likely to be causing trouble than people named Julie.
You will have to play a long cat-and-mouse game, cutting off information sources until the AI becomes useless.
Don't cut off data, add more!! Yes, men crash more than women, but why? Given enough information about the individual, the algorithm will stop making guesses based on gender. Age, personality, aggression level, claims history, driving skill, risk-taking ... Add in a GPS driving log for younger drivers, and each person will pay a premium based on their own individual risk, not on what is between their legs. Of course men and women are different, so average premiums will differ, but there will be
Re: (Score:2)
Re: (Score:2)
Adding more data could help you make better estimates about the likelihood of that particular human crashing cars,
Yes, that's what I'm saying. Fair treatment based on individual merit. This does not imply equal outcomes for classes for people.
And trying to force equal outcomes would be very bad.
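Here is a minimal Python sketch of that "add more data" claim on synthetic data; risk_taking is a hypothetical stand-in for the direct risk factors listed above:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
gender = rng.integers(0, 2, n)                    # 1 = male (hypothetical coding)
risk_taking = 0.8 * gender + rng.normal(0, 1, n)  # the factor that actually causes crashes
crashes = risk_taking + rng.normal(0, 1, n)

m1 = LinearRegression().fit(gender.reshape(-1, 1), crashes)
m2 = LinearRegression().fit(np.column_stack([gender, risk_taking]), crashes)
print("gender coefficient, gender alone:     %.3f" % m1.coef_[0])  # ~0.8
print("gender coefficient, with risk_taking: %.3f" % m2.coef_[0])  # ~0.0
# Conditioned on the causal feature, gender adds nothing - but this only
# works when the causal feature is actually measurable.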
Re: (Score:3, Informative)
When facts conflict with beliefs (especially politically), which do you think will win?
Especially when we are talking politically the answer is clearly beliefs. How else do you explain Trump and Brexit?
Re: (Score:2)
When facts conflict with beliefs (especially politically), which do you think will win?
Especially when we are talking politically the answer is clearly beliefs. How else do you explain Trump and Brexit?
You forgot liberal postmodernism and religion. It explains that too.
Re: (Score:2)
That's bullshit democrat bias right there!
No, it isn't. First, I'm not American and certainly not a Democrat (or Republican, for that matter) supporter, and secondly, the promotion of beliefs over facts is responsible for both the cause and the effect of the Trump and Brexit votes. Voters recoiling from the far-left philosophy that puts feelings over reality is what pushed people to the far right, where they voted based on lies over reality.
Personally, I tend to rather like reality since it is the only thing that ultimately, regardless of our individual
Re:What is bias? (Score:4, Insightful)
Which has nothing to do with bias. Bias, in this context, is unwarranted assumptions. Men are on average stronger and taller than women, but a system which, say, ranks potential firefighter applicants using their gender as a factor instead of looking at their performance in the actual job is biased.
Re: What is bias? (Score:1, Insightful)
Thank you for finally admitting that 'affirmative action' is the purest form of bias.
Many bias defined by outcomes (Score:5, Insightful)
Which has nothing to do with bias. Bias, in this context, is unwarranted assumptions. Men are on average stronger and taller than women, but a system which, say, ranks potential firefighter applicants using their gender as a factor instead of looking at their performance in the actual job is biased.
Sorry, but you haven't been listening to the left if you think the test for bias is about assumptions. Equal outcomes is very strongly being pushed as the measure for bias.
Do you have more males than females going into trades? That must mean a bias against females exists in the trades.
Do you have more Asians getting into STEM? That must mean a bias in favor of Asians in STEM.
Language is being redefined and weaponized to push people's agendas (I know, it always has been). Today we have the definitions of equality, racism, bias, violence, assault, and others being changed to better fit agendas. Racism is the grossest example because of its importance and power. I grew up understanding racism to be discrimination based on race. Today, though, the push is on to redefine racism to be a combination of discrimination AND power. This turns the convenient trick that 'whites' have all the power, so now only they can be racist, by definition.
Re: (Score:2)
Equality of outcome is a scourge that needs to be purged.
Well said. Pity you posted AC.
Left and Right should be in balance
Left = Open to ideas, artistic, compassionate
Right = Structured, Rules, Order
Marxist Left = Wanting control of your ideas through government. Power to remove any threat to their ideology that all people (except them as the rule makers) are equal.
Thus they don't think it is hypocritical to use discrimination (e.g. minorities have lower entry standards) to achieve their ideology of equality of
Re: (Score:2)
Apart from thieves there's probably one of each of those in the team.
Re: (Score:2)
Which has nothing to do with bias. Bias, in this context, is unwarranted assumptions.
Assumptions like a specific dopey-ass kid being more likely to get his vehicle wrecked than a group of older squares?
The whole point of many of these systems is rendering prejudicial assumptions about future behavior based on limited knowledge. The name of the game, in fact the very reason these systems exist at all, is inherently prejudicial.
What people interested in these things really seem to be seeking is to curtail the rendering of judgment in the first place.
Men are on average stronger and taller than women, but a system which, say, ranks potential firefighter applicants using their gender as a factor instead of looking at their performance in the actual job is biased.
Who seriously believes anyone cares about mo
Re: (Score:2)
Assumptions like a specific dopey ass kid is more likely to get their vehicles wrecked than a group of older squares?
That's another interesting example.
In the UK women used to get cheaper car insurance. That turned out to be illegal under gender equality rules. Insurance companies had to stop using gender as a risk factor. Women's car insurance went up, men's went down, in some cases a lot for younger drivers.
I see a lot of complaints about age discrimination in the tech jobs market. I wonder if older tech workers would accept removing age as a risk factor for insurance, pushing their premiums up, in exchange for also rem
Re: (Score:2)
New firefighters have to undergo some tests before being accepted, both physical and academic. Those tests are designed to measure their likely performance at the job.
Re: (Score:2)
If the test is an accurate assessment of future job performance, creating an easier test invalidates its predictive ability.
Two items to check: "IS the test an accurate assessment of future job performance?" and "IS the easier test less predictive?" They both seem like reasonable assumptions, but the world is full of cases where the common sense idea is inaccurate.
Re: (Score:2)
Diana Moon Glampers - the United States Handicapper General [archive.org] in 2081.
Re: (Score:2)
When the facts say that men are on average ... taller than women
#notallmen
Definition (Score:4, Insightful)
Bias is favoring one thing over another. Which is what you want certain algorithms to do. I want Youtube to find stuff I like. I want Google to find pages that are relevant to me.
Not sure how you are going to tease out the "good" bias from the "bad" bias, though. To extend your example, if 90% of the people in Hong Kong looking for a famous concert pianist are trying to find Lang Lang, who is hugely popular there, he's going to come up pretty fast when looking for concert pianists in general, which is what you want. It means the algorithm is being biased against Helene Grimaud, which is fine, because she isn't what most people are looking for in Hong Kong. That doesn't mean she doesn't come up at all, it just means she's ranked lower in the search results.
Re: (Score:2)
Bias is favoring one thing over another. Which is what you want certain algorithms to do. I want Youtube to find stuff I like. I want Google to find pages that are relevant to me.
How about a dating site that decided you had a bias in favour of tall partners, or light skinned partners? There is no simple answer to this.
Some people consider things like hair colour preference to be a matter of personal taste and completely fine to filter by. Other people complain that they are short and don't get dates and it's unfair.
Out in the real world people may select potential partners based on those preferences. But often relationships start without any deliberate selection, through friends or
Re: (Score:2)
How about a dating site that decided you had a bias in favour of tall partners, or light skinned partners?
If their algorithm has decided this and it doesn't reflect your actual preference, then it's a flaw in their algorithm. I'm pretty sure there is incentive for them to fix it, if that provides better results.
I think, if the algorithm is reflecting what people actually want, then there is no problem. I think people are conflating machine learning with "programmed-in" biases.
Re: (Score:2)
Are we okay with filtering people based on those preferences though?
Say you are a significantly below average height male. Conventional standards of attractiveness in many western countries favour tall men. In fact short men are often the butt of jokes. Short men are going to find it hard to meet someone on a dating site if it filters them out based on perceived preferences, or allows the user to set a minimum height when searching.
Some people argue that is just individual preference and it's fine. Sucks bu
Re: (Score:2)
Are we okay with filtering people based on those preferences though?
For a dating site, yes, absolutely. You wouldn't make the same argument against allowing people to filter based on sex, would you? Wouldn't bothering to list sex as an attribute on a person's profile just reinforce standards of conventional attractiveness too?
Making a dating site that's less useful at helping people find attractive (physical or in a broader sense) partners just means that people will go to a different site. If you don't like it, go make a pure personality dating site. I have a feeling it wi
Re: (Score:2)
Personally I think gender selection for dating is fine, but with certain limits. It should be the gender the person puts down, nothing like "biological sex at birth" or "has penis". And it should include provision for non-binary.
Re: (Score:2)
If someone doesn't want to date people with a penis, then it would save a lot of time and grief if they knew that right away.
Re: (Score:2)
Actually yeah, I take it back. Trans people are in enough danger as it is; best to let them decide whether or not to declare trans, and let people filter for trans people if they are that way inclined.
Re: (Score:2)
For dating, if a discrepancy between declared and apparent gender is going to be an issue, wouldn't both sides want to get that out of the way immediately?
Re: (Score:2)
Re: (Score:2)
There are some good reasons for filtering by gender, some justification for it. Height though seems to be just a thing that conventional beauty considers to be a factor for arbitrary reasons.
Re: (Score:2)
Are we okay with filtering people based on those preferences though?
I'm not inclined to impose my values on what criteria people use when they are choosing whom to date. I think it should be up to the individual. There are already dating sites that cater to certain overall preferences (jdate/christian date/muslima/grindr/etc...). If you care about that kind of thing then great. If not, go to a dating site that doesn't. I'm sure they are out there.
As for unintended bias in algorithms, I'm sure it could happen, but again, I think it's a bug that most dating sites would wa
Re:Definition (Score:5, Insightful)
How about a dating site that decided you had a bias in favour of tall partners, or light skinned partners? There is no simple answer to this.
And how about a dating site that figures out you had a bias for a certain gender?
Re: (Score:2)
That gets to the heart of the question: to what extent are preferences something a person has control over, something dependent on social norms and influences, or something inherent? And to what extent does reinforcing those preferences contribute to systemic biases, if any?
Re: (Score:2)
Does it matter what preferences I have control over? Why can't I just find a date that matches my preferences, whether they are inherent or created by social norms?
Re: (Score:2)
It's more a systemic issue: if the dating website never recommends any X people, it's reinforcing that preference. If you see X people on the list, it at least presents the opportunity to broaden your options a bit.
It's probably a good thing for the dating site too. There are only so many people who match very specific criteria near you, and making some less perfect match suggestions increases the probability of you finding someone you end up actually liking.
Re: (Score:3)
Bias is favoring one thing over another. Which is what you want certain algorithms to do. I want Youtube to find stuff I like. I want Google to find pages that are relevant to me.
Not sure how you are going to tease out the "good" bias from the "bad" bias, though.
Right. The problem is that the word "bias" is overloaded. Bias can mean prejudice. Bias can also mean a systematic, non-random distortion of a statistic. Algorithms are completely incapable of the former, as they have no sentience. It's the former that the bill wants to target, but the bill punts on the practically impossible task of defining prejudice by asking a group of humans to do that at a later time. This defining of prejudice by prejudiced humans can never lead to the eradication of prejudice bu
Re:Definition (Score:5, Insightful)
These days that means that a hiring algorithm had better hire better than 50% women, and every ethnicity in relation to its percentage representation in the population. What the algorithm is not allowed to do is take into account any factors that might skew that, such as the applicant pool being predominantly one group, or the qualifications of individual applicants.
Re: (Score:2)
Forget about 'what is biased' and consider 'what is large'.
Re: (Score:2)
Here is a book on that.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Well, as Asians they're exempt from this kind of thing. It's only western white folk who need to worry about diversity, bias, blah blah.
Re: (Score:2)
Equality of outcome (Score:2)
This is an attempt to enforce equality of outcome
You keep using that word. I do not think it.. (Score:1)
means what you think it means.
Dear lords, as a cybersecurity/IA professional, reading this thing makes my head hurt from the buzzword bingo and absolutely worthless definitions included within.
"taking into account the novelty of the technology used" - what? WTF does that mean? Is that a legal phrase, or the verbal diarrhea of a staffer who thinks this sounds cool but is worthless from a legislative, and more importantly judicial, perspective? The corporate lawyers are going to run rings around this B.S.
Inconvenient truths (Score:1)
You can't legislate them out of existence.
Excellent idea (Score:3, Insightful)
Let's get some hard data showing the bias that is present in censorship. The US is more conservative than liberal:
https://news.gallup.com/poll/2... [gallup.com]
Without algorithmic bias online media would lean conservative for the simple reason that the US has more conservatives than liberals. Yet somehow online platforms (Reddit, Facebook etc.) tilt overwhelmingly liberal.
This can only be a result of bias that has been put into algorithms and sanctioned.
Comment removed (Score:4, Informative)
Re: (Score:2)
The rest of the world also knows that the US has no left left. You can vote either idiot con man or lunatic.
FTFY
Re: (Score:2)
You didn't really get the GP's point did you?
Re: (Score:2)
Re: (Score:2)
I think your premise would have likely been the case in the early days of the Internet. When I first started working with the Internet you didn't have very many people online who were over 40. I certainly remember old people who were proud of their lack of technical skills.
That being said, in the years since most of the US adult population has joined the Internet with 89% of adults in the US using it.
https://www.statista.com/stati... [statista.com]
In the old days algorithms were indeed influenced by the data that they rec
Re: (Score:3)
There's nothing stopping con
The problem is that the bias reflects real data (Score:3)
I am currently reading "The Sum of Small Things." In the first chapter, the idea of different racial groups having different demand levels is shown through the data.
There are valid reasons for that difference in demand. However, to pretend it isn't there is to try to live in denial.
In the case described in the book, the increased demand among Blacks for conspicuous-consumption goods, ceteris paribus, is based on the belief, often not the result of conscious decision making, that it is necessary to carry visible markers of the middle class because membership is not assumed. Now, we can reject this conclusion. However, to reject the discussion because we reject the data gets us no closer to truth. Instead, it moves us away from truth.
Re: (Score:2)
Joke. (Score:1)
Q: What do you call a Black test-tube baby.
A: Janitor In A Drum.
Machines learn from humans - we are biased (Score:2)
What this bill proposes is to replace the real, actual bias with another, artificial bias that is more desirable / politically correct / whatever.
Not saying this is a bad thing - combating centuries-old entrenched preconceived ideas is probably a good thing more often than not - but please stop saying we're *removing* bias.
CCTV (Score:1)
Data tracks to decades of FBI stats.
It's not bias to have a computer create a database of crime.
To detect the use of shared/fake ID by illegal immigrants.
Clueless (Score:2)
This strikes me as another attempt by clueless pols to legislate fairness. Not that legislating fairness is wrong; however, in this case, it is more "Well golly, tech companies can do anything, we'll make them do this!!!" Ya, and they wouldn't find a way to game an unworkable mandate, eh?
Giant companies better watch out! (Score:2)
Re: (Score:3)
Data science uses training data that often contains factors like race, sex, income, and education level that, when included, cause an algorithm to train to moderate or recommend people differently based on their assigned group.
But even if you leave out factors like race and sex, it is possible that the machine learning application figures it out for itself, and creates an internal pattern that happens to produce a good match to race or sex, or any other factor, and then discriminates based on that internal pattern.
If an investigator then looks at the result, it appears that the AI has certain biases.
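A minimal Python sketch of that investigator's view, on synthetic data (the group, neighborhood, and penalty numbers are all invented): a model that never sees the protected attribute still reproduces a historical penalty through a proxy.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
group = rng.integers(0, 2, n)                 # protected attribute, never a model input
neighborhood = group + rng.normal(0, 0.5, n)  # proxy feature correlated with group
skill = rng.normal(0, 1, n)                   # legitimate feature, independent of group
# Historical labels carry a built-in penalty against group 1 (the "biased system"):
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([neighborhood, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted-hire rate {pred[group == g].mean():.1%}")
# The audit shows a gap by group even though group was never an input: the
# model recovered it from the neighborhood proxy and reproduced the penalty.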
Re: (Score:2)
If blacks commit 2x as many crimes, then they should be arrested 2x as much
There can also be self-reinforcing statistics. Just for argument's sake, let's assume blacks constitute 2x of the arrests. That does not mean they're committing crimes 2x more often, just that they're being caught 2x more often. It is possible for blacks to be more closely scrutinized than whites, possibly resulting in more white criminals not being caught.
You need to be very careful when interpreting what a number means. And like all facts, interpretations are subjective.
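A toy Python simulation of that self-reinforcing effect; the offense and scrutiny rates are invented for illustration:

import numpy as np

rng = np.random.default_rng(3)
n = 100_000
offense_rate = 0.05                            # identical for both groups by construction
scrutiny = {"group A": 0.10, "group B": 0.20}  # B is watched twice as closely

for name, s in scrutiny.items():
    offended = rng.random(n) < offense_rate
    arrested = offended & (rng.random(n) < s)
    print(f"{name}: true offense rate {offended.mean():.2%}, arrest rate {arrested.mean():.2%}")
# The arrest data suggests B offends twice as often; a model trained on
# arrests would recommend still more scrutiny of B, closing the loop.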
What About AI? (Score:2)
Re: (Score:2)
One would have to spend considerable time and effort designing tests to determine bias
Simple. If result is not 50% male, 50% female, it's biased and needs to be corrected.
Re: (Score:2)
How did you determine that 50% male, 50% female is the correct answer?
Seems arbitrary.
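For concreteness, here is what the disputed 50/50 test actually computes, as a minimal Python sketch (the toy decisions and groups are invented):

from collections import defaultdict

def selection_rates(decisions, groups):
    # Fraction selected per group; decisions are 0/1, groups are labels.
    picked, total = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        picked[g] += d
        total[g] += 1
    return {g: picked[g] / total[g] for g in total}

print(selection_rates([1, 0, 1, 1, 0, 0, 1, 0],
                      ["m", "m", "m", "m", "f", "f", "f", "f"]))
# {'m': 0.75, 'f': 0.25} - "biased" by this metric. Whether equal rates are
# the right target is exactly what's disputed: demographic parity assumes
# equal base rates in the applicant pool.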
Equality of outcomes again (Score:4, Interesting)
If we're using neutral data as an input and the system comes to its own conclusions... doesn't that say something about the data set? Shouldn't we try to understand why the algorithm came to that conclusion instead of immediately jumping to "check your privilege"?
Re: (Score:2)
Shouldn't we try to understand why the algorithm came to that conclusion instead of immediately jumping to "check your privilege" ?
Isn't that what we are doing? Trying to understand if there are flaws in the training data or the way the training was administered?
The reality is that it's extremely difficult to provide completely unbiased training data. Worse still, it's often an ongoing process, e.g. if you have a data set for deciding sentencing, it will need to keep evolving over time to reflect new laws and new circumstances that didn't exist before. Just like we need to keep a constant eye on human bias in the system, the same is goi
Re: (Score:3)
it's extremely difficult to provide completely unbiased training data.
The problem is that a lot of these biases are actually real. In order to get unbiased data, you have to artificially shape it by applying an opposite bias.
Re: (Score:2)
"Group X commits more crimes" may be a statistical fact, but should it influence sentencing?
Something being "real" in some sense isn't always the important thing.
Re: (Score:2)
Mod parent up insightful
Re: (Score:3)
This is about deciding whether the data used is neutral and whether the system is neutral. A system trained on data that is not neutral will not be neutral either. For example, if an algorithm is set up to address policing because the police have been found to be biased, but it is trained using previous arrest records from those same police, it would be biased as well.
So if an algorithm doesn't recognize Mickey Mouse (Score:2)
... is it biased?
E.g. an algorithm that is supposed to recognize humans will probably do worse for humans in a Mickey Mouse costume. Should it now be trained to recognize those dressed up as MM equally well, although that is a very rare case? Or should it be trained so that it deals best with the situations it will probably encounter more often, and that are thus more relevant?
I.e. should the algorithm be trained with a representative sample (for the country it is to be used in), or should every ethnicity,
What about the bias that users want? (Score:2)
The vast majority of users serviced by AI systems have expectations by which they judge the service. When humans serve those users, good customer service usually dictates meeting those expectations. The expectations are largely driven by the microcultural background of the customer. At other times, they are expressed in the phrasing of the question, especially in context with the microcultural background. When a human has a great sense of a customer's expectations and utilizes it to meet those expectations
Cathy O'Neil (Score:2)
Watch Weapons of Math Destruction by Cathy O'Neil [youtube.com] to see how algorithms have bias, and the results can also be used in various ways. If this law addresses some of that, then it is a positive change.
Re: (Score:2)
Algorithms have no intrinsic bias; they are just a huge set of algebraic equations or, depending on how you mean it, the software implementation of said equations. Any bias you see in the results comes from the training dataset. Any "expert" who rants about how "algorithms have bias" is clueless.
Responsibility lies with whoever trains the system (a.k.a. optimizes it for a certain space of data points), not with the math equations or the hardware implementing them.
Responsibility lies with whoever aims the gun and squee
Re: (Score:2)
It's a very interesting watch.
There's a difference, though, between algorithms which use non-statistical mathematics, and those that use statistics.
Any time statistical analyses are involved, there are going to be times when they lead to a non-optimal answer. For instance, if an algorithm is based on data like "Steve likes classic rock songs 80% of the time", then that algorithm, when asked "will Steve like this particular classic rock song?", will get it right about 80% of the time. 20% of the time the alg
This is a little confusing to me. (Score:2)
This may be an unpopular opinion, but I'm not sure where bias even enters into it.
Isn't the point of any algorithm to make a choice? Like "this face matches sample A to a larger degree than it matches any other sample", in the case of a facial recognition algorithm? If so, then shouldn't the one and only criterion be "does this algorithm, as it is programmed, return the most correct answer with the highest probability and the lowest probability of false positives?"
Now, if the data/choices the algorithm uses/ma
Re: (Score:2)
Re: (Score:2)
Here's the code for it (Score:2)
function GetBias(time, bias)
{
    // The classic "bias" curve from game/graphics programming (Schlick's
    // approximation): remaps time in [0,1] so that GetBias(0.5, bias) == bias.
    return time / ((((1.0 / bias) - 2.0) * (1.0 - time)) + 1.0);
}
What a joke! I want a law that says (Score:3)
FYI, the US government rarely does a budget anymore; they are too busy doing useless political investigations and passing things that are just a waste of taxpayers' time and money.
One plus is the uselessness of government is bipartisan. The US government is now made up of Dems and GOPers who think their government job is to do the bidding of their parties.
Maybe the administration/congress/president and their staffs should not get any paychecks either until they DO THEIR MAIN JOB!!!! A budget!!
Just my 2 cents
Give us the output (Score:2)
I have a conjecture that bias is self-emergent (Score:3)
Re: (Score:2)