Government AI Businesses Software United States Technology

A New Bill Would Force Companies To Check Their Algorithms For Bias (theverge.com) 183

An anonymous reader quotes a report from The Verge: U.S. lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems -- like facial recognition or ad targeting algorithms -- for bias. The Algorithmic Accountability Act is sponsored by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), with a House equivalent sponsored by Rep. Yvette Clarke (D-NY). If passed, it would ask the Federal Trade Commission to create rules for evaluating "highly sensitive" automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.

The Algorithmic Accountability Act is aimed at major companies with access to large amounts of information. It would apply to companies that make over $50 million per year, hold information on at least 1 million people or devices, or primarily act as data brokers that buy and sell consumer data. These companies would have to evaluate a broad range of algorithms -- including anything that affects consumers' legal rights, attempts to predict and analyze their behavior, involves large amounts of sensitive data, or "systematically monitors a large, publicly accessible physical place." That would theoretically cover a huge swath of the tech economy, and if a report turns up major risks of discrimination, privacy problems, or other issues, the company is supposed to address them in a timely manner.

This discussion has been archived. No new comments can be posted.


  • What is bias? (Score:1, Interesting)

    by mveloso ( 325617 )

    What is bias? Does "bias" mean "not a white male?"

    In Asia, AI training data is almost exclusively Asian. That means results will skew Asian. Is that evidence of algorithmic bias? How would you go about determining that?

    • Re:What is bias? (Score:5, Insightful)

      by Erioll ( 229536 ) on Thursday April 11, 2019 @08:09AM (#58420584)
      When the facts say that men are on average stronger and taller than women, are the facts wrong? Some things are fact. Other things are uncomfortable facts. When facts conflict with beliefs (especially politically), which do you think will win?
      • Re: (Score:3, Insightful)

        by Anonymous Coward

Unfortunately, in today's world, political correctness wins - and you'll be banned for stating the facts.

      • by quenda ( 644621 )

        When the facts say that men are on average stronger and taller than women, are the facts wrong?

If an algorithm is hiring bricklayers, and all it knows is the gender of the applicant, it will pick men.
But feed the algorithm enough data (weight, strength, job history, test laying rate, etc.) and the gender bias will be removed. It will pick the best applicants, who just happen to be men. (OK, all the applicants were men, but you get the idea.)

Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.
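A minimal sketch of that claim on synthetic data (assuming scikit-learn is available; every number here is invented): when the protected attribute predicts the outcome only through a legitimate factor, adding that factor to the model drives the attribute's learned weight toward zero.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    gender = rng.integers(0, 2, n)             # 1 = male (synthetic)
    strength = rng.normal(gender * 1.0, 1.0)   # men stronger on average
    hired = (strength + rng.normal(0, 0.5, n) > 0.5).astype(int)  # depends on strength only

    gender_only = LogisticRegression().fit(gender.reshape(-1, 1), hired)
    both = LogisticRegression().fit(np.column_stack([gender, strength]), hired)

    print("gender coef, gender-only model:", gender_only.coef_[0][0])  # large
    print("gender coef, with strength:", both.coef_[0][0])             # near zero
    print("strength coef:", both.coef_[0][1])                          # large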

        • Re:What is bias? (Score:4, Insightful)

          by Junta ( 36770 ) on Thursday April 11, 2019 @09:48AM (#58421174)

          Interestingly enough in your example, even if you removed the actual gender from the data, you'd probably still have a 'biased' selection algorithm.

This came up in some other scandal where an algorithm *tried* not to be racist by excluding race and still ended up very biased in a law enforcement context. Note that the algorithm was seemingly bogus for other reasons, so it's not the best example, but even if it was working correctly it still probably would have been biased, and the bias would have been undeserved. Notably, they looked at arrest records of the parents as an indicator, and if a biased system caused the parents to be arrested, then this system would gladly extend that bias to a new generation.

Which all points to a key problem with playing 'whack-a-mole' at the various endpoints where bias manifests, when the bias problem is more systemic. If a field is unfairly excluding minorities or women, then you don't just wave a wand at the employers; you have to look at education and cultural upbringing, and accept that correcting a balance problem may be a generational project. Also make sure the people you think are being slighted actually want this kind of help, rather than elevating the state of things they would rather do.
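A toy simulation of that generational feedback (every rate here is invented): offending is identical across two groups, enforcement is biased, and a score keyed on "parent arrested" flags the next generation unequally anyway.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    offend = rng.random(n) < 0.10             # parents: same true rate in both groups
    group_b = rng.random(n) < 0.30            # minority-group flag
    p_arrest = np.where(group_b, 0.60, 0.30)  # biased enforcement: group B arrested twice as often
    parent_arrested = offend & (rng.random(n) < p_arrest)

    child_offend = rng.random(n) < 0.10       # children: again identical across groups
    flagged = parent_arrested                 # the feature the score keys on

    for mask, name in ((~group_b, "group A"), (group_b, "group B")):
        print(f"{name}: child offend rate {child_offend[mask].mean():.3f}, "
              f"flagged high-risk {flagged[mask].mean():.3f}")
    # both groups offend at ~10%, but group B children are flagged twice as often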

          • Re:What is bias? (Score:4, Insightful)

            by alvinrod ( 889928 ) on Thursday April 11, 2019 @12:10PM (#58422114)
            Were you referring to the case of some software that was being used by judges to determine sentencing based on likelihood of recidivism? I do recall that particular case using data like parents' arrest record, along with a lot of other questions that had a higher likelihood of occurring in individuals from poorer communities, which has a strong racial correlation in many places.

Assuming the algorithm is appropriately designed, it should only matter whether or not your parents' being incarcerated is a good predictor of recidivism. If it isn't, the data would show that a bunch of black people who were arrested had children who didn't commit crimes, and that there were several black parents who were never arrested whose children did commit crimes. I understand that someone could easily look at the data wrong and draw terrible conclusions (see the gender pay gap as one common example) based on bad reasoning, but that's another matter if we're assuming that the algorithm was properly designed.

I think you'd have a stronger claim with the argument that arresting parents over trivial matters or non-violent crimes eroded the family structure in many African American communities, which resulted in an increase in criminality. Studies that have explored this to that level of detail support such reasoning. It isn't that black people commit more crime because they are black; it's that poor people from single-parent families commit more crime, and there happen to be a disproportionate number of black people in that group. It's not the only factor, but we'd significantly reduce the problem by decriminalizing drugs.
            • by Junta ( 36770 )

              While I'm willing to believe the other parts of it, I wouldn't count out the problem of trying to fix biased systems by training algorithms. The data being fed into a machine learning strategy is going to just try its best to imitate the system it is being fed data about. Generally we lack straightforward means to 'adjust' such algorithms.

The situation you described is indeed what I was thinking of, and I did oversimplify, but the core remains: the algorithm cannot measure the absolute truth, only the historical record it is fed.

          • by quenda ( 644621 )

            Interestingly enough in your example, even if you removed the actual gender from the data, you'd probably still have a 'biased' selection algorithm.

            That is not "bias" by gender though, but bias by relevant correlates, which is fair.
If Google hires more white and Asian male engineers, and few black females, it is not bias by race and gender, but bias by the intelligence and skills of the applicant pool, which is affected not just by the skills but by the differing preferences of the demographics.

Any small difference in engineering aptitude between men and women is dwarfed by their different preferences and interests. No politically correct algorithm is going to change that.

            • by Junta ( 36770 )

There may be different preferences, but to what extent are those interests engineered by culture in a way that disadvantages people? We drill into kids' heads that they *should* want something, and coincidentally, as a culture, we drastically make life miserable for that aspiration... There are of course two fixes: changing culture to truly make all avenues equally appealing, or fixing the problem where we do not compensate highly valuable responsibilities.

The problem is part of the advertised purpose of using algorithms.

        • Re:What is bias? (Score:5, Insightful)

          by liquid_schwartz ( 530085 ) on Thursday April 11, 2019 @09:56AM (#58421236)

          Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.

If that were true then it wouldn't matter. However, either by nature or nurture, color matters. If it didn't, Asians wouldn't be given penalties and other groups bonuses in college admissions. If we want to argue that race isn't important, and I don't think it is important, then we have to do away with the diversity quest and let it play out.

        • Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.

Even if it matters, it might not be fair. Let's say that a group of the population is on average poorer. Given that a person's information is somewhat noisy, a good machine learning algorithm that can determine that a person is a member of a poor group will give that person a bias towards poor. In other words, the individual gets judged partly by the group average rather than by their own record.

          • by quenda ( 644621 )

a good machine learning algorithm that can determine that a person is a member of a poor group will give that person a bias towards poor.

            Yes, but only if you have insufficient data. If you have to choose between loaning your money to two people, and all you know is one is black and the other white, then of course you are much safer loaning it to the white guy!

            But if you have their tax records and bank statements for the last ten years, the skin colour becomes irrelevant!

            • But if you have their tax records and bank statements for the last ten years, the skin colour becomes irrelevant!

So how much information is enough? It might be more than people realize. A machine learning algorithm is based on correlation, so it will probably give the "bad" information some influence. And what happens to a person who doesn't have many records? I guess it's tough luck, since any priors start to have a bigger influence.
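That "thin file" effect is easy to see with a toy Beta-Binomial model (all numbers invented): with few personal records the estimate sits near the group average, and only a long personal history pulls it toward the individual.

    # Beta prior taken from the group, Binomial likelihood from personal records
    def posterior_mean(group_rate, prior_strength, successes, trials):
        a = prior_strength * group_rate + successes
        b = prior_strength * (1 - group_rate) + (trials - successes)
        return a / (a + b)

    # identical, spotless personal records; different group priors
    for group_rate in (0.95, 0.70):
        for trials in (2, 20, 200):
            est = posterior_mean(group_rate, prior_strength=50,
                                 successes=trials, trials=trials)
            print(f"group avg {group_rate:.2f}, {trials:>3} own records -> estimate {est:.3f}")
    # with 2 records the estimate is basically the group average;
    # the gap between groups shrinks as personal records accumulate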

        • Same for almost anything. Skin colour rarely matters, and given enough more direct data on factors that do matter, skin colour will have no predictive value, so the algorithm will ignore it.

Exactly the opposite will happen.
E.g. let's pick an example that won't rub POC the wrong way, because it disadvantages men: car insurance.

You let the AI know the gender - it will figure out that men are more likely to get into a crash.
If you stop feeding gender to the AI, it will figure out that people named John are more likely to be causing trouble than people named Julie.
You will have to play a long cat-and-mouse game, cutting off information sources for the AI, to the point of it becoming useless.

Just think about the whole "discrimination" issue.
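A sketch of that name-as-proxy effect (assuming scikit-learn; the names and crash rates are invented): gender is withheld, but a gendered first-name feature lets the model rebuild the gap on its own.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import OneHotEncoder

    rng = np.random.default_rng(2)
    n = 20_000
    male = rng.integers(0, 2, n)
    names = np.where(male == 1,
                     rng.choice(["John", "Mark"], n),
                     rng.choice(["Julie", "Anna"], n))
    crash = rng.random(n) < np.where(male == 1, 0.12, 0.06)  # invented crash rates

    X = OneHotEncoder().fit_transform(names.reshape(-1, 1))  # name is the only feature
    pred = LogisticRegression().fit(X, crash).predict_proba(X)[:, 1]

    print("mean predicted risk, men:   %.3f" % pred[male == 1].mean())  # ~0.12
    print("mean predicted risk, women: %.3f" % pred[male == 0].mean())  # ~0.06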

          • by quenda ( 644621 )

it disadvantages men: car insurance.

You let the AI know the gender - it will figure out that men are more likely to get into a crash.
If you stop feeding gender to the AI, it will figure out that people named John are more likely to be causing trouble than people named Julie.
You will have to play a long cat-and-mouse game, cutting off information sources for the AI, to the point of it becoming useless.

Don't cut off data, add more!! Yes, men crash more than women, but why? Given enough information about the individual, the algorithm will stop making guesses based on gender. Age, personality, aggression level, claims history, driving skill, risk-taking... Add in a GPS driving log for younger drivers, and each person will pay a premium based on their own individual risk, not on what is between their legs. Of course men and women are different, so the average premium will differ, but there will be no more guessing from gender alone.

      • Re: (Score:3, Informative)

        When facts conflict with beliefs (especially politically), which do you think will win?

        Especially when we are talking politically the answer is clearly beliefs. How else do you explain Trump and Brexit?

        • by zifn4b ( 1040588 )

          When facts conflict with beliefs (especially politically), which do you think will win?

          Especially when we are talking politically the answer is clearly beliefs. How else do you explain Trump and Brexit?

          You forgot liberal postmodernism and religion. It explains that too.

      • Re:What is bias? (Score:4, Insightful)

        by AmiMoJo ( 196126 ) on Thursday April 11, 2019 @09:05AM (#58420910) Homepage Journal

        Which has nothing to do with bias. Bias, in this context, is unwarranted assumptions. Men are on average stronger and taller than women, but a system which, say, ranks potential firefighter applicants using their gender as a factor instead of looking at their performance in the actual job is biased.

        • Re: What is bias? (Score:1, Insightful)

          by Anonymous Coward

          Thank you for finally admitting that 'affirmative action' is the purest form of bias.

        • by Anonymous Coward on Thursday April 11, 2019 @10:54AM (#58421592)

          Which has nothing to do with bias. Bias, in this context, is unwarranted assumptions. Men are on average stronger and taller than women, but a system which, say, ranks potential firefighter applicants using their gender as a factor instead of looking at their performance in the actual job is biased.

Sorry, but you haven't been listening to the left if you think the test for bias is about assumptions. Equal outcomes are very strongly being pushed as the measure of bias.

          Do you have more males than females going into trades? That must mean a bias against females exists in the trades.
          Do you have more Asians getting into STEM? That must mean a bias in favor of Asians in STEM.

Language is being redefined and weaponized to push people's agendas (I know, it always has been). Today we have the definitions of equality, racism, bias, violence, assault and others being changed to better fit agendas. Racism is the grossest example because of its importance and power. I grew up understanding racism to be discrimination based on race. Today though the push is on to redefine racism as a combination of discrimination AND power. This turns the convenient trick that 'whites' have all the power, so now only they can be racist, by definition.

          • by labnet ( 457441 )

Equality of outcome is a scourge that needs to be purged.
            Well said. Pity you posted AC.

            Left and Right should be in balance
            Left = Open to ideas, artistic, compassionate
            Right = Structured, Rules, Order
            Marxist Left = Wanting control of your ideas through government. Power to remove any threat to their ideology that all people (except them as the rule makers) are equal.

Thus they don't think it is hypocritical to use discrimination (e.g. minorities having lower entry standards) to achieve their ideology of equality of outcome.

Australia's best rugby footballer made a personal Insta post saying 'drunks, homosexuals, adulterers, liars, fornicators, thieves, atheists, idolaters, hell awaits you, repent' and has now been sacked from rugby

              Apart from thieves there's probably one of each of those in the team.

        • Which has nothing to do with bias. Bias, in this context, is unwarranted assumptions.

          Assumptions like a specific dopey ass kid is more likely to get their vehicles wrecked than a group of older squares?

The whole point of many of these systems is rendering prejudicial assumptions about future behavior based on limited knowledge. The name of the game, in fact the very reason these systems exist at all, is inherently prejudicial.

          What people interested in these things really seem to be seeking is curtailing rendering of judgment in the first place.

          Men are on average stronger and taller than women, but a system which, say, ranks potential firefighter applicants using their gender as a factor instead of looking at their performance in the actual job is biased.

          Who seriously believes anyone cares about mo

          • by AmiMoJo ( 196126 )

            Assumptions like a specific dopey ass kid is more likely to get their vehicles wrecked than a group of older squares?

            That's another interesting example.

            In the UK women used to get cheaper car insurance. That turned out to be illegal under gender equality rules. Insurance companies had to stop using gender as a risk factor. Women's car insurance went up, men's went down, in some cases a lot for younger drivers.

I see a lot of complaints about age discrimination in the tech jobs market. I wonder if older tech workers would accept removing age as a risk factor for insurance, pushing their premiums up, in exchange for also removing it from hiring decisions.

Diana Moon Glampers - the United States Handicapper General [archive.org] in 2081.

      • When the facts say that men are on average ... taller than women

        #notallmen

    • Definition (Score:4, Insightful)

      by JBMcB ( 73720 ) on Thursday April 11, 2019 @08:20AM (#58420656)

      Bias is favoring one thing over another. Which is what you want certain algorithms to do. I want Youtube to find stuff I like. I want Google to find pages that are relevant to me.

      Not sure how you are going to tease out the "good" bias from the "bad" bias, though. To extend your example, if 90% of the people in Hong Kong looking for a famous concert pianist are trying to find Lang Lang, who is hugely popular there, he's going to come up pretty fast when looking for concert pianists in general, which is what you want. It means the algorithm is being biased against Helene Grimaud, which is fine, because she isn't what most people are looking for in Hong Kong. That doesn't mean she doesn't come up at all, it just means she's ranked lower in the search results.
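A sketch of that ranking trade-off (the scores and weights are invented): blending query relevance with regional popularity puts Lang Lang first for a Hong Kong user while keeping Helene Grimaud in the results.

    # (relevance to "concert pianist", popularity in Hong Kong) -- made-up scores
    pianists = {"Lang Lang": (0.9, 0.95), "Helene Grimaud": (0.9, 0.10)}
    rank = sorted(pianists, key=lambda p: 0.5 * pianists[p][0] + 0.5 * pianists[p][1],
                  reverse=True)
    print(rank)  # ['Lang Lang', 'Helene Grimaud'] -- ranked lower, not absent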

      • by AmiMoJo ( 196126 )

        Bias is favoring one thing over another. Which is what you want certain algorithms to do. I want Youtube to find stuff I like. I want Google to find pages that are relevant to me.

        How about a dating site that decided you had a bias in favour of tall partners, or light skinned partners? There is no simple answer to this.

        Some people consider things like hair colour preference to be a matter of personal taste and completely fine to filter by. Other people complain that they are short and don't get dates and it's unfair.

        Out in the real world people may select potential partners based on those preferences. But often relationships start without any deliberate selection, through friends or

        • by JBMcB ( 73720 )

          How about a dating site that decided you had a bias in favour of tall partners, or light skinned partners?

          If their algorithm has decided this and it doesn't reflect your actual preference, then it's a flaw in their algorithm. I'm pretty sure there is incentive for them to fix it, if that provides better results.

          I think, if the algorithm is reflecting what people actually want, then there is no problem. I think people are conflating machine learning with "programmed-in" biases.

          • by AmiMoJo ( 196126 )

            Are we okay with filtering people based on those preferences though?

            Say you are a significantly below average height male. Conventional standards of attractiveness in many western countries favour tall men. In fact short men are often the butt of jokes. Short men are going to find it hard to meet someone on a dating site if it filters them out based on perceived preferences, or allows the user to set a minimum height when searching.

Some people argue that is just individual preference and it's fine. Sucks, but that's life.

            • Are we okay with filtering people based on those preferences though?

For a dating site, yes, absolutely. You wouldn't make the same argument against allowing people to filter based on sex, would you? Wouldn't bothering to list sex as an attribute on a person's profile just reinforce standards of conventional attractiveness too?

Making a dating site that's less useful at helping people find attractive (physically or in a broader sense) partners just means that people will go to a different site. If you don't like it, go make a pure personality dating site. I have a feeling it will not do well.

              • by AmiMoJo ( 196126 )

                Personally I think gender selection for dating is fine, but with certain limits. It should be the gender the person puts down, nothing like "biological sex at birth" or "has penis". And it should include provision for non-binary.

                • If someone doesn't want to date people with a penis, then it would save a lot of time and grief if they knew that right away.

                  • by AmiMoJo ( 196126 )

Actually yeah, I take it back. Trans people are in enough danger as it is; best to let them decide whether to declare as trans or not, and let people filter on that if they are so inclined.

                • by Wulf2k ( 4703573 )

                  For dating, if a discrepancy between declared and apparent gender is going to be an issue, wouldn't both sides want to get that out of the way immediately?

                • So why is it okay to record and filter based on gender, but not height?
                  • by AmiMoJo ( 196126 )

                    There are some good reasons for filtering by gender, some justification for it. Height though seems to be just a thing that conventional beauty considers to be a factor for arbitrary reasons.

            • by JBMcB ( 73720 )

              Are we okay with filtering people based on those preferences though?

I'm not inclined to impose my values on what criteria people use when they are choosing whom to date. I think it should be up to the individual. There are already dating sites that cater to certain overall preferences (jdate/christian date/muslima/grindr/etc...). If you care about that kind of thing then great. If not, go to a dating site that doesn't. I'm sure they are out there.

As for unintended bias in algorithms, I'm sure it could happen, but again, I think it's a bug that most dating sites would want to fix.

        • Re:Definition (Score:5, Insightful)

          by religionofpeas ( 4511805 ) on Thursday April 11, 2019 @10:19AM (#58421372)

          How about a dating site that decided you had a bias in favour of tall partners, or light skinned partners? There is no simple answer to this.

          And how about a dating site that figures out you had a bias for a certain gender ?

          • by AmiMoJo ( 196126 )

That gets to the heart of the question. To what extent are preferences something a person has control over, something dependent on social norms and influences, or something inherent? And to what extent does reinforcing those preferences contribute to systemic biases, if any?

Does it matter what preferences I have control over? Why can't I just find a date that matches my preferences, whether they are inherent or created by social norms?

              • by AmiMoJo ( 196126 )

It's more a systemic issue, as in: if the dating website never recommends any X people, it's reinforcing that preference. If you see X people on the list, it at least presents the opportunity to broaden your options a bit.

                It's probably a good thing for the dating site too. There are only so many people who match very specific criteria near you, and making some less perfect match suggestions increases the probability of you finding someone you end up actually liking.

      • Bias is favoring one thing over another. Which is what you want certain algorithms to do. I want Youtube to find stuff I like. I want Google to find pages that are relevant to me.

        Not sure how you are going to tease out the "good" bias from the "bad" bias, though.

Right. The problem is that the word "bias" is overloaded. Bias can mean prejudice. Bias can also mean a systematic, non-random distortion of a statistic. Algorithms are completely incapable of the former, as they have no sentience. It's the former that the bill wants to target, but the bill punts on the practically impossible task of defining prejudice by asking a group of humans to do that at a later time. This defining of prejudice by prejudiced humans can never lead to the eradication of prejudice, but only to its redefinition.

        • Re:Definition (Score:5, Insightful)

          by green1 ( 322787 ) on Thursday April 11, 2019 @10:03AM (#58421274)
Bias is easily defined: anything that doesn't match the subjective opinion of the government of the day.

These days that means a hiring algorithm had better hire at least 50% women, and every ethnicity in proportion to its percentage representation in the population. What the algorithm is not allowed to do is take into account any factors that might skew that, such as the applicant pool being predominantly one group, or the qualifications of individual applicants.
    • Forget about 'what is biased' and consider 'what is large'.

    • by jma05 ( 897351 )

      Here is a book on that.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Well, as Asians they're exempt from this kind of thing. It's only western white folk who need to worry about diversity, bias, blah blah.

    • by Bengie ( 1121981 )
      If it's not true random, it has a bias. They really need to define which biases.
    • This is an attempt to enforce equality of outcome

  • means what you think it means.

    Dear lords as a cybersecurity/IA professional reading this thing makes my head hurt from the buzzword bingo and absolutely worthless definitions included within.

    "taking into account the novelty of the technology used" - what? WTF does that mean? Is that a legal phrase or the verbal diarrhea of a staffer that thinks this sounds cool but is worthless from a legislative, and more importantly judicial, perspective? The corporate lawyers are going to run rings around this B.S

  • by Anonymous Coward

    You can't legislate them out of existence.

  • Excellent idea (Score:3, Insightful)

    by onyxruby ( 118189 ) <onyxruby@ c o m c a s t . net> on Thursday April 11, 2019 @08:20AM (#58420658)

    Let's get some hard data showing the bias that is present in censorship. The US is more conservative than liberal:

    https://news.gallup.com/poll/2... [gallup.com]

    Without algorithmic bias online media would lean conservative for the simple reason that the US has more conservatives than liberals. Yet somehow online platforms (Reddit, Facebook etc.) tilt overwhelmingly liberal.

    This can only be a result of bias that has been put into algorithms and sanctioned.

    • Comment removed (Score:4, Informative)

      by account_deleted ( 4530225 ) on Thursday April 11, 2019 @08:23AM (#58420666)
      Comment removed based on user account deletion
    • by msauve ( 701917 )
      You are, perhaps incorrectly, assuming that all groups are equally represented on those platforms. I'll submit (without evidence) that overall, conservatives tend to be older and less likely to join and participate in those social media platforms. The algorithms act upon the data they receive from platform members and participants.
      • I think your premise would have likely been the case in the early days of the Internet. When I first started working with the Internet you didn't have very many people online who were over 40. I certainly remember old people who were proud of their lack of technical skills.

        That being said, in the years since most of the US adult population has joined the Internet with 89% of adults in the US using it.

        https://www.statista.com/stati... [statista.com]

In the old days algorithms were indeed influenced by the data that they received.

    • You'd probably want to break it down by age. Younger people are generally going to skew liberal in almost any population and regardless of where the population as a whole lies along the political spectrum. That's just the tendency of the next generation. The technology websites (which are most heavily used by the younger demographics) are going to skew that way as well, and create a bit of a snowball effect as other people are attracted to those places that have similar users.

There's nothing stopping conservatives from building platforms of their own.
  • by Hasaf ( 3744357 ) on Thursday April 11, 2019 @08:25AM (#58420672)

    I am currently reading "The Sum of Small Things." In the first chapter, the idea of different racial groups having different demand levels is shown through the data.

    There are valid reasons for that difference in demand. However, to pretend it isn't there is to try to live in denial.

In the case discussed in the book, the increased demand among Blacks for conspicuous-consumption goods, ceteris paribus, is based on the belief of many Blacks that it is necessary, though often not as the result of conscious decision-making, to carry visible markers of the middle class, because membership in it is not assumed. Now, we can reject this conclusion. However, rejecting the discussion because we reject the data gets us no closer to truth; instead, it moves us away from it.

  • Comment removed based on user account deletion
    • by Anonymous Coward

      Q: What do you call a Black test-tube baby.

      A: Janitor In A Drum.

  • What this bill proposes is to replace the real, actual bias with another, artificial bias that is more desirable / politically correct / whatever.

    Not saying this is a bad thing - combating centuries-old entrenched preconceived ideas is probably a good thing more often than not - but please stop saying we're *removing* bias.

  • Detects real crime by real people in inner city areas.
    Data tracks to decades of FBI stats.
It's not bias to have a computer create a database of crime.
    To detect the use of shared/fake ID by illegal immigrants.
This strikes me as another attempt by clueless pols to legislate fairness. Not that legislating fairness is wrong; in this case, however, it is more "Well golly, tech companies can do anything, we'll make them do this!!!" Ya, and they wouldn't find a way to game an unworkable mandate, eh?

Giant companies like Microsoft, Twitter, Facebook, Google Search and YouTube have all been claiming they can't be biased because moderation, trends, and recommendations are not done manually but with automated systems. Finally Congress is learning that just being an automated algorithm doesn't make something impartial. Look at Microsoft's Twitter bot that was made racist after just one day because its algorithm responded to inputs. Not just that: these algorithms often take into account flags by users.
Data science uses training data that often contains factors like race, sex, income, and education level that, when included, cause an algorithm to learn to moderate or recommend people differently based on their assigned group.

      But even if you leave out factors like race and sex, it is possible that the machine learning application figures it out for itself, and creates an internal pattern that happens to produce a good match to race or sex, or any other factor, and then discriminates based on that internal pattern.

      If an investigator then looks at the result, it appears that the AI has certain biases.
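One way an investigator might check for that (a sketch on synthetic data, assuming scikit-learn; every feature and rate here is invented): train a second "auditor" model to predict the protected attribute from the first model's scores. If it can do better than chance, the scores encode a proxy.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 20_000
    race = rng.integers(0, 2, n)
    zipcode = race + rng.normal(0, 0.7, n)   # stand-in feature correlated with race
    income = rng.normal(3 - race, 1.0)       # unequal base rates (invented)
    approved = (income + rng.normal(0, 0.5, n) > 2.5).astype(int)

    X = np.column_stack([zipcode, income])   # race itself is excluded from training
    scores = LogisticRegression().fit(X, approved).predict_proba(X)[:, 1].reshape(-1, 1)

    auditor = LogisticRegression().fit(scores, race)
    auc = roc_auc_score(race, auditor.predict_proba(scores)[:, 1])
    print("AUC of race-from-score auditor:", auc)  # well above 0.5 => proxy present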

It would be nearly impossible with AI. With current learning AI, while it can learn, we often don't know exactly how it came to its conclusions. One would have to spend considerable time and effort designing tests to determine bias, as simply checking code or algorithms wouldn't be possible.
    • One would have to spend considerable time and effort designing tests to determine bias

Simple. If the result is not 50% male, 50% female, it's biased and needs to be corrected.
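Sarcasm aside, "demographic parity" is a real formal criterion, and regulators already use a version of it. A minimal check (the data is made up; the 0.8 threshold is borrowed from the EEOC "four-fifths" rule for adverse impact):

    def parity_ratio(selected, group):
        rates = {g: sum(s for s, gg in zip(selected, group) if gg == g) /
                    sum(1 for gg in group if gg == g)
                 for g in set(group)}
        lo, hi = min(rates.values()), max(rates.values())
        return rates, (lo / hi if hi else 1.0)

    selected = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical hiring decisions
    group    = ["m", "m", "m", "m", "f", "f", "f", "f"]
    rates, ratio = parity_ratio(selected, group)
    print(rates, "ratio:", round(ratio, 3))      # a ratio below 0.8 would be flagged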

  • by grasshoppa ( 657393 ) on Thursday April 11, 2019 @08:57AM (#58420872) Homepage

If we're using neutral data as an input and the system comes to its own conclusions... doesn't that say something about the data set? Shouldn't we try to understand why the algorithm came to that conclusion instead of immediately jumping to "check your privilege"?

    • by AmiMoJo ( 196126 )

Shouldn't we try to understand why the algorithm came to that conclusion instead of immediately jumping to "check your privilege"?

      Isn't that what we are doing? Trying to understand if there are flaws in the training data or the way the training was administered?

The reality is that it's extremely difficult to provide completely unbiased training data. Worse still, it's often an on-going process; e.g. if you have a data set for deciding sentencing, it will need to keep evolving over time to reflect new laws and new circumstances that didn't exist before. Just like we need to keep a constant eye on human bias in the system, the same is going to be true for machine bias.

      • it's extremely difficult to provide completely unbiased training data.

        The problem is that a lot of these biases are actually real. In order to get unbiased data, you have to artificially shape it by applying an opposite bias.
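For what it's worth, "applying an opposite bias" has a standard form in the fairness literature: reweigh training examples so that group membership and outcome become statistically independent in the weighted data (Kamiran & Calders-style reweighing). A minimal sketch with made-up labels:

    import numpy as np

    def reweigh(group, label):
        group, label = np.asarray(group), np.asarray(label)
        w = np.empty(len(label))
        for g in np.unique(group):
            for y in np.unique(label):
                mask = (group == g) & (label == y)
                expected = (group == g).mean() * (label == y).mean()  # if independent
                w[mask] = expected / mask.mean()
        return w  # pass as sample_weight to any standard learner

    print(reweigh(["a", "a", "a", "b", "b", "b"], [1, 1, 0, 1, 0, 0]))
    # over-represented (group, label) pairs get weight < 1, under-represented > 1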

        • by AmiMoJo ( 196126 )

          "Group X commits more crimes" may be a statistical fact, but should it influence sentencing?

          Something being "real" in some sense isn't always the important thing.

    • by dirk ( 87083 )

      This is about deciding if the data used is neutral and if the system is neutral. A system trained on data that is not neutral will not be neutral. For example, if an algorithm is set up to address policing because it is found the police have been biased, but it is trained using previous arrest records from the police, it would be biased as well.

  • ... is it biased?

    E.g. an algorithm that is supposed to recognize humans will probably do worse for humans in a Mickey Mouse costume. Should it now be trained to recognize those dressed up as MM equally well, although it is a very rare case? Or should it be trained so it deals best with those situations that it will probably encounter more often, and that are thus more relevant.

I.e. should the algorithm be trained with a representative sample (for the country it is to be used in), or should every ethnicity, gender, etc. be represented equally?

  • The vast majority of users serviced by AI systems have expectations by which they judge the service. When humans serve those users, good customer service usually dictates meeting those expectations. The expectations are largely driven by the microcultural background of the customer. At other times, they are expressed in the phrasing of the question, especially in context with the microcultural background. When a human has a great sense of a customer's expectations and utilizes it to meet those expectations

Watch Weapons of Math Destruction by Cathy O'Neil [youtube.com] to see how algorithms can have bias, and how the results can be used in various ways. If this law addresses some of that, then it is a positive change.

    • by Anonymous Coward

Algorithms have no intrinsic bias; they are just a huge set of algebraic equations or, depending how you mean it, the software implementation of said equations. Any bias you have in the results comes from the training dataset. Any "expert" who rants about how "algorithms have bias" is clueless.

Responsibility lies with whoever trains the system (a.k.a. optimizes for a certain space of data points), not with the math equations or the hardware implementing them.
Responsibility lies with whoever aims the gun and squeezes the trigger, not with the gun.

    • by flippy ( 62353 )

      It's a very interesting watch.

      There's a difference, though, between algorithms which use non-statistical mathematics, and those that use statistics.

Any time statistical analyses are involved, there are going to be times when it leads to a non-optimal answer. For instance, if an algorithm is based on data like "Steve likes classic rock songs 80% of the time", then that algorithm, when asked "will Steve like this particular classic rock song?", will get it right about 80% of the time. 20% of the time the algorithm will get it wrong.
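A quick check of those numbers (illustrative only): always answering "like" matches Steve 80% of the time, while a guess that is random but calibrated to the 80% base rate does noticeably worse.

    p = 0.8
    print("always predict 'like':", p)                            # right 80% of the time
    print("calibrated random guess:", p * p + (1 - p) * (1 - p))  # right only 68% of the time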

  • This may be an unpopular opinion, but I'm not sure where bias even enters into it.

Isn't the point of any algorithm to make a choice? Like "this face matches sample A to a larger degree than it matches any other sample", in the case of a facial recognition algorithm? If so, then shouldn't the one and only criterion be "does this algorithm, as it is programmed, return the most correct answer with the highest probability and lowest probability of false positives?"

    Now, if the data/choices the algorithm uses/ma

It's very easy for people to misinterpret statistical results. In your example, Asians may be 80% less likely to buy from a store, but if Asians are also 90% less likely to enter the store in the first place, then a given Asian who actually enters the store is twice as likely to buy as other people. So ignoring Asian customers would be an extremely bad business strategy.
      • by flippy ( 62353 )
        Agreed 100%. It's important for people in general to understand that any statistical analysis shouldn't be used to decide how to treat/deal with a given individual.
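The arithmetic behind the grandparent's store example, with invented per-person base rates:

    enter_other, buy_other = 0.50, 0.10    # others: entry and purchase rates
    enter_asian = 0.10 * enter_other       # "90% less likely to enter"
    buy_asian   = 0.20 * buy_other         # "80% less likely to buy"

    print("buy rate given entry, others:", buy_other / enter_other)  # 0.20
    print("buy rate given entry, Asians:", buy_asian / enter_asian)  # 0.40 -- twice as likely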
  • function GetBias(time, bias)
    {
        // Schlick's "bias" curve: remaps time in [0,1]; bias = 0.5 leaves it unchanged
        return time / (((1.0 / bias) - 2.0) * (1.0 - time) + 1.0);
    }

  • by oldgraybeard ( 2939809 ) on Thursday April 11, 2019 @11:18AM (#58421792)
The Congress of the United States should not be allowed to work on any law or regulation until the 12 appropriation bills that make up the budget of the United States of America are passed by Congress and signed by the president.
FYI, the US government rarely does a budget anymore; they are too busy doing useless political investigations and passing things that are just a waste of taxpayers' time and money.

One plus is that the uselessness of government is bipartisan. The US government is now made up of DEMs and GOPers who think their government job is to do the bidding of their parties.
Maybe the administration/congress/president and their staffs should not get any paychecks either until they DO THEIR MAIN JOB!!!! A budget!!

    Just my 2 cents ;)
I think the senators should just cut to the chase and give us the output that the algorithm should produce. Problem solved.
I'll bet that even if you completely exclude specific criteria from consideration because of some undesired bias around that information, the resulting decisions may still show bias.

