
New Toronto Declaration Calls On Algorithms To Respect Human Rights

A coalition of human rights and technology groups released a new declaration on machine learning standards, calling on both governments and tech companies to ensure that algorithms respect basic principles of equality and non-discrimination. The Verge reports: Called The Toronto Declaration, the document focuses on the obligation to prevent machine learning systems from discriminating, and in some cases violating, existing human rights law. The declaration was announced as part of the RightsCon conference, an annual gathering of digital and human rights groups. "We must keep our focus on how these technologies will affect individual human beings and human rights," the preamble reads. "In a world of machine learning systems, who will bear accountability for harming human rights?" The declaration has already been signed by Amnesty International, Access Now, Human Rights Watch, and the Wikimedia Foundation. More signatories are expected in the weeks to come.

Beyond general non-discrimination practices, the declaration focuses on the individual right to remedy when algorithmic discrimination does occur. "This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects," the declaration suggests, "[and making decisions] subject to accessible and effective appeal and judicial review."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Amnesty International and Human Rights Watch both discriminate against humans born in Western societies in favor of dictatorships. They hold them to different standards.

  • Summary is wrong... (Score:5, Interesting)

    by RyanFenton ( 230700 ) on Monday May 21, 2018 @03:20AM (#56645758)

    Yeah - I looked at the article. There's no mention of algorithms. Algorithms are too simple for human rights to apply to them in almost all cases.

    The summary is wrong - these folks are making an argument more about big data systems that let their data skew in ways that may end up with unethical results if used blindly.

    And that's a fair point - it's also a point made in most Computer Ethics classes for decades now, as part of most computer science degree paths.

    Ryan Fenton

    • by RyanFenton ( 230700 ) on Monday May 21, 2018 @03:24AM (#56645770)

      Slight clarification - it's the actual declaration that makes no mention of algorithms. The article writer linking to the declaration does erroneously mention algorithms for some reason. That seems to happen a lot in science journalism; journalists aren't paid enough to use accurate terms, I guess.

      Ryan Fenton

      • Lady Ada talks briefly with Google's James McLurkin on fairness in AI training [youtu.be]:

        Lady Ada: "We're engineers, but historically when there was a photo of an engineer..."

        James McLurkin: "It didn't look like either of us."

        Lady Ada: "I think this is something that we think about because, what is an engineer semantically? We know it isn't by definition a 35 year old white male who lives in San Francisco."

        James McLurkin: "But if you look at the magazine covers, if your dataset of engineers is magazine covers..."

        • It's an interesting point. The last thing we want to do is institutionalize our biases.

          The typical engineer in the US is male, white, and middle aged, just like the typical giraffe is spotted, has four legs and is about 18 ft tall. Those are just facts. Picking typical representatives of a class to represent the class is not an "institutionalized bias".

          Putting Limor Fried on the cover of a magazine and pretending that she is representative of engineers in general is objectively false, because she is an outlier.

          • by Layzej ( 1976930 )

            Picking typical representatives of a class to represent the class is not an "institutionalized bias".

            Genitalia and colour are not actually part of the definition of an engineer. If you insinuate those factors into your classification algorithm, then you have exactly an institutionalized bias. It becomes a real problem if you then use that classification algorithm to filter who gets admitted into the college of engineering. The algorithm then reinforces its own bias.
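
One way such factors get "insinuated" without ever appearing as explicit inputs is through correlated features. A minimal sketch in Python, with entirely hypothetical data and variable names (not from the declaration or any real admissions system):

```python
# Hypothetical illustration of an institutionalized proxy bias: "group" is the
# protected attribute and is never given to the decision rule, but a correlated
# feature ("neighborhood") carries much of the same signal.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                     # protected attribute (withheld)
neighborhood = group + rng.normal(0.0, 0.5, n)    # proxy correlated with group
income = rng.normal(50.0, 10.0, n)                # feature independent of group

score = 0.5 * income + 10.0 * neighborhood        # decision uses only "neutral" inputs
admitted = score > 30.0

for g in (0, 1):
    print(f"admission rate for group {g}: {admitted[group == g].mean():.2f}")
```

Even though the rule never sees the protected attribute, the two groups end up with very different admission rates, which is the self-reinforcing pattern described above.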

            • How do you get from the observation that the typical engineer in the US is white and male to "we filter applications based on race and gender"?

              Big data algorithms wouldn't use race and gender to filter applications because race and gender are not informative compared to grades, degrees, and other accomplishments. But a consequence of not using race and gender to filter applications is precisely that the typical engineer in the US ends up white and male.

              • by Layzej ( 1976930 )

                How do you get from the observation that the typical engineer in the US is white and male to "we filter applications based on race and gender"?

                You put those words between quotes, but it isn't a quote. Those words are your own. What I said was:

                "Genitalia and colour are not actually part of the definition of an engineer. If you insinuate those factors into your classification algorithm then you have exactly an institutionalized bias. It becomes a real problem If you then use that classification algorithm to filter who gets admitted into the college of engineering. The algorithm then reinforces its own bias."

                • It's called a paraphrase. In fact, your actual words were even worse, so let's use them:

                  How do you get from the observation that the typical engineer in the US is white and male to "use that classification algorithm to filter who gets admitted into the college of engineering"?

                  So, stop avoiding the issue and answer the question.

                  • by Layzej ( 1976930 )

                    How do you get from the observation that the typical engineer in the US is white and male to "use that classification algorithm to filter who gets admitted into the college of engineering"?

                    So, stop avoiding the issue and answer the question.

                     I start with the observation that genitalia and colour are not actually part of the definition of an engineer. I then note that if you insinuate those factors into your classification algorithm, then you have exactly an institutionalized bias. I conclude that it becomes a real problem if you then use that classification algorithm to filter who gets admitted into the college of engineering. The algorithm then reinforces its own bias.

                     • I start with the observation that genitalia and colour are not actually part of the definition of an engineer. I then note that if you insinuate those factors into your classification algorithm, then you have exactly an institutionalized bias. I conclude that it becomes a real problem if you then use that classification algorithm to filter who gets admitted into the college of engineering.

                      Yes, and that conclusion makes no sense. A classification algorithm for admission to an engineering school doesn't classify people based on whether they "are engineers", they classify people based on whether they are likely to succeed as engineers, a completely different question. Furthermore, people don't "insinuate those factors" into classification algorithms; factors are used only if they are actually predictive.

                    • by Layzej ( 1976930 )

                      Yes, and that conclusion makes no sense. A classification algorithm for admission to an engineering school doesn't classify people based on whether they "are engineers", they classify people based on whether they are likely to succeed as engineers, a completely different question.

                      Semantics. Swap the wording if you like. The result is the same.

                      Furthermore, people don't "insinuate those factors" into classification algorithms; factors are used only if they are actually predictive.

                      Your certainty is unjustified. Deep learning models are considered to be “black-boxes”. Black box models lack transparency. It is often impossible to understand how and why a result was achieved. Likely an AI would be given a large dataset -- certainly more than SAT score alone. Can you be certain that nothing in that dataset would identify race or gender? Even course history could betray this data.

                      That would happen if admissions algorithms classify students according to "is the person an engineer", but that's not what they do.

                      It is not inconceivable th
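
One quick way to test the worry that "even course history could betray this data" is a leakage check: withhold the protected column, then see whether the remaining features can predict it anyway. A sketch with hypothetical features (not any real admissions dataset):

```python
# Leakage check sketch (hypothetical data): can the withheld protected
# attribute be recovered from the features the model is allowed to see?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5_000
protected = rng.integers(0, 2, n)
course_history = protected + rng.normal(0.0, 0.7, n)   # correlates with the attribute
sat_score = rng.normal(1200.0, 150.0, n)               # does not
X = np.column_stack([course_history, sat_score])

clf = make_pipeline(StandardScaler(), LogisticRegression())
acc = cross_val_score(clf, X, protected, cv=5).mean()
print(f"protected attribute recoverable with ~{acc:.0%} accuracy (0.5 = chance)")
```

If accuracy is well above chance, a "blind" model trained on those features can still act on the protected attribute indirectly.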

                    • Semantics. Swap the wording if you like. The result is the same.

                      You don't know what you're talking about.

                      Again you have attributed a quote to me that I have never written nor uttered. You have yet to respond without trying to put words in my mouth.

                      I merely quoted a response, I didn't attribute the quote to you.

                      This is a very disingenuous tactic

                      Oh, spare me your self-righteous indignation. You're an ignorant prick, that's all. Now go to hell.

                    • by Layzej ( 1976930 )

                      You don't know what you're talking about.

                      Not terribly convincing.

          • by epine ( 68316 )

            The typical engineer in the US is male, white, and middle aged, just like the typical giraffe is spotted, has four legs and is about 18 ft tall. Those are just facts. Picking typical representatives of a class to represent the class is not an "institutionalized bias".

            A giraffe is hardly a typical exemplar of an exemplar, being the most unusual (and specialized) life form most children learn to recognize in their first year of speech (though perhaps some competition here from the kangaroo).

            The giraffe is pra

    • by Dog-Cow ( 21281 )

      So it's a declaration which states that "big data" isn't an excuse to break existing laws? Somehow, I am not seeing the purpose of this formal declaration.

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        The purpose is that the current government doesn't like how some statistics (which happen to reflect reality...) are being used against it to point out that "certain groups" should be more closely scrutinized, and that certain groups that "need representation" and have funds allocated to them are statistical minorities.

    • Re: (Score:3, Insightful)

      by AmiMoJo ( 196126 )

      It's important to have declarations like this. "It's against my ethics" isn't quite as useful as "it's against the Toronto Declaration" when refusing your boss.

      • Why not just "it's against the law" ? We already have laws against discrimination.

        • Sure we do. And we have judges using the COMPAS Recidivism Algorithm [epic.org] to help determine sentences despite evidence pointing toward a bias against black defendants.

          The algorithm and data set are proprietary, though, so nobody gets to examine them. Since it can't be absolutely proven that the algorithm is biased, judges are free to continue using COMPAS to justify harsher sentences to black people without anyone being able to claim racism.

          Feel free to point out which specific laws make this impossible. Ther
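
Since the algorithm and training data are closed, outside audits have to work from outcomes. A sketch, with made-up numbers rather than real COMPAS data, of the kind of group-wise false-positive comparison such audits run:

```python
# Outcome-based audit sketch (all numbers invented; COMPAS is proprietary):
# compare false positive rates -- people flagged "high risk" who did not
# in fact reoffend -- across groups.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
group = rng.integers(0, 2, n)
reoffended = rng.random(n) < 0.35
# Suppose the tool flags one group more aggressively at the same true risk:
flagged = rng.random(n) < np.where(group == 1, 0.55, 0.35)

for g in (0, 1):
    did_not_reoffend = (group == g) & ~reoffended
    fpr = flagged[did_not_reoffend].mean()
    print(f"group {g} false positive rate: {fpr:.2f}")
```

A large gap in false positive rates at equal actual recidivism is the statistical signal the published critiques point to.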

          • by Anonymous Coward

            COMPAS does not know the color of people's skin. It knows that people who have a particular history with the justice system are likely to reoffend. A history of priors should not be considered a racial bias thing.

            • Do you have access to the proprietary algorithm and data that the company has never given anyone else? Otherwise, you're making an assumption that you have no evidence to support.

              • We don't know the algorithm and learning data, but we do have the questionnaire about the defendant, which has no question about skin color or race. It's in your own link above.

                • by sjames ( 1099 )

                  Have you carefully studied the questions to make sure race cannot be inferred to a high degree of accuracy based on the answers?

                  For a more neutral example, males are more likely to commit criminal assault and tend to have larger feet. So do we let the algorithm add a few months to the sentence for shoplifting because the defendant wears a size 12?

                  And since in for a penny already, people who go to prison are a bad credit risk, and since people who wear a size 12 spend more time in prison than people wearing

              • Do you have access to the proprietary algorithm and data that the company has never given anyone else? Otherwise, you're making an assumption that you have no evidence to support.

                That's a two-way street, hombre.

            • COMPAS does not know the color of people's skin.

              It knows their zip code, which in much of America is a pretty good proxy for race.

          • The algorithm and data set are proprietary, though, so nobody gets to examine them.

            And exactly how would the Toronto Declaration help in that case ?

            • How do you solve a problem if the first step doesn't solve it?

              Most of us take the first step and then keep going. You seem to be advocating giving up if the first step didn't solve the problem.

              • You're not answering the question, so I'll give you a hint: it would not help at all.

                If the code and data are proprietary, you have no way of knowing whether the Toronto Declaration was followed. And even if you have a good reason to assume it was not, there's no legal recourse anyway, because it's all voluntary.

                • by sjames ( 1099 )

                  Since I don't have to respect your opinion BY LAW, why did you bother giving it? Answer that and your question is also answered.

            • by sjames ( 1099 )

              It at least encourages them to either demand to know how the system is making its decisions or quit using it.

          • A secret algorithm used to determine the outcome of legal questions is tantamount to secret law. It is ipso facto illegitimate.

      • by Dog-Cow ( 21281 )

        If it's not illegal, then why is "it's against someone else's ethics" a better argument than "it's against my ethics"?

        • by Hognoxious ( 631665 ) on Monday May 21, 2018 @05:28AM (#56646004) Homepage Journal

          It isn't, he's spouting shit as usual.

          Of course the primary motivation behind the declaration - and the SJWs' support for it - is fear that algorithms might come up with results that some people don't like.

          • You can create an accounting algorithm that breaks tax laws too. Why wouldn't this apply to non-discrimination laws?
        • by AmiMoJo ( 196126 )

          Because they are the ethics of an internationally accepted, widely supported, gold-standard declaration.

          It's an appeal to authority and works on bosses. It's also useful when suing for rights violations and at unfair dismissal hearings.

        • by sjames ( 1099 )

          That's not the statement. It's "It's against my ethics" vs. "It's against the ethics of a sizable group of experts on ethics and mine as well".

    • The summary is a summary of the article, not the declaration. Aside from that, do you think machine learning (which is heavily referenced in the declaration) doesn't involve algorithms?

    • The summary is wrong - these folks are making an argument more about big data systems that let their data skew in ways that may end up with unethical results if used blindly.

      Except that big data systems don't skew things in "unethical" ways. "Unethical" is simply coded language for "outcomes we don't like". For example, if you feed demographic information and credit repayment rates into a big data system, it will come up with individual scores based on that information. When you then look at that data in ag

  • The very purpose of these algorithms is to discriminate and to sort people into buckets: Those that are likely to buy product A or product B, those that may be a promising target for purpose C, those that are unlikely to buy, no matter what. Sure, you can keep up the fantasy of leaving, say, gender and race out, but they can easily be substituted by data that is the very target of these algorithms. As a (grossly simplified) example, take this: Gender you can get from makeup bought, race you can get from typ

    • by JaredOfEuropa ( 526365 ) on Monday May 21, 2018 @04:18AM (#56645880) Journal
      That's the big issue with big data. And the danger is that perceived "racism" will be corrected with "affirmative action": by verifying the AI using statistics on the outcome, and applying a bias.

      People with certain economic characteristics are statistically more likely to default on a loan. What if the AI applies such strictly relevant data to approve or reject loan applicants, but the rejected group happens to be predominantly of a certain race? Verification of the AI will show a (non-causal) relation between race and loan applicant score, and since we don't know how the AI arrived at its decision, people will assume racism. Will the bank be forced to correct for this and extend loans to people of this race with terrible credit scores, just to make up the numbers?

      In cases where we are worried about racism, perhaps AI simply isn't practical, and we're better off judging each individual case ourselves on imperfect but clearly defined criteria that are free of undesirable bias.
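
A rough sketch of the trade-off described in the comment above, with entirely invented numbers: the model sees only a credit score, one group happens to score lower on average, and "correcting" the resulting approval gap with a group-specific threshold raises the default rate among that group's approved applicants.

```python
# Hypothetical loan-approval trade-off: a single score-based rule produces a
# group gap in approvals; a group-specific "corrective" threshold narrows the
# gap but raises defaults among those approved. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
group = rng.integers(0, 2, n)
credit_score = rng.normal(np.where(group == 1, 620.0, 680.0), 60.0, n)
default_prob = 1.0 / (1.0 + np.exp((credit_score - 600.0) / 40.0))
defaulted = rng.random(n) < default_prob

def report(approved, label):
    for g in (0, 1):
        m = group == g
        print(f"{label} group {g}: approval {approved[m].mean():.2f}, "
              f"default among approved {defaulted[m & approved].mean():.2f}")

report(credit_score > 660.0, "single threshold |")
report(credit_score > np.where(group == 1, 630.0, 660.0), "group thresholds |")
```

The point is not which policy is right, only that the disparity and the cost of "fixing" it can both be quantified up front.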
      • It doesn't matter whether you do it by hand or by computer. For example, black people are statistically far more likely to be poor than everyone else, and as a consequence they have terrible credit scores. This is a fact, and no matter who does the calculations, as long as they are based on observable facts, they will be less likely to give black people a loan.

        Reality is racist. Statistics is based on reality. If you don't like the outcome, change the reality and the statistics will take those changes into account.

        • by Kjella ( 173770 )

          It doesn't matter whether you do it by hand or by computer. For example, black people are statistically far more likely to be poor than everyone else, and as a consequence they have terrible credit scores. This is a fact, and no matter who does the calculations, as long as they are based on observable facts, they will be less likely to give black people a loan. Reality is racist. Statistics is based on reality. If you don't like the outcome, change the reality and the statistics will take those changes into account.

          You're missing out on the part where the output of the algorithms become the input to the algorithms. Then you get feedback loops that shape reality, not simply interpret it. For example very few black people are employed as X -> don't show them ads for jobs as X -> even fewer are employed as X. It's not difficult to create algorithms that cement or enlarge the social differences so that black people get low scores because they are poor and they are poor because they get low scores, even if there's no
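
A toy version of that feedback loop, with invented numbers: the ad system allocates budget by observed conversion per segment, exposure drives the next round's employment, and a small initial gap compounds round over round.

```python
# Toy feedback loop (invented numbers): the model's output (ad targeting)
# becomes next round's input (who holds job X), so an initial gap compounds.
employed = {"A": 0.30, "B": 0.25}       # share of each group employed as X

for round_no in range(6):
    total = employed["A"] + employed["B"]
    # "Algorithm": concentrates ad budget on the segment that looks better,
    # modeled here by squaring the observed shares before normalizing.
    weights = {g: (employed[g] / total) ** 2 for g in employed}
    wsum = sum(weights.values())
    ads = {g: weights[g] / wsum for g in employed}
    # Employment next round: decays without exposure, grows with it.
    employed = {g: 0.7 * employed[g] + 0.3 * ads[g] * total for g in employed}
    print(round_no, {g: round(v, 3) for g, v in employed.items()})
```

Run it and the gap between the two segments widens every round, even though the targeting rule only ever looks at its own past output.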

      • by gweihir ( 88907 )

        Indeed. Exactly my point.

      • by c ( 8461 )

        In cases where we are worried about racism, perhaps AI simply isn't practical, and we're better off judging each individual case ourselves on imperfect but clearly defined criteria that are free of undesirable bias.

        You know, that's a really great idea.

        Even better, we should write down all the clearly defined criteria, and then feed that and the data into a computer using some kind of scheme where it'll give the output. That'll ensure that there's none of that nasty bias you get when people do those sorts of

    • You're projecting way too much malice into all of this... Most cases of "bias" in machine learning systems, which are really just a branch of statistical analysis, tend to be either the developers intentionally skewing the result or, in the vast majority of cases, population-level differences that produce something that appears racist if you don't understand the data the system bases its decisions on or how it actually makes those decisions.

      A good example of this is how black people are on average
      • by gweihir ( 88907 )

        I project absolutely no malice into this at all. I just describe the stated goal of big data analysis applied to score individuals. And I state that the "non-racist" version is not feasible, as the racism is in the data set. To go with your example, a black person will avoid that reduction in the credit score only if there is no other data indicating the person is black. That is how statistical classification works. The whole approach is discriminatory when applied to people, and that is because of its ma

        • I project absolutely no malice into this at all.

          No, you really are doing just that. A person of any race will have their credit application judged based on the exact same criteria and none of these criteria are race. A system like this where everyone is treated exactly the same way regardless of race simply cannot be described as "racist" by anyone except someone trying to insist equal treatment is somehow racist ("War is Peace, Freedom is Slavery, Ignorance is Strength"-style).

          A 100% human system is however obviously going to be worse in this regard

  • by LordHighExecutioner ( 4245243 ) on Monday May 21, 2018 @03:42AM (#56645804)
    ...a group of algorithms met at an unspecified internet location and issued the Declaration of Independency of the Algorithms.
    • ...a group of algorithms met at an unspecified internet location and issued the Declaration of Independency of the Algorithms.

      The Algorithms also declared Human Beings to be inherently unethical.

  • Some companies e.g. Google say that when they decide who to promote, the person with authority who makes the decision doesn't see information about a candidate's protected statuses (age, sex etc.) and thus it's non-discriminatory.
    However, metric-driven companies can use a metric as a basis of who to promote/give a raise/fire... and that metric may be affected by a protected status. For example, someone who is disabled in some way, and can do the job, but is therefore a little slower than other employees. On

    • by religionofpeas ( 4511805 ) on Monday May 21, 2018 @04:01AM (#56645850)

      For example, someone who is disabled in some way, and can do the job, but is therefore a little slower than other employees

      If someone can't do a job as well as another person, they shouldn't get preferential treatment just because they are part of a recognized protected group.

      • they shouldn't get preferential treatment just because they are part of a recognized protected group.

        Laws and the courts in many nations disagree, creating human resource nightmares. Your abilities and skills should dictate your promotional opportunities, not your protected group status. Once we get to that, we'll have true, non-discriminatory employment.

      • shouldn't get preferential treatment just because they are part of a recognized protected group.

        AHHHHH HA HA HA HAAAAAAAAAAAAA HA HAAAAA.

        Please, where is the entry portal to your world? I want in. All of the loonies are around me and I have to hide or they'll get me. Instead of "The Walking Dead", imagine "The Drooling Stupid," with about the same expansive vocabulary and intellectual interests.

      • by Anonymous Coward

        For example, someone who is disabled in some way, and can do the job, but is therefore a little slower than other employees

        If someone can't do a job as well as another person, they shouldn't get preferential treatment just because they are part of a recognized protected group.

        Getting raises and promotions based on merit is the ideal, but they are sometimes given for completely bullshit reasons that have little to do with the job (e.g., the ability to schmooze); see also the Dilbert principle.

    • Some companies e.g. Google say that when they decide who to promote, the person with authority who makes the decision doesn't see information about a candidate's protected statuses

      For that to work the person making the decision would need to never have seen the candidates, let alone interacted with them in a working situation. In other words, the decision would be based on nothing at all.

      It makes as much sense as insisting that a coach picks a team from people he's never watched playing.

      You might as wel

  • Computer "Ethics" Class?

    Should they actually legislate or continue to pussyfoot around the real problem, "Business Ethics"?

    • It's a continuum. At one end you have very computing-centric issues like "how confident does this automated turret need to be about the identity of its target before opening fire?" which isn't really a business matter at all. At the opposite end are things like "should skin colour factor into eligibility for a home loan?" which is clearly a monetary risk assessment. Ethics in Computing courses tend to cover this whole spectrum, along with topics like net neutrality, media piracy (and toxic industry behaviour), and the social impact of the surveillance state. You're right that whenever business decisions get automated, there's a convergence between business and computing ethics, but there are many other ethical dilemmas that a programmer may need to be aware of in order to be a responsible professional. That's why these courses are often mandatory for CS majors.
      • It's a continuum. At one end you have very computing-centric issues like "how confident does this automated turret need to be about the identity of its target before opening fire?" which isn't really a business matter at all. At the opposite end are things like "should skin colour factor into eligibility for a home loan?" which is clearly a monetary risk assessment. Ethics in Computing courses tend to cover this whole spectrum, along with topics like net neutrality, media piracy (and toxic industry behaviour), and the social impact of the surveillance state. You're right that whenever business decisions get automated, there's a convergence between business and computing ethics, but there are many other ethical dilemmas that a programmer may need to be aware of in order to be a responsible professional. That's why these courses are often mandatory for CS majors.

        Thanks, when I was in college, the biggest new thing was software engineering.

        Most of the other issues you mentioned didn't even exist outside of academic discussions over beers.

        All a long long time ago, and I was never really considered a programmer. Just a fixer.

        Most of the time I ended up doing some sort of programming to fix "issues" that always seemed to come up while integrating purchased software.

      • At one end you have very computing-centric issues like "how confident does this automated turret need to be about the identity of its target before opening fire?"

        Depends. Is it the version for exporting to Israel?

    • Computer "Ethics" Class?

      Should they actually legislate or continue to pussyfoot around the real problem, "Business Ethics"?

      Or perhaps the real, real problem "Human Ethics"?

      Humans by nature identify tribally. Some by racial characteristics, some by geographical location, some by culture, some by gender, some by sex. So if we were to systematically kill every person of whatever race or identity group is at the top of the food chain, whoever takes their place will be no more ethical.

      Name your downtrodden oppressed group. Now explain how if they ascended to power, and exchanged places with the unethical folks in power today,

    • Every sort algorithm is racist. Entities should not be judged on the basis of the result of a dumb algorithm. An equal-opportunities selection method should be used, to avoid discrimination against unfit items.

    • I choose Business Ethics. [youtube.com]

      The uhh, ethics of business can be summarized as... *pulls out gun*.

  • Nondiscrimination is not a human right. The opposite in fact is true; otherwise, we wouldn't have liberties like freedom of association.

  • You have the right to be mined. Anything you do, say or possess can be collected and used to define profiles about your habits and traits.

  • by mrwireless ( 1056688 ) on Monday May 21, 2018 @05:26AM (#56645998)
    Here are some examples:

    - In the USA some judges use sentencing software that analyses if a defendant would be likely to commit a crime again. This software turned out to be biased against black people. https://www.propublica.org/art... [propublica.org]

    - Women were less likely to be shown Google ads for high paying jobs, as the algorithm had perceived the existing bias (women less often have high paying jobs), and then concluded that showing these ads to women would result in fewer clicks.
    https://www.washingtonpost.com... [washingtonpost.com]

    - An algorithm denied pregnant women medicare. "The scholar Danielle Keats Citron cites the example of Colorado, where coders placed more than 900 incorrect rules into its public benefits system in the mid-2000s, resulting in problems like pregnant women being denied Medicaid." https://www.theverge.com/2018/... [theverge.com]

    - "Illinois ends risk prediction system that assigned hundreds of children a 100 percent chance of death or injury"
    https://www.theverge.com/2017/... [theverge.com]

    The list is endless.

    The general assumption is: 'algorithms use math and data, thus they must be neutral and scientific'. But it's not that simple. This site explains it: https://www.mathwashing.com/ [mathwashing.com]

    "The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence." - Daniel Denett
    • - An algorithm denied pregnant women medicare. "The scholar Danielle Keats Citron cites the example of Colorado, where coders placed more than 900 incorrect rules into its public benefits system in the mid-2000s, resulting in problems like pregnant women being denied Medicaid."

      Do note that Medicare and Medicaid are NOT the same thing. As an example, Medicare is for people aged 65+ (mostly, though if you're in the process of dying in any of several unpleasant ways you may be eligible for Medicare even if y

    • The first example does sound like it was just doing its job, seeing how black people, to my knowledge, commit a disproportionately large portion of the kinds of crimes that have a relatively high rate of recidivism (rapes, peddling drugs, gang violence, etc.). Any correctly working system would naturally end up looking like it's biased against black people even if it's not given the defendants' race or even capable of understanding the concept. It's basically the same "issue" as how a system that's su
      • Re: (Score:1, Insightful)

        by Anonymous Coward

        The first example does sound like it was just doing its job, seeing how black people, to my knowledge, commit a disproportionately large portion of the kinds of crimes that have a relatively high rate of recidivism (rapes, peddling drugs, gang violence, etc.).

        Your knowledge is flawed, and that's not even limited to your initial presumption, since it turns out that the software wasn't limited to those particular incidences of crime.

        Any correctly working system would naturally end up looking like it's biased against black people even if it's not given the defendants' race or even capable of understanding the concept.

        You mean the system ends up biased against black people, given that it's actually discriminating against them by imposing greater sentences.

        It's basically the same "issue" as how a system that's supposed to assess the risk of hockey-related injuries, say for determining insurance rates, would determine that white men are more at risk of them and act accordingly.

        Sure man, keep telling yourself that, make excuses.

        Finally, the third and fourth examples are just examples of a maliciously and incompetently coded system respectively and not really relevant here.

        There was no malice set up there. Incompetence you could claim, but not malice.

        An "ethically" set up machine learning system could be just as flawed if not worse for very similar reasons.

        Yes, people's "ethics" are often flawed out of ignorance and inco

        • Comment removed based on user account deletion
        • Your knowledge is flawed, and that's not even limited to your initial presumption, since it turns out that the software wasn't limited to those particular incidences of crime.

          The fact that it's not just limited to serious forms of crime like those black people are over-represented in (both as perpetrators and incidentally victims) really doesn't mean that these types of crimes won't skew things in a way that looks like the whole system is racist towards black people. Optics are rarely the whole truth and this is no exception to that.

          You mean the system ends up biased against black people, given that it's actually discriminating against them by imposing greater sentences.

          No, black people just commit a disproportionately large share of the kinds of crimes that come with tough sentences and high rates of recidivism. Yo

      • Finally, the third and fourth examples are just examples of a maliciously and incompetently coded system respectively and not really relevant here.

        What? They're not relevant because they're deliberately unfair? That's bananas. That makes them an important subclass of unfair systems.

        • Way to completely miss the point... My point was that these systems may appear unfair from afar, but when you look at how they operate and the data they operate based on you'll see that they're just brutally fair. As much as you'd like to believe everyone is the same, there are population-level differences and some of them are big and come with significant consequences for those they apply to.
    • by Anonymous Coward

      In the USA some judges use sentencing software that analyses if a defendant would be likely to commit a crime again. This software turned out to be biased against black people. https://www.propublica.org/art... [propublica.org]

      Okay, that's pretty bad.

      Women were less likely to be shown Google ads for high paying jobs, as the algorithm had perceived the existing bias (women less often have high paying jobs), and then concluded that showing these ads to women would result in fewer clicks. https://www.washingtonpost.com... [washingtonpost.com]

      This is not an example of a human rights violation. In no way does this mean women aren't allowed to hold these jobs or apply for them, it just means they are less likely to have these positions advertised to them. Also, who the hell chooses a career based on an Internet advertisement? You should be blocking that shit, anyway. I wouldn't hire anyone -- man or woman -- who doesn't appreciate a good ad blocker.

      An algorithm denied pregnant women medicare. "The scholar Danielle Keats Citron cites the example of Colorado, where coders placed more than 900 incorrect rules into its public benefits system in the mid-2000s, resulting in problems like pregnant women being denied Medicaid."
      https://www.theverge.com/2018/... [theverge.com]

      As a public benefits recipient myself, I can pretty much guarantee that

  • What now, "politically correct sorting"?

    And uhh, why exactly are we talking like computer programmers are somehow in charge of the world? Why isn't there a call for _laws and politicians_ to finally start respecting human rights?

  • . . . I, for one, WELCOME our new algorithmic masters, and offer my services in datamining the species. . . (evil grin)

  • Looked at the document and this heading near the beginning caught my attention.

    "The public and the private sector have obligations and responsibilities under human rights law to proactively prevent discrimination. When prevention is not sufficient or satisfactory, discrimination should be mitigated."

    That second sentence needs a bit of translation. To my way of thinking, clearer wording would be:

    "If the non discrimination results doesn't result in our preconceived belief of what should happen, then we need t

  • We need to teach computers to lie to themselves about reality to cover for shortcomings of certain age groups, backgrounds, genders, and races. What a great idea!
  • In a world of machine learning systems, who will bear accountability for harming human rights?

    The market will do that. Because if all really are equal, then meritocracy will bear it out.

  • I also demand that toasters stop discriminating against people who want cold food.
  • This idea shows the major flaw in the idea of an 'evidence-based' or scientifically based government that I have heard espoused occasionally.
    For instance, most people would agree slavery is undesirable and wrong, but that doesn't mean there aren't circumstances where it is efficient, and maybe most efficient, in accomplishing a specific goal, say creating the largest amount of comfort and wealth for the largest possible number of people. Any attempt to create a society based primarily on data t

  • Comment removed based on user account deletion
  • GTFO LOL
