AI Government United States Technology

NYC Creates a High-Level Position To Oversee Ethics In AI (engadget.com)

New York City has created a position primarily to ensure that there are no biases in AI and other algorithms. Engadget reports: Mayor Bill de Blasio has issued an executive order creating a position for an Algorithms Management and Policy Officer. Whoever holds the position will work within the Mayor's Office of Operations and serve as both an architect for algorithm guidelines and a go-to resource for algorithm policy. This person will make sure that city algorithms live up to the principles of "equity, fairness and accountability," the Mayor's office said.

The officer will have the help of advisory and steering committees that will respectively draw on appointed public members and city officials to "drive the conversation" around algorithms. To put it another way, the algorithms officer should have a better understanding of how algorithms apply in the real world. It's not yet clear who will take the position.

  • Fool's errand (Score:4, Insightful)

    by liquid_schwartz ( 530085 ) on Friday November 22, 2019 @09:30PM (#59444814)
    Life is not fair and has biases. If the AI captures objective reality then it will merely reflect the biases that really exist. Lost in the data will be the perfectly valid reasons why biases and differences in outcomes exist. Certain types of people favor certain lifestyles or occupations. When I see as much complaining about the lack of men in certain fields like HR or speech therapy, then and likely only then will I listen to the howls about a lack of women in tech. Until then it's just a game of blame the white male, with Asian males occasionally being allowed to be the target.
    • by AHuxley ( 892839 )
      The city only wants the AI to report the propaganda numbers.
      Police find criminals and crime? That's politically not allowed.
      Illegal migrants getting city services with ID fraud? Not looked for, not reported.
      Permit costs not allowing shops to stay open? Not a number the city wants to publish.
      Trash and waste a problem in the streets? No numbers. The cost of the subway? Money going from the subway to other random city services? No numbers...
      That role of an Algorithms Management and Policy Officer is going to be political.
    • Life is not fair and has biases. If the AI captures objective reality then it will merely reflect the biases that really exist.

      Correct. So the solution is to decide what outcomes are socially acceptable and slap a filter on the output layer.

      For instance, an AI designed to recommend pretrial release or confinement was found to be biased against blacks. The reason is that blacks on pretrial release really are less likely to show up for trial. So there is nothing "wrong" with the way the AI works.

      But an explicitly biased output offends our sense of fairness. So just take the outputs and apply a fudge factor to make the recommendations acceptable.
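
To make the "fudge factor" concrete, here is a minimal sketch in Python (all scores, groups, and thresholds are invented for illustration; this is not any system NYC actually deploys) of post-processing a model's output so that two groups are flagged at the same rate:

```python
# Hypothetical sketch of output-layer post-processing for "fairness".
# Risk scores, group split, and target rate are all invented.
import numpy as np

rng = np.random.default_rng(0)

scores_a = rng.beta(2, 5, size=1000)  # group A: skews lower-risk
scores_b = rng.beta(3, 4, size=1000)  # group B: skews higher-risk

def flag_rate(scores, threshold):
    """Fraction of people the model would recommend detaining."""
    return float(np.mean(scores > threshold))

# One global threshold produces different detention rates per group.
print(flag_rate(scores_a, 0.5), flag_rate(scores_b, 0.5))

# The "fudge factor": per-group thresholds chosen so that each group
# is flagged at the same overall rate (here, 30%).
target = 0.30
t_a = np.quantile(scores_a, 1 - target)
t_b = np.quantile(scores_b, 1 - target)
print(flag_rate(scores_a, t_a), flag_rate(scores_b, t_b))  # both ~0.30
```

This equalizes selection rates at the price of applying different cut-offs to different groups, which is precisely the trade-off the rest of the thread argues over.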

      • by kenh ( 9056 )

        For instance, an AI designed to recommend pretrial release or confinement was found to be biased against blacks. The reason is that blacks on pretrial release really are less likely to show up for trial. So there is nothing "wrong" with the way the AI works.

        This is an example of no bias, as the outcome is determined exclusively by a subject's actions. The "algorithm" doesn't need to know a subject's zip code, it doesn't need to know their race, income level, or even gender - when you add those factors, biases creep in, intentional or not.

        Imposing a politically correct bias into an unbiased system to achieve a more politically correct outcome is the exact opposite of what is needed.

      • by kenh ( 9056 )

        For instance, an AI designed to recommend pretrial release or confinement was found to be biased against blacks. The reason is that blacks on pretrial release really are less likely to show up for trial. So there is nothing "wrong" with the way the AI works.

        Showing up for your subsequent trial is a perfectly reasonable consideration for an "AI" algorithm deciding eligibility for pre-trial release or confinement. I question why race specifically is included in the decision.

        You might suggest instead that race shouldn't be an input. But THAT DOESN'T WORK because the AI still has zip code, type of offense (blacks and whites often commit different types of crimes), acquaintances, family connections, prior arrest record, etc. So you are still going to end up with racial bias. You can't fix it by fiddling with the inputs without destroying the predictive capability.

        Why are you working so hard to shield reality from the decision process? What factors should such an algorithm consider? I contend that "zip code, type of offense, acquaintances, family connections, (and) prior arrest record" are all perfectly valid inputs to the pre-trial release or confinement decision, race is not.

        • I question why race specifically is included in the decision.

          Because when race isn't included, black people are confined at much higher rates.

          Why are you working so hard to shield reality from the decision process? What factors should such an algorithm consider? I contend that "zip code, type of offense, acquaintances, family connections, (and) prior arrest record" are all perfectly valid inputs to the pre-trial release or confinement decision, race is not.

          If you know someone's zip code, type of offense, acquaintances, etc. then you KNOW THEIR RACE with 95% accuracy, whether it is an explicit input or not.
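
The 95% figure above is the commenter's claim, not a measured result, but the leakage effect itself is easy to demonstrate. A sketch on synthetic data (all correlations invented): drop the protected attribute as an input, then check how well the remaining columns reconstruct it.

```python
# Sketch of a "proxy leakage" test on synthetic data. The protected
# attribute is never given to the model as an input column.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000

# Synthetic protected attribute plus correlated proxy features; the
# strength of these correlations is invented for illustration.
race = rng.integers(0, 2, size=n)
zip_cluster = race * 3 + rng.integers(0, 3, size=n)      # strong proxy
offense_type = (race + rng.integers(0, 4, size=n)) % 4   # weak proxy
noise = rng.normal(size=n)

X = np.column_stack([zip_cluster, offense_type, noise])

# If this accuracy is high, dropping the race column removed nothing:
# the model can reconstruct it from the proxies.
acc = cross_val_score(LogisticRegression(), X, race, cv=5).mean()
print(f"protected attribute recoverable with ~{acc:.0%} accuracy")
```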

          Go run for NYC mayor, and tell the voters that black people stay in jail because they are more likely to be criminals and everyone should just accept that. Good luck.

          Why should a white person in the same circumstances be treated differently (better OR worse) than a person of color, based on race?

          They shouldn't. That is the whole point. To make sure whites and blacks are treated the same, race has to be considered.

          • Go run for NYC mayor, and tell the voters that black people stay in jail because they are more likely to be criminals and everyone should just accept that. Good luck.

            You're intentionally conflating two different factors. The fact that people in a certain zip code mostly happen to be black doesn't mean you're keeping them imprisoned "because black people are more likely to be criminals". You're keeping them in prison because people in that zip code are more likely to be criminals. The difference may seem academic, but you clearly understand that it is actually significant. If you didn't, you wouldn't feel the need to rephrase the situation in a way which makes it seem racist.

            • I don't think there is much difference between denying probation because someone is black and denying probation because they live in a predominantly black neighborhood. The end result is the same and is not something that NYC voters will accept.

              • And there you are again, doing the exact same thing which I just pointed out. Is it intentional, or are you honestly unaware that you're doing it?

                • And there you are again, doing the exact same thing which I just pointed out. Is it intentional, or are you honestly unaware that you're doing it?

                  I understand exactly what I am saying. I also understand exactly what you are saying.

                  I am just pointing out that you seem to be completely disconnected from prevailing public opinion in America, and the history of race relations.

                  To be clear, if a black person and a white person have nearly identical profiles and criminal records, the black guy is LESS LIKELY TO SHOW UP FOR TRIAL. It is virtually impossible to design an AI system that will fail to pick up on this FACT, and this is true WHETHER OR NOT RACE IS AN EXPLICIT INPUT.

      • If the AI captures objective reality then it will merely reflect the biases that really exist.

        Correct. So the solution is to *decide what outcomes are socially acceptable* and slap a filter on the output layer.

        And you don't see that as bias? You're just painting over one with another that you prefer.

        • And you don't see that as bias?

          Of course it is bias. But it is bias that is socially acceptable.

          As a society, we want people to be treated the same, regardless of race. That doesn't happen if you ignore race.

          You're just painting over one with another that you prefer.

          Exactly.

          • As a society, we want people to be treated the same, regardless of race. That doesn't happen if you ignore race.

            This is the kind of double think which gave us "affirmative action". It's garbage. You're assuming that as a society we want statistical equality of outcome between arbitrary groups, rather than equality of opportunity between individuals.

            • You're assuming that as a society we want statistical equality of outcome between arbitrary groups, rather than equality of opportunity between individuals.

              The "society" we are talking about here is New York City.

              If you think most NYC voters want a libertarian utopia of equal opportunity, then I wonder what planet you are from.

    • Life is not fair and has biases.

      True, but that provides exactly zero reason, and even less justification, to build 'em into algorithmic black boxes and turn 'em loose on the public.

      If the AI captures objective reality then it will merely reflect the biases that really exist.

      Yes. And? Red-lining people for loans on the basis of their zip code reflects objective reality too. And it's illegal now. For excellent reasons.

      Lost in the data will be the perfectly valid reasons why biases and differences in outcomes exist. [...] Until then it's just a game of blame the white male, with Asian males occasionally being allowed to be the target.

      Whee ... that's one big (and fallacious (and uninformed)) ball of confusion you're serving up here.

      Let's see if I can disentangle that a little for you.

      First of all, you seem to go off on a rant about white (and Asian) males.

  • The AI gets data sets in.
    Humans ask questions about crime, police, what CCTV saw, water use, power use, transport use, illegal migrants using city services, road repair?
    What's the AI going to do? Hide the facts due to the political views of an Algorithms Management and Policy Officer?
    Will the Algorithms Management and Policy Officer only allow the release of politically correct maths, city numbers, city stats, "good" police work?
    The city Algorithms Management and Policy Officer will always give the "propaganda" numbers.
    • > Hide the facts due to the political views of an Algorithms Management and Policy Officer?
      > Will the Algorithms Management and Policy Officer only allow the release of politically correct maths?

      Yes.

    • The AI gets data sets in. ...
      What's the AI going to do?

      It's simple: it'll make associations that may or may not be causal.

      Say, for example, the AI identifies that people named Sally who own a blue sedan (purchased on a third Friday of a spring month) happen to have a 7% greater chance of being late on a payment. This could easily lead to anyone named Sally being quoted a higher loan interest rate. Is it fair to discriminate on such coincidental insanity?

      This is hypothetical but also entirely possible. See also: p-hacking [wikipedia.org]
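
A sketch of that failure mode, with every "feature" pure noise by construction: screen enough meaningless attributes ("named Sally", "owns a blue sedan", ...) against an outcome and roughly 5% will clear the usual significance bar by chance.

```python
# p-hacking demonstration: 1000 random binary features vs. a random
# outcome. Dozens will look "significant" despite being pure noise.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_features = 500, 1000

late = rng.integers(0, 2, size=n_people).astype(float)               # outcome
junk = rng.integers(0, 2, size=(n_people, n_features)).astype(float)

# Pearson correlation of each junk feature with the outcome.
late_c = late - late.mean()
junk_c = junk - junk.mean(axis=0)
r = (junk_c.T @ late_c) / (
    np.sqrt((junk_c ** 2).sum(axis=0)) * np.sqrt((late_c ** 2).sum())
)

# |r| > 2/sqrt(n) is the rule-of-thumb 5% significance cutoff, so expect
# ~5% false "discoveries" -- around 50 spurious "Sally effects" of 1000.
print(int(np.sum(np.abs(r) > 2 / np.sqrt(n_people))))
```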

      • by AHuxley ( 892839 )
        What is a city going to do?
        Set loan interest rates?
        Force a bank to put aside all their decades of collected data and give loans to people who can't and will never pay back the loan?

        Say "Sally" wants a car loan.
        Get a job. Take out a tiny loan. Pay it back. Pay any bills on time. Got a CC? Pay that back as expected.
        Show some good credit history.
        Get a small loan for a very average car. Pay that back as the loan amount was not much and the wage could support the payments.
        Banks don't like and don't
        • by kenh ( 9056 )

          Force a bank to put aside all their decades of collected data and give loans to people who can't and will never pay back the loan?

          Cities did this, to end the practice of "red lining" that prevented residents of certain neighborhoods from getting loans, despite a well-documented history of failed loans in those very neighborhoods.

          • by AHuxley ( 892839 )
            At some point people asking for a loan have to show a bank they can be expected to pay a loan back as an individual.
            Demanding a bank give out loans that will not be paid back will only see the bank fail on those loans.
            All that city effort to try and get money into failed neighborhoods?
            If the city so wants to help neighborhoods it let fail, spend some new city money in the neighborhoods needing more support.
            Ethics and a fake political CoC AI won't induce new investment.
            Everyone will still have the well-documented history of failed loans.
            • At some point people asking for a loan have to show a bank they can be expected to pay a loan back as an individual.

              A black person with a credit score identical to a white person's is LESS LIKELY TO PAY BACK THE LOAN.

              There are two ways to deal with this FACT:

              1. Accept reality and deny loans to blacks or charge them systematically higher rates.

              2. Ban racial profiling (including zip codes) and spread the additional risk out across all races.

              As a society, we choose #2.
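
Back-of-envelope arithmetic for option #2, with invented default rates and group shares: pooling replaces per-group risk premiums with one averaged premium, so the lower-risk group pays slightly more and the higher-risk group pays considerably less.

```python
# Invented numbers illustrating risk-based pricing vs. pooled pricing.
base_rate = 0.04  # lender's rate before any default premium

groups = {  # name: (share of borrowers, expected default loss rate)
    "group_1": (0.7, 0.02),
    "group_2": (0.3, 0.05),
}

# Option 1: risk-based pricing -- each group pays its own expected losses.
for name, (share, loss) in groups.items():
    print(f"{name} pays {base_rate + loss:.2%}")   # 6.00% and 9.00%

# Option 2: pooled pricing -- everyone pays the average expected loss.
pooled = sum(share * loss for share, loss in groups.values())
print(f"everyone pays {base_rate + pooled:.2%}")   # 6.90%
```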

              • by AHuxley ( 892839 )
                Re "There are two ways to deal with this FACT:"

                Let the bank work it out after doing an interview and looking over the person's past banking and loan history...
                They do have a loan history...
                A loan they actually paid back...on time ... and in full...

                Re "spread the additional risk out across"...

                Why should the bank, its owners, investors, and bank users have well-understood risks about people who don't pay loans back "spread" over their cost of banking?
                • Why should the bank, its owners, investors, and bank users have well-understood risks about people who don't pay loans back "spread" over their cost of banking?

                  Because the law says they have to.

  • by Kohath ( 38547 ) on Friday November 22, 2019 @09:53PM (#59444864)

    Is it ethical for an NYC official to take a salary for something so unrelated to the responsibilities of NYC government and the needs of NYC residents?

    • Better question: is it ethical to take money to oversee something that doesn't exist?

    • I'd say "it can be ethical". As the use of more sohisticated AI grows, I'd suggest that it can be legally and politically and ethically necessary. There are implications of AI based tools for facial recognition, for police profiling, and for personal privacy that do deserve thought for a city populace. A city as large as NYC will hear proposals for widespread facial recognition and urban planning that do involve AI, and that do involve evaluating its safety and effectiveness. That person or relevant planner

    • right? And that "A.I." really just means complex algorithms now? I know us nerds don't like words being misused, but, well, too bad. The masses decided A.I. is anything a computer does where it isn't blatantly obvious a human pushed a button to make it happen. Annoying or not, the mob has spoken, and all the grammar Nazis in the world can't stop it.

      All the position is, is somebody to oversee the automated computer systems that make decisions for NYC residents and make sure those decisions are fair, equitable, and accountable.
      • right? And that "A.I." really just means complex algorithms now?

        No. The algorithms are simple: Just matrix multiplication.

        It is the data that is complex.
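
A sketch of what "just matrix multiplication" means here (shapes and weights arbitrary): a small network's forward pass is two matrix multiplies plus an elementwise nonlinearity, and everything interesting lives in the learned numbers rather than the algorithm.

```python
# A two-layer neural network forward pass: matmul, nonlinearity, matmul.
import numpy as np

rng = np.random.default_rng(7)

x = rng.normal(size=(1, 64))     # one input with 64 features
W1 = rng.normal(size=(64, 128))  # first layer's weights
W2 = rng.normal(size=(128, 10))  # second layer's weights

h = np.maximum(x @ W1, 0.0)      # matrix multiply + ReLU
logits = h @ W2                  # matrix multiply
print(logits.shape)              # (1, 10)
```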

    • Is it ethical for an NYC official to take a salary for something so unrelated to the responsibilities of NYC government and the needs of NYC residents?

      Can we please pause this discussion to clarify what TFA is about?

      This is NOT about NYC regulating AI research, or deciding how future robots should deal with the trolley problem.

      It is about the ethics of programs ALREADY IN USE BY NYC. Several have already been found to be "racist": Recommending more police patrols in minority neighborhoods, recommending that pretrial release and probation be denied to black inmates, etc. The purpose of this post is to have someone set policy on how to deal with these issues.

      • by Kohath ( 38547 )

        You don't need specialized officers with specialized expertise in each area to set policy. Especially on ethics. They can figure it out with the people they have and not waste money. No matter what they do, their policies will be called "racist": this is America.

      • It is about the ethics of programs ALREADY IN USE BY NYC. Several have already been found to be "racist": Recommending more police patrols in minority neighborhoods, recommending that pretrial release and probation be denied to black inmates, etc.

        None of those things are inherently racist. If you wanted to determine whether the AI was racist you would have to ask it whether it believes that blacks are inherently inferior. As none of the AIs I'm aware of can actually form beliefs, it seems unlikely that any of them can actually be racist.

        Your examples MIGHT show racial discrimination, which is a different thing than racism. However you would first have to show that race was the determining factor in those decisions, rather than other factors.

        Example:

  • by Jarwulf ( 530523 ) on Friday November 22, 2019 @09:54PM (#59444868)
    in all the negative ways. You don't test a hypothesis. You start with a creed and then find the evidence to back it up. There is nothing that can happen that will convince the feminist, racebaiter, or gender activist that their ideas are mistaken. Even a cold objective AI spitting the hard data back in their face means something is wrong with it. Even the most fundamental and obvious realities (i.e. that men and women are physically different) are rejected in this cult. Say what you will about creationists and global warming skeptics, at least their beliefs aren't debunked in front of their faces 24/7, every day.
    • by AHuxley ( 892839 )
      Re " creationists and global warming skeptics"
      Would be able to work for a city and read reports.
      See the CCTV and police GUI reports of crime and task more police to the area.
      Wait and see if crime is down?
      Task more police...
      Still got crime? Study the problem more and bring in experts.
      Still got crime?
      Investigate city police for corruption.
      Wait for new reports after the police corruption is removed...
      Whats the "Algorithms Management and Policy Officer" going to do?
      Call the AI and computer reports of
    • by Kjella ( 173770 )

      Even the most fundamental and obvious realities (i.e. that men and women are physically different) are rejected in this cult.

      Sometimes it's possible to have two thoughts at the same time: that the men's world record for bench press is 740lb and the women's 600lb, but the vast, vast majority of men, like 99.9%+, can't do 600lb bench presses. The highest ranked female chess player is in 85th place, but 99.9%+ of us rank below that. Yes, some are complete loons who claim that sex is purely a social construct and that if we just treated everyone equally women would bench 740lb and men would give birth. Occasionally they find ways to push for more

    • in all the negative ways. You don't test a hypothesis. You start with a creed and then find the evidence to back it up. There is nothing that can happen that will convince the feminist, racebaiter, or gender activist that their ideas are mistaken. Even a cold objective AI spitting the hard data back in their face means something is wrong with it. Even the most fundamental and obvious realities (i.e. that men and women are physically different) are rejected in this cult. Say what you will about creationists and global warming skeptics, at least their beliefs aren't debunked in front of their faces 24/7, every day.

      Give it another 5-10 years, at which point we'll have ample training data to unlearn the bias of which you speak. Then what you've written here will register as non-zero on hate speech.

  • Ah, no (Score:3, Funny)

    by Empiric ( 675968 ) on Friday November 22, 2019 @10:11PM (#59444912)
    It will be amusing watching the "AIs" choke on the fact-value distinction [wikipedia.org], and more so watching politicians choke second-hand.

    "Ethics" based on data analysis is the exact equivalent of "just making stuff up".
    • by AHuxley ( 892839 )
      Need to repair a road.
      The city wasted the road repair money on other city services for that year.
      The engineers, experts, workers again list the road as needing repair...
      The AI suggests road repair work as a top spending topic.
      Given data from other US cities with great roads and other US cities that fail to repair roads...
      The Algorithms Management and Policy Officer steps in and stops the collection of road related data sets.
      The other city services have taxpayer priority as they reduce "discrimination."
      • by Empiric ( 675968 )

        "The AI suggests road repair work as a top spending topic."

        This is the part that can never be reached by data, AI or otherwise. That roads should exist and be smooth or at least driveable, and that's better than the military blowing up all the roads and nobody driving ever again, cannot be determined by data. There is a -value- determination that precedes everything else, and is implied even if not stated in any "we should do X".

        This dilemma (at least it is one for secularists) is very long-standing in philosophy.

        • by AHuxley ( 892839 )
          It's not human perception when a city only has so much money to spend.
          What's it going to be? Repair the roads? Look after the subway?
          Spend on other city services that year?
          The federal gov won't step in with magic free "omniscient" money anymore to cover decades of failed NY city spending.
          The city can't tax citizens any more. The wealthy people will find another great US city with less tax.
          Being wealthy, they have the cash and freedom to "move" out of NY.
          Again the city collects all kinds of data on services,
          • by Empiric ( 675968 )

            What's it going to be? Repair the roads? Look after the subway? Spend on other city services that year?

            Exactly. Now you have the idea.

            The value determination is that the road needs to be fixed.

            That is a good value determination, but it isn't data that gets you there. Another person could look at the same data and state that no roads need to be fixed, that if nobody can get to their jobs and the economy collapses and everyone starves, that's "good for the environment", and subjectively, a "good thing overall". I would totally disagree, but data won't prove my case. The raw data says only that there currently exist roads. Why there should continue to be roads is ultimately a value determination.

            • by AHuxley ( 892839 )
              Re "but it isn't data that gets you there"
              The city will have an AI for city work...
              Unless it wants to go back to paperwork...
              Re "Another person could look at the same data and state that no roads need to be fixed"
              So the city can no longer trust its own method of reporting data, accepting data from around the city, the people who have to act and budget on the data they get...
              Jobs and the economy collapsing? Citizens who vote tend to notice. Not getting to work on time over a year is also going to be something voters notice.
              • Dude, you need to learn how to write paragraphs of formed thought. Not choppy individual sentences. You're being very didactic.

        • You love to make things unnecessarily complicated I see.

          If the benefits of the road repair outweigh the cost, it ought to be done. Now do some calculations (or reasonable approximations thereof) and you know the answer.
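
That rule, as arithmetic. Every figure below is invented, and the "value of time" line is itself a value judgment smuggled in as a number, which is rather the other poster's point:

```python
# Invented road-repair cost-benefit calculation.
repair_cost = 2_000_000   # one-time cost, dollars

daily_drivers = 15_000
time_saved_min = 2        # minutes saved per trip after the repair
value_of_time = 20        # dollars per hour -- an assumed value
years, days = 10, 365

benefit = daily_drivers * (time_saved_min / 60) * value_of_time * days * years

print(f"benefit ${benefit:,.0f} vs cost ${repair_cost:,.0f}")
print("repair" if benefit > repair_cost else "defer")
```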

          No observations of what "is", via simple human perception, or via massive "AI" Big Data algorithms, tells you how things "ought" to be.

          And yet, no philosopher has ever starved to death because he couldn't determine whether he ought to eat something.

          • by Empiric ( 675968 )

            You love to make things unnecessarily complicated I see.

            No, you just have no valid basis for the entire structure of your mind, because you've neglected the personal effort of having a valid thought process.

            If the benefits of the road repair outweigh the cost, it ought to be done. Now do some calculations (or reasonable approximations thereof) and you know the answer.

            And here's you just illustrating and repeating the void in your thought process. "Outweigh". How do you determine when anything "outweighs" anything else, objectively? Whatever rationale you provide for why it "outweighs" it, I'll just repeat: why is it objectively better for that to be the case? "Because otherwise X"... "Why is X bad?"... "Because otherwise Y"... and so on, forever.

  • If you asked New Yorkers where the city should be spending its tax dollars I doubt an AI ethicist position would be in the top 10000.

    NYC has serious transportation and education problems and this is where de Blasio wants to spend money?

    So what big shot's son is getting this job?

  • I think not including an AI in the ethics committee is unethical.
  • They should appoint someone to oversee people pissing on the sidewalk. The stench is pretty bad and getting worse over time.

  • by Crashmarik ( 635988 ) on Saturday November 23, 2019 @03:16AM (#59445418)

    And give jobs to useless friends of politicians.

    Not as good as handing the mayor's wife half a billion, but every bit counts.

  • In other words, NYC has created a commission that will alter statistics to fit their idea of political correctness.
