AI is only as biased as the data you train it with.
Computers cannot be racist if you don't program/train them to be racist.
The problem is some people think that facts are racist and will cry bloody murder if cops do something like focus patrols on poor black neighborhoods with high crime rates.
It amazes me that some law-abiding citizens will complain about police trying to make their neighborhood safer.
by Anonymous Coward writes:
on Wednesday April 07, 2021 @06:58AM (#61246128)
"Facts". If higher policing leads to a higher crime rate (because more people are... checked), the numbers become a fact. And if (conveniently) the neighborhood is predominantly black, you have data that indicates that black people are indeed significantly more criminal than other demographics. The other way around, we could drop the crime rate to 0 if we stopped policing the aforementioned neighborhoods. So these numbers don't mean anything, really. It's a well-known issue if you are really familiar with the topic and not just trying to sound smart. Law-abiding citizens don't complain about police trying to make their neighborhood safer. That's BS. They complain about the use of excessive force by badly trained cops and the inconsistent treatment of different demographics.
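The measurement loop described above can be sketched with a toy simulation (all numbers invented for illustration): two neighborhoods with the exact same underlying offense rate produce very different recorded crime rates once the probability of detection differs.

```python
import random

random.seed(0)

def recorded_rate(true_offense_rate, detection_prob, population=100_000):
    """Recorded (not actual) crime rate: only detected offenses enter the data."""
    # Simulate how many offenses actually happen...
    offenses = sum(1 for _ in range(population) if random.random() < true_offense_rate)
    # ...and how many of those are ever detected and recorded.
    detected = sum(1 for _ in range(offenses) if random.random() < detection_prob)
    return detected / population

# Same true offense rate (5%) in both neighborhoods; only patrol
# intensity (detection probability) differs.
lightly_patrolled = recorded_rate(0.05, detection_prob=0.2)
heavily_patrolled = recorded_rate(0.05, detection_prob=0.8)
```

With a detection probability four times higher, the recorded rate comes out roughly four times higher, even though behavior in the two simulated neighborhoods is identical. Data produced this way would "prove" the heavily patrolled area is more criminal.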
What does any of this bullshit even mean? The training in question is training of AI, not cops, and there wasn't even any AI to train. And who are these "witch hunters", and how does your racist injection of one example change the argument?
Since the comment you responded to clearly needs to be explained to you, it addresses a basic truism that data that may be used to train AI can itself be inherently tainted with bias, resulting in a biased AI model. The OP is providing an example of that, and you're an
If higher policing leads to a higher crime rate (because more people are... checked) the numbers become a fact.
This only applies to crimes where police have to catch someone in the act. Crimes that leave evidence, like murder, property damage, arson, kidnapping, etc., can be reliably tracked. Dropping the police presence to zero doesn't make these crimes disappear; you're still gonna have dead bodies rolling into the morgue and fire marshals finding arson.
You have to be careful though. Does the neighborhood have a high crime rate because there are more criminals or because more crimes are detected because it gets more patrols?
I agree with your basic sentiment; however, one should never forget that you get what you measure. This is always the problem of social science and making decisions based on gathered data. Unlike in the lab, it's rarely the case that the measurement was truly done in the same way at the same frequency.
That's one of those questions that have become rhetorical, and because of that, unhelpful.
The problem is with the current definition of "being racist". It is now so broad that anyone qualifies unless they prove themselves innocent, and even then, a mere allegation from a random nobody and you're back at square "racist".
To correctly brand someone racist you have to prove they feel one race is superior to another, and that they act(ed) from that belief. And in the case of "systemic racism", like painting
So if you make a claim about an individual based on their ethnicity and you can't back it up with sufficient and unambiguous evidence, then it's racism.
And if you're capable of rational thought you should realize that such racist reasoning is fallacious. It's a variant of an association fallacy, like if you claimed that Hitler ate sugar, so people who eat sugar are like Hitler. Though it's even worse than that, because at least eating sugar is an active choice that people
You very specifically make it about a single individual when most racism goes against larger groups being, in general, more criminal.
If, for example, someone makes a claim that 70% of all blacks are criminal, then that person can point to a black criminal and say he was right, while you can point to a black law-abiding citizen and say you were right. Neither one of you has done anything to deal with the claim of 70% of a group being criminal - you both just cherry-picked an individual that fit your narrative.
I would have thought that "sufficient and unambiguous evidence" as well as association fallacy would cover that as well, but apparently it doesn't.
If you make the claim that "70% of all blacks are criminal" you need to back that up as well. Beyond that, groups as loosely defined as ethnicity (non-voluntary membership) usually do not act as a single entity. Groups are still composed of individuals. So even if you could back that claim up, it would still not apply to the individual. It would only apply to the group.
In theory, racism isn't complicated. In practice, currently in the US, "racism" gets put before everything else and gets assumed even when other plausible explanations are readily available. So that's the other extreme, and a problem because it obscures the real problems and obstructs better approaches.
Oh, and I said at the start that the very definition of racism was stretched well beyond its original meaning. Your answer is in essence that anything you can't otherwise convincingly justify is therefore an issue of racism.
So you make a claim about an individual that is based on their ethnicity
*Solely* on their ethnicity? Or merely *including* their ethnicity? Considering that ethnicity (or even more prominently, race) is easily observable and tied into multiple other variables that are important, should you exclude it from classifiers merely for involving the R-word?
like if you claimed that Hitler ate sugar, so people who eat sugar are like Hitler
Easy answer: eating sugar is non-specific, so eating sugar is not a useful feature for selecting a more-Hitler-like subset of a population.
(Of course, after any easy test, if a false positive is undesirable, you still need to do a go
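The non-specificity point can be made quantitative with a hypothetical lift calculation (all rates invented): a feature shared by nearly everyone has a lift near 1 for any rare target class, so it selects no more-Hitler-like subset, while a feature concentrated in the target class has a high lift.

```python
p_target = 0.001  # P(target): the rare subset is one person in a thousand

def lift(p_feature_overall, p_feature_in_target):
    """How much more likely is target membership, given the feature?"""
    # Bayes: P(target | feature) = P(feature | target) * P(target) / P(feature)
    p_target_given_feature = p_feature_in_target * p_target / p_feature_overall
    return p_target_given_feature / p_target

# "Eats sugar": almost everyone has it, inside and outside the target class.
sugar_lift = lift(p_feature_overall=0.99, p_feature_in_target=0.99)
# A specific feature: rare overall, common within the target class.
rare_lift = lift(p_feature_overall=0.01, p_feature_in_target=0.50)
```

The sugar-like feature gives a lift of exactly 1 (useless for selection), while the specific feature concentrates the search by a factor of 50.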
If you can prove with evidence beyond a doubt that ethnicity or race are relevant, go ahead. At least I would no longer consider it racist if relevance can be established and we can deduce that the events in question were caused by race or ethnicity.
For example, if the allegation is a racially motivated crime, by definition race might be a factor. It 'might'; evidence is still required.
If you can prove with evidence beyond a doubt that ethnicity or race are relevant, go ahead.
From the classifier's perspective, that's a simple matter of statistical significance that is easily enough decided.
and we can deduce that the events in question were caused by race or ethnicity.
Classifiers don't deal with causality, that's not their job (which is to extract the most useful information from a limited amount of data). In fact that's one of the reasons why their decisions can't be racism -- they're *not* ascribing any causal consequences to any immutable traits (because they don't deal with causality). By definition racism *requires* you to make that causal connection.
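For the statistical-significance part, here is a minimal sketch of how a 2x2 feature-vs-outcome table would be tested, using Pearson's chi-squared statistic (counts invented; note that significance only shows association, which, as the thread points out, says nothing about causation or confounding):

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected cell counts under the independence hypothesis.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented counts: outcome present/absent vs. candidate feature present/absent.
stat = chi_squared_2x2(30, 70, 10, 90)
significant = stat > 3.84  # critical value for p < 0.05 at 1 degree of freedom
```

Here the statistic is 12.5, well above the 3.84 cutoff, so a classifier would treat the feature as statistically relevant; whether using it is legitimate is exactly the separate, causal question being argued above.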
Racism can also be prosecuting a black person for a crime they genuinely committed, while letting a white person who did the same thing get away with it.
Crime rates often go up when the neighbourhood creates a panopticon of doorbell cameras and CCTV. Some places are installing private automatic licence plate readers on entry roads.
Suddenly people are reporting more and more crimes, stuff that was happening anyway but went unnoticed or unreported. Littering, pavement fouling, speeding, kids accidentally breaking something with their ball, minor vehicle collisions that would have gone untraced etc.
It can actually have quite negative effects as people get more
"Large" crimes, the kind that leave people hospitalized or dead, or cause visible property damage, and so on, can be traced more easily. But all the "small" stuff, whose reporting rate rises with surveillance, is difficult to use for comparison.
So, as in other situations where the method of measuring a quantity influences the very outcome of the measurement, we need to work around the issue. We'd need some kind of double-blind study. But how would we do that?
Because the need of state governments to have real representation of their own isn't addressed now that we have the direct election of senators. Most of the voting public does not have a clear understanding of what the administrative needs of states are and where responsibility divides between federal and state. Unfunded mandates are a good example of things that would happen a lot less if states had representation.
Secondly, it turns the Senate into a body far too easily influenced by a few big, often out-of-state, monied interests.
Murders and robbery/theft/vandalism of businesses tend to be pretty fucking static in terms of the percentage of crimes reported to the police, regardless of police presence. This is because bodies tend to turn up unless specifically executed by a criminal organization willing to go to great pains to hide said bodies, and businesses, unlike individuals, are unlikely to suffer repercussions from criminals for reporting said crimes.
You have to be careful though. Does the neighborhood have a high crime rate because there are more criminals or because more crimes are detected because it gets more patrols?
If you are talking drug use, for example, then more policing definitely means higher crime statistics. The only crime with highly reliable data that can be compared between regions is homicide. Murders are the only crime with anything like a 100% reporting rate, little ambiguity in definitions, and thorough investigation. Do those neighbourhoods have a higher homicide rate?
I can't find neighborhood data, but it seems blacks in Utah are more than ten times more likely to be victims of homicide than whites or
Does the neighborhood have a high crime rate because there are more criminals or because more crimes are detected because it gets more patrols?
That's easy: cross-reference it with how often crimes show up in other statistics. Whether the police arrest someone or not, a murder is still going to put a dead body in the morgue. Fire marshals are still going to report arson. Insurance companies are still going to pay for car thefts and property damage. Even if you had zero data from the police, you could still track lots of criminal activity.
The problem isn't a lack of reliable data. We do have fairly reliable metrics. We just don't like what they tell us.
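The cross-referencing idea above can be sketched as a coverage ratio: compare police figures against an independent record (morgue intakes, fire marshal reports, insurance claims) assumed to be near-complete for certain crime types. All counts here are invented for illustration.

```python
# Hypothetical counts for one area over one year (invented numbers).
police_recorded = {"homicide": 48, "arson": 20, "vehicle_theft": 300}
independent     = {"homicide": 50, "arson": 35, "vehicle_theft": 500}

# Fraction of independently known incidents that appear in police data.
coverage = {crime: police_recorded[crime] / independent[crime]
            for crime in police_recorded}
```

In this sketch homicide coverage is near-complete, while arson and vehicle theft are undercounted by a measurable factor, which is exactly why homicide is the usual benchmark when comparing regions.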
As a white guy in my early 40s, if I get pulled over, I just worry about a ticket. So when I get pulled over I am calm, polite and rational (I usually stop myself from instinctively saying thank you after he gives me a ticket). If you are a black guy and you get pulled over, they may arrest you and pull you in for questioning because you "match" the profile of another criminal, or worse. So when you are pulled over you are more anxious and scared, which then makes the police even more alert.
So these populations may get more patrols, where it is easier to get caught. Also, the population is scared of the police, so they try really hard to solve problems without them. These areas get a bad reputation, so it is more difficult to find a job or start a business there, so people are impoverished and may need to stretch laws in order to survive. Which increases their chance of getting caught and creates a vicious circle.
The problem with racism today isn't like the 1980's "very special episode" where there is that guy who just hates people because of the color of their skin. It's a very complex set of conditions where a bias from experience feeds back on itself to increase the bias.
More patrols don't increase a crime rate. If anything, they decrease it by making criminals more wary of committing a crime. Crime rates (we're talking violent and property crime here) are almost exclusively a result of VICTIMS reporting crime to the police. Trying to dismiss crime rates as a racist bias completely dismisses the people who live in these communities that are victimized by the criminals in these communities.
AI is only as biased as the data you train it with.
Computers cannot be racist if you don't program/train them to be racist.
The problem is the people training them don't know they're programming/training them with biased data. So the output of the computer is racist, whether you intended it or not.
The problem is your data may be filled with biases already, so it learns to follow the trend of the data it is given.
If you want to train an AI to, say, find the best job for a candidate, and you populate it with data of equal weight from 1900 to today, it will probably take the person's gender and race into account as factors and apply them to the stereotypical jobs those groups performed in the past. Vs. realizing that the data is biased, and adjusting the factor weights to ignore such information, to help
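A minimal sketch of how a learner absorbs bias from historical data, assuming a toy hiring dataset (all records invented) and the simplest possible "model", one that memorizes the majority outcome per group:

```python
from collections import Counter, defaultdict

# Invented historical records: (gender, job actually assigned). The
# correlation exists only because of historical bias in the data.
history = [
    ("female", "nurse"), ("female", "nurse"), ("female", "engineer"),
    ("male", "engineer"), ("male", "engineer"), ("male", "nurse"),
    ("male", "engineer"), ("female", "nurse"),
]

# Tally outcomes per group -- the crudest possible form of "training".
by_group = defaultdict(Counter)
for gender, job in history:
    by_group[gender][job] += 1

def predict(gender):
    """Recommend the historically most common job for the group."""
    return by_group[gender].most_common(1)[0][0]
```

Nobody wrote a racist or sexist rule here; the model simply reproduces the skew in its training data, recommending jobs along historical gender lines. Real models with many features do the same thing, just less visibly, including via proxy features that merely correlate with the protected one.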