UK To Use "Risk-Profiling Software" To Screen All Airline Passengers and Cargo
dryriver writes "The BBC reports: 'The UK branch of an American company — SAS Software — has developed a hi-tech software program it believes can help detect and prevent potentially dangerous passengers and cargo entering the UK using the technique known as 'risk profiling.' So, what exactly is risk profiling and can it really reduce the risk of international terrorism? Risk profiling is a controversial topic. It means identifying a person or group of people who are more likely to act in a certain way than the rest of the population, based on an analysis of their background and past behavior — which of course requires the collection of certain data on people's background and behavior to begin with. When it comes to airline security, some believe this makes perfect sense. Others, though, say this smacks of prejudice and would inevitably lead to unacceptable racial or religious profiling — singling out someone because, say, they happen to be Muslim, or born in Yemen. The company making the Risk-Profiling Software in question, of course, strongly denies that the software would single people out using factors like race, religion or country of origin. It says that the program works by feeding in data about passengers or cargo, including the Advanced Passenger Information (API) that airlines heading to Britain are obliged to send to the UK Border Agency (UKBA) at 'wheels up' — the exact moment the aircraft lifts off from the airport of departure. Additional information could include a combination of factors, like whether the passenger paid for their ticket in cash, or if they have ever been on a watch list or have recently spent time in a country with a known security problem. The data is then analyzed to produce a schematic read-out for immigration officials that shows the risk profile for every single passenger on an incoming flight, seat by seat, high risk to low risk.'"
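To make the article's description concrete: "feed in factors, weight them, and rank every seat from high risk to low risk" amounts to a weighted scoring function. Here's a minimal toy sketch of that idea — the factor names and weights are entirely hypothetical (the actual SAS model is proprietary), but the shape of the computation matches what the article describes.

```python
# Toy factor-based risk scorer. The factors and weights below are
# invented for illustration; they are NOT the real SAS model.
RISK_WEIGHTS = {
    "paid_cash": 2.0,
    "on_watch_list": 5.0,
    "recent_travel_flagged_country": 3.0,
}

def risk_score(passenger: dict) -> float:
    """Sum the weights of whichever risk factors a passenger triggers."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if passenger.get(factor))

# A tiny manifest, keyed by seat as in the article's "seat by seat" read-out.
manifest = [
    {"seat": "12A", "paid_cash": True},
    {"seat": "12B", "paid_cash": True, "on_watch_list": True},
    {"seat": "12C"},
]

# Rank the read-out from high risk to low risk.
for p in sorted(manifest, key=risk_score, reverse=True):
    print(p["seat"], risk_score(p))
```

Note that a scorer like this is only as good as its inputs: every factor in the table is something a determined adversary can observe and avoid, which is exactly the objection raised in the comments below.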
Still the same profiling bullshit (Score:4, Interesting)
Now seriously! I'm tired of seeing the same bullshit again and again, just with different icing, to justify what's flawed from the very start. This shows that the people making these decisions are tied to their own irrational feelings and not paying attention to what science tells them. [schneier.com]
I once read a scientific paper which recommends, if I remember correctly, randomly selecting 8% of the passengers for extended verification. This procedure has the advantage of transmitting zero information to the bad guys. If you start profiling, you give them a chance to test the system.
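The key property of that paper's scheme — selection that ignores every attribute of the passenger — is trivially easy to implement. A minimal sketch (the 8% rate is the figure remembered above; nothing else is from the paper):

```python
import random

def select_for_screening(passengers, rate=0.08, rng=None):
    """Pick roughly `rate` of passengers uniformly at random.

    Because selection depends on nothing about the passenger, probing
    the system teaches an adversary nothing about how to avoid it.
    """
    rng = rng or random.Random()
    return [p for p in passengers if rng.random() < rate]

# Seeded only so the demo is repeatable.
flagged = select_for_screening(range(1000), rate=0.08, rng=random.Random(42))
print(len(flagged))  # roughly 80 out of 1,000
```

Contrast this with any profiled scheme, where each pass through security reveals one bit about whether your current combination of attributes triggers a check.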
Re:It Believes (Score:5, Interesting)
All risk-profiling does is make you *think* you're more likely to catch certain people. In fact, what it does is provide a list of constraints that those people will actively avoid triggering and, thus, stand much less chance of being caught. Do you really think any terrorist is thinking of using a liquid bomb since the liquid-size limitation rules came in force? No, they'd do something else just to avoid detection.
The *only* alternative, if you don't want to check everyone manually every time, is entirely random security checks. Stick your guys on the frontline to catch anything "funny", but flag 1 in 10 people who go through completely at random, and make it a condition of their employment that your security guys must check those people, young or old, rich or poor, first class or economy, in a wheelchair, with a false leg or completely healthy, no excuses.
All this does is catch the stupid terrorists who would be caught anyway, while giving the sensible ones a perfect opportunity to knowingly and predictably reduce their risk by huge amounts.
What risk category are you going to enter? Travelled to dodgy countries recently? A stopover somewhere neutral for a while will soon age that flag out of relevance, or they'll just use a local rather than a foreigner. Age range? That's just buying into the "children / old people can't be terrorists" mentality, which is a stupid place to go. Race? Religion? Credit card history? All of the people you would catch with criteria like that should be caught ANYWAY by decent security in the first place. All the rest, the ones you miss, will be deliberately missed by profiled screening.
At least with random screening you stand a chance of catching someone that's avoiding your profiling, and a chance of spotting new trends ("Here, John, isn't that the third guy we've stopped who's had a little vial in his bag?"), and a chance of actually scaring off terrorists / smugglers / etc. from trying in the first place.
But all this is moot while you only enforce a decade-old security policy based on a single (unsuccessful) incident, rather than thinking about what's actually likely to be dangerous and what's not.
I can't take 100ml of water in a single bottle (but I can take more of "baby milk", so long as I drink from it first - and that check is as rigorous as security watching me put it to my lips and then looking away!), but I can take several bottles that won't be inspected.
I can also take large poles in a rucksack, and various amounts of improvised weapons, and hell, I know someone who went through Heathrow three times while carrying CS spray (which is illegal to possess in the UK, let alone on a plane). It wouldn't be hard to fashion an instrument from perfectly ordinary hand luggage capable of levering open the cabin door and threatening the pilot (and UK flights don't have armed pilots or armed officers on board), if that was your intention.
If you want security, automated profiling is like shouting "friend or foe?!". Nobody with any brains is ever going to shout foe (or be flagged by your profiling) if they have hostile intent.
Want to improve security? Scrap the enormous queues at every major UK airport: drop all the stupid hand luggage restrictions (obviously keep things like "explosives" on the list, though!) and the rest of the theatre (grab a tray, take off your belt, your shoes, put your laptop separately in here, etc.). With all the time you save, your security people can have a ten-second chat with each passenger as they go through the gates, rather than dumbly standing there "checking" passports (which is basically a "computer says no" exercise) or having 4-5 of them wave you through the metal detector while they chat among themselves.
Let them stop anyone they like and send them to a private queue for a proper pat-down (out of the main queue, away from accomplices, not backing up the frontline guys), and also have automated gates that send 1-in-10 or 1-in-50, or whatever ratio, of people that way completely at random.
I could have worked for one of these outfits (Score:5, Interesting)
A couple of years ago, rather naively, I went for an interview at one of these companies. Their product wasn't described as profiling, surveillance or monitoring but as "adaptive security". After I finally cut through all the bullshit and worked out what they were actually selling, I bailed on it (with a proverbial "fuck you, Stasi bastards", and the loss of a recruitment agent). However, I couldn't help noticing one thing:
The management staff were utterly convinced that this was the best way to go and that the entire world's problems were going to be solved by profiling in this way. I'm not talking about it being the marketing pitch, but actually some kind of crazy psychopathic paranoia about their own mortality at the hands of terrorists. I cannot fathom how these guys operate with this mindset at all. It was rather shocking, actually, and has permanently destroyed my acceptance of capitalism. It was literally like OCP or the Weyland corporation were real for a few minutes.
Someone needs to legislate this out of existence because we're fucked if society ends up at the hands of nutjobs like them.
Great idea! (Score:5, Interesting)
Additional information could include a combination of factors, like whether the passenger paid for their ticket in cash, or if they have ever been on a watch list
Great idea, that way anybody that has ever been put on a watch list can be harassed for ever! Not because a court of law determined they did anything wrong, no, but because they're on a list (or have been on one). You see, they probably did something wrong or else they wouldn't have been on that list in the first place...
Never mind the fact that this is all done in secret, with no judicial oversight, no accountability and no way to appeal those decisions and that people basically end up on those lists for exercising their political rights.
Try working as a journalist/filmmaker and reporting on the global war on terror, try actively opposing the US drone war or try supporting wikileaks (or any organization that the US has secretly decided they do not like) and see how quickly you end up on those watch lists.
Of course, you'll never know you're on one of those lists until the next time you try flying to the US; then you'll be detained and questioned (not to mention laptop seizure, etc.). It happened many times to Jacob Appelbaum [wikipedia.org], a Tor developer; it happened to Imran Khan [guardian.co.uk], one of the most popular politicians in Pakistan; and it happened repeatedly to Laura Poitras [salon.com], an Oscar- and Emmy-nominated filmmaker. These people are spied on and harassed because of their political opinions, thanks to the global surveillance state we now live in.
How submissive have we become that as people living in democracies we even accept the existence of "watchlists"?
Wtf has capitalism got to do with it? (Score:2, Interesting)
"has permanently destroyed my acceptance of capitalism"
Since when did an economic model have any bearing on what security software a company develops? You think the Russians are just sitting around writing screensavers full of fluffy bunny rabbits?
"I bailed on it (with a proverbial "fuck you stasi bastards" "
Very mature.
Jeez, I've never read such a lot of lefty student tosh in all my life.
Re:It Believes (Score:5, Interesting)
Credit risk profiling is part of my job, and these models do indeed work. Unfortunately, they need large sample sizes to be effective. Unless the UKBA has intercepted more than 1,000 terrorists about to jump on a plane, I'd be very sceptical indeed.
Another big concern is that these models all assume the future looks like the past. Feeding the model data on Islamic terrorists isn't likely to help you detect extreme-right nationalist groups, for example. As conflict moves around the world, there's a real risk that the model flags last year's terrorist-turned-Nobel-Peace-Prize-winner and completely ignores the perpetrator of next year's atrocity.
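The sample-size worry can be made concrete with a bit of Bayes' theorem. The numbers below are made up but deliberately generous to the model: even a classifier that is 99% sensitive and has only a 1% false-positive rate drowns in false alarms when the thing it's hunting for is as rare as a terrorist among airline passengers.

```python
# Base-rate arithmetic with hypothetical but generous numbers.
sensitivity = 0.99          # P(flagged | terrorist)
false_positive_rate = 0.01  # P(flagged | innocent)
base_rate = 1e-7            # P(terrorist), roughly 1 in 10 million passengers

# Bayes' theorem: P(terrorist | flagged)
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_terrorist_given_flag = sensitivity * base_rate / p_flagged
print(f"{p_terrorist_given_flag:.6%}")  # about 0.001%: ~1 real hit per 100,000 flags
```

This is the base-rate fallacy: with so few positive cases to train or validate on, a profiling model's flags are almost all false positives, which is exactly why a sample of fewer than ~1,000 intercepted terrorists can't support these claims.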