... as an FBI whistleblower and witness for several US official inquiries into 9/11 intelligence failures, I fear that terrorists will succeed in carrying out future attacks – not despite the massive collect-it-all, dragnet approach to intelligence implemented since 9/11, but because of it. This approach has made terrorist activity more difficult to spot and prevent.
Almost no one now remembers the typical response of counter-terrorism agency officials when asked why, in the spring and summer of 2001 in the lead-up to 9/11, they had failed to read and share intelligence or take action when “the system was blinking red” (the actual title of chapter eight of the US 9/11 commission’s report) and when the US director of central intelligence and other counter-terrorism chiefs were said to have had “their hair on fire”.
The common refrain back then was that, pre 9/11, intelligence had been flowing so fast and furiously, it was like a fire hose, “and you can’t get a sip from a fire hose”. Intelligence such as the Phoenix memo – which warned in July 2001 that terrorist suspects had been in flight schools and urgently requested further investigation – went unread.
Although “can’t get a sip” was a somewhat honest excuse, it was undercut when the Bush administration, days after the attacks, secretly launched its illegal “President’s Surveillance Program”, multiplying by a factor of thousands the collection of communications of innocent American citizens, as well as those of billions of people around the globe.
So the “fire hose” turned into a tsunami of non-relevant data, flooding databases and watch lists. The CIA had only about 16 names on its terrorist watch list back in September 2001, and probably most were justified, but there’s no way the million names reportedly now on the Terrorist Identities Datamart Environment list can all be accurate. The decision to elevate quantity over quality did nothing to increase accuracy, unblock intelligence stovepipes or prevent terrorist attacks.
In fact, years ago a study commissioned by Homeland Security and conducted by the National Academy of Sciences found that no existing computer program was able to distinguish the real terrorists – those who would go on to commit violent acts – from all the “false positives”.
This was corroborated when NSA director Keith Alexander and others, under great pressure to justify their (illegal) “bulk” collection of metadata, pressed underlings to produce 54 examples proving that “total information awareness”-style collection “worked” to identify and stop real terrorism. The proffered NSA examples fell apart under scrutiny, leaving only one flimsy case: a taxi driver in San Diego who had donated a few thousand dollars to al-Shabaab-connected Somalis.
Governments rely on costly “security theatre”: investing in countermeasures that provide the feeling of improved security while doing little or nothing to actually achieve it. It seems to do more to dupe fearful taxpayers into believing that massive, unwieldy “intelligence” systems will protect them than to intimidate would-be attackers or reduce recruitment by terrorist organisations.
After Edward Snowden described just how massive and irrelevant the US and UK monitoring had become, people started to grasp the significance of the saying: “If you’re looking for a needle in a haystack, how does it help to add hay?”
Fearful citizens may not realise how difficult sheer volume makes it to search and analyse content. They want to believe in the magic of data-mining to somehow predict future criminal behaviour. If only more contractors are hired and more money is spent to increase monitoring, if only laws can be passed forcing internet companies to constantly surveil every post and kitten image, coded and uncoded, in a multitude of languages, for signs of danger, the Orwellian argument goes, we will find the enemies.
But the real purpose of the egregiously stupid push to assign Facebook the fool’s errand of monitoring everything seems to be to spread the blame. Leaving aside the privacy implications, what people need to grasp is that this kind of security thinking doesn’t just fail to protect us; it makes us less safe.