There are some technical topics of interest still in the NSA/Snowden/PRISM fuss. As we know, at least two NSA programs have been publicized recently: one in which they collect the customer billing records (call detail records) for every phone call from most, if not all, cell phone service providers, and a second, under the overall name PRISM, in which Internet traffic is collected at various large providers like Google and Facebook.
This is becoming a really fertile place for finding people in politics and journalism who really don’t have the beginning of a clue about the issues, technical or social, that go with what they’re attempting to discuss. So, once more into the breach, dear friends, as we try to make sense of what PRISM is really about.
The various releases about PRISM in particular give a picture of an interesting system in which the different kinds of data collected, like the famous customer billing records from Verizon, along with data from Google, Facebook, and a raft of others, are sent through several layers of systems that organize and make sense of them. If you recall my first NSA article, these are the tools of the expert jigsaw-puzzlers, trying to make a whole bearer bond out of the scraps of bank paper that were extracted from the vacuum bags of the collectors.
The first step is that someone of interest is identified. According to an article in the Washington Post, there are about 120,000 targets of interest, which, honestly, doesn’t seem like a surprising number, although there are the usual outbreaks of fainting spells about it. Those targets are described in some fashion (I’m sure the details are considered very sensitive, but you know it’s a bunch of rules like “has many contacts in Yemen” and “some of the Yemeni contacts are people we’ve identified with Islamist leanings”), and those rules are put into a first layer of the system, called PRINTAURA. But look at the slide: the target selectors are put into this Unified Targeting Tool, and then go through one of two review processes depending on whether it’s direct surveillance or stored data like the CDR cell phone records. So it appears that, effectively, the FISA-court review happens when they pick the selectors.
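If it helps to see what that kind of rule-based selection looks like in the abstract, here is a toy sketch. To be clear, this is my own illustration, not anything from the slides: every rule, field name, and record in it is made up.

```python
# Purely illustrative: a "selector" as a predicate over a call record.
# All field names, rules, and values here are hypothetical, not from the slides.
from dataclasses import dataclass

@dataclass
class CallRecord:
    caller: str
    callee: str
    callee_country: str
    flagged_contact: bool   # callee previously identified as of interest

def yemen_contact(rec: CallRecord) -> bool:
    return rec.callee_country == "YE"

def flagged_islamist_contact(rec: CallRecord) -> bool:
    return rec.callee_country == "YE" and rec.flagged_contact

SELECTORS = [yemen_contact, flagged_islamist_contact]

def matches_any_selector(rec: CallRecord) -> bool:
    # A record gets pulled out of the pile only if some selector fires.
    return any(sel(rec) for sel in SELECTORS)

# A record like this would match, so it would get pulled out downstream.
rec = CallRecord(caller="+1-555-0100", callee="+967-1-5550000",
                 callee_country="YE", flagged_contact=True)
assert matches_any_selector(rec)
```

The point is simply that the selectors are automated rules applied to the whole pile of records; nobody reads through the pile by hand.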
Coincidentally, this appears to be exactly what they’ve been telling Congress, including the original statements Clapper made.
Now, that depends on the fiction I mentioned in the first article, that when you hoover up the original source material it’s “acquisition” and not “collection”; but if you grant that, it sounds like Clapper wasn’t actually lying about it. They don’t “look at” everyone’s information; they have an automated selection process that pulls out stuff connected to targets of interest.
These selectors, I’m sure, are tuned to avoid false negatives, which means they cast a wide net, so some information on U.S. persons gets collected too. This matches Clapper’s original statement that the NSA doesn’t “wittingly” collect any data at all on millions of Americans. Oh, they acquire it, but (again, see the slide) the arrows marked “collection” come after the targeting rules are applied; collection is the outcome of the targeting rules. Any data on U.S. persons who aren’t on the list of targets that does get collected comes in as a side effect, not by design.
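To spell out the acquire-versus-collect distinction in toy form (again, purely my own illustration; the fields, the selector rule, and the records are all made up):

```python
# Illustrative only: "acquired" is everything ingested; "collected" is only
# what the selectors match. All fields and the selector rule are hypothetical.

def collect(acquired_records, selectors):
    collected = [r for r in acquired_records if any(sel(r) for sel in selectors)]
    # Selectors tuned against false negatives are deliberately broad, so some
    # records of U.S. persons who are not themselves targets ride along.
    incidental = [r for r in collected if r["us_person"] and not r["is_target"]]
    return collected, incidental

acquired = [  # the whole haystack: acquired, but mostly never looked at
    {"id": 1, "callee_country": "YE", "us_person": False, "is_target": True},
    {"id": 2, "callee_country": "YE", "us_person": True,  "is_target": False},
    {"id": 3, "callee_country": "US", "us_person": True,  "is_target": False},
]
selectors = [lambda r: r["callee_country"] == "YE"]  # one broad, made-up rule

collected, incidental = collect(acquired, selectors)
# collected -> records 1 and 2; incidental -> record 2, the side-effect case
```

On this reading, record 3 is acquired but never collected, and record 2 is exactly the side-effect case that “not wittingly” is covering.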
This information is further parsed and passed around through a bunch of other systems with the usual collection of sort-of-nerdy names; NSA is the home of the nerd spies, while the people who fancy themselves James Bond work for CIA. (Those who know can get some more information from those slides that I’m not going to describe. I really do wish the slides were being redacted a bit better. But notice that black blob in the middle: something was redacted out of the slide. I wonder how it is that Snowden, with his infinite access, got redacted slides?) It ends up on the right side of the diagram, going into areas where, no doubt, the data is made available to the analysts at CIA and FBI.