hosted services

I was recently having a conversation with a colleague at work about how easy the NSA/GCHQ have it. Thinking about encryption, life is always easier if you know what the underlying data is going to be. Say someone was sending data to a third party and kept both the original and the encrypted copy side by side, and then the computer gets confiscated: recovering the key becomes a classic known-plaintext attack, far easier than working from ciphertext alone.

Imagine, then, that almost all SSL/GPG streams are gzipped. Yep, that's right: gzip has a repeating header. The argument for using gzip is that the underlying data is harder to locate, since its contents are pseudo-random in distribution.

The file header will have the first three bytes as 0x1f 0x8b 0x08: the two gzip magic bytes followed by the deflate method byte. The flags byte that follows will no doubt be predictable too, based on the way browsers behave; less so for GPG.
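
You can see this for yourself with a minimal Python sketch (the payloads are made up; any input will do):

    import gzip

    # Two unrelated payloads; both streams start with the same bytes:
    # 0x1f 0x8b (gzip magic), 0x08 (deflate), then a flags byte of 0x00.
    for payload in (b"GET / HTTP/1.1\r\n", b"completely unrelated data"):
        stream = gzip.compress(payload)
        print(stream[:4].hex(" "))  # "1f 8b 08 00" both times

Whatever the plaintext, those leading bytes hand an attacker a known chunk of every compressed stream.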

However, things get even more interesting with HTTP data. Almost all SSL traffic will begin with protocol headers, so GET/POST/HEAD/DELETE, Host:, Accept:, Expires:, ETag:, Cache-Control: are all there to be seen. What's worse is that you're damned if you do and damned if you don't: the source data is known either way, so compressing it doesn't help, and you're likely to end up worse off.
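
The "worse off" part is real: this is essentially the observation behind the CRIME attack on TLS compression. A rough Python sketch (the cookie name and value here are invented) shows how the compressed length alone can leak whether a guess matches a secret sharing the stream:

    import zlib

    # Hypothetical secret riding in the same stream as attacker input.
    secret_header = b"Cookie: session=hunter2"

    def request_size(guess: bytes) -> int:
        # Attacker-controlled query string compressed alongside the secret.
        request = b"GET /?q=" + guess + b" HTTP/1.1\r\n" + secret_header + b"\r\n\r\n"
        return len(zlib.compress(request))

    # A correct guess is absorbed by an LZ77 back-reference, so the
    # compressed request comes out measurably shorter than a near miss.
    print(request_size(b"Cookie: session=hunter2"))  # shorter
    print(request_size(b"Cookie: session=zzzzzzz"))  # longer

Repeat that comparison byte by byte and the secret falls out, no key recovery required.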

So, what about going to Amazon and getting the home page, or to Google and doing a search? I'm sure things like favicon.ico will be requested along the way, and with them comes some nice known plaintext to work with.
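
To put a number on it, here's an illustrative favicon request (the header set is a guess at what a typical browser sends, not a capture); nearly every byte of it is predictable before the connection is even opened:

    known_plaintext = (
        b"GET /favicon.ico HTTP/1.1\r\n"
        b"Host: www.amazon.com\r\n"
        b"Accept: image/*\r\n"
        b"Connection: keep-alive\r\n"
        b"\r\n"
    )
    print(len(known_plaintext), "bytes of known plaintext")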

I can't claim to be an encryption specialist, but it all sounds rather fishy to me, and I hope this has provoked some thought.