Posted by: Eric Hansen
encryption, security, vulnerable
There’s a lot of talk saying you need at least 1024-bit keys for encryption to be worthwhile now, given the power of modern hardware. While I agree that the lower the bit strength, the easier a key CAN be to break, I don’t think there’s a fixed “standard” for what’s too weak or too strong, and here’s why.
Yes, a larger key (larger than 1024 bits, for the sake of this argument) means it will take longer for your key to be discovered by brute force (a 64-bit key is up to 20 decimal digits long, for example). But how does that ensure the encryption scheme is secure? You can use a 64-bit key or a 256-bit key, but if you use a basic repeating-key XOR scheme, it’s not going to make a difference one way or the other. The strength of the encryption depends not on the size of the key, but on the underlying mathematics. This is why simple XOR encryption is inherently weak, and why real encryption schemes rely on far more involved computations.
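To make the XOR point concrete, here’s a quick sketch (my own toy example, not from any real cipher): with repeating-key XOR, an attacker who knows or guesses any stretch of plaintext can recover the key instantly, and it makes no difference whether the key is 64 bits or 256 bits long.

```python
# Repeating-key XOR: key size alone buys nothing here.
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key (also decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"Attack at dawn. Attack at dawn."
key_64 = b"8bytekey"  # a 64-bit (8-byte) key

ciphertext = xor_encrypt(plaintext, key_64)

# Known-plaintext attack: XORing ciphertext with plaintext leaks the
# repeating key stream, so the key simply falls out -- regardless of
# whether it was 8 bytes or 32 bytes long.
recovered_key = xor_encrypt(ciphertext, plaintext)[: len(key_64)]
```

The same one-line attack works against any key length, which is exactly why the math of the scheme, not the key size, is what matters.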
For example, I recently wrote a basic version of the Diffie-Hellman key exchange. The biggest insight I gained from this is that while a larger key takes longer to generate and to compute the variables from, the encryption itself takes the same amount of time. Under the same scheme, you’re not gaining or losing anything by using an 8192-bit key instead of a 32-bit key, besides the length of time it takes to generate the key itself.
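For reference, a minimal Diffie-Hellman sketch looks like the following. This is my own illustration, not the author’s code, and it uses a deliberately tiny 32-bit prime so it runs instantly; real deployments use published primes of 2048 bits or more.

```python
import secrets

# Toy group parameters, for illustration only -- far too small to be secure.
p = 4294967291  # a 32-bit prime modulus
g = 2           # generator (toy choice)

# Each side picks a secret exponent and publishes g^x mod p.
alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1

alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Both sides derive the same shared secret without ever transmitting it:
alice_shared = pow(bob_pub, alice_priv, p)
bob_shared = pow(alice_pub, bob_priv, p)
```

Swapping in a huge prime only slows down the `pow` calls during the exchange; whatever symmetric cipher you feed the shared secret into runs at exactly the same speed either way.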
For I.T. security to become a richer field, we shouldn’t focus on the biggest keys to use, but on the power of the encryption we’re using the keys for. Look at the Vigenère cipher: simple in design, but when it’s run as a one-time pad (a truly random key as long as the message, never reused), it’s effectively impossible to crack.
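The Vigenère idea can be sketched in a few lines. This is my own illustration: each letter is shifted by the corresponding key letter, and using a fresh random key as long as the message turns it into one-time-pad usage.

```python
import secrets
import string

ALPHA = string.ascii_uppercase

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of text by the matching key letter (mod 26)."""
    sign = -1 if decrypt else 1
    return "".join(
        ALPHA[(ALPHA.index(c) + sign * ALPHA.index(key[i % len(key)])) % 26]
        for i, c in enumerate(text)
    )

msg = "ATTACKATDAWN"
# One-time-pad usage: a random key as long as the message, used once.
pad = "".join(secrets.choice(ALPHA) for _ in range(len(msg)))
ciphertext = vigenere(msg, pad)
```

The design itself is trivial; the security comes entirely from how the key is used. Reuse or repeat the pad and the scheme collapses, which neatly reinforces the point: the discipline of the scheme, not raw key size, is what protects you.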