Talk:Key stretching

From Wikipedia, the free encyclopedia

Worth 72 bits?

Another way to think of it is that 65000 rounds in the loop means about 2^16 operations, which means the stronger key is "worth" about 16 bits more in key strength. If the weak key is a 56-bit "export key" then after key strengthening it is worth 56+16 = 72 bits.

This part seems just wrong, since the strength of a 56-bit key is only worth 56-bit when attacked with a brute force exhaustive search. If you use a dictionary attack then a real password usually isn't as strong as the derived key could potentially be (in this case 56-bit). Therefore it's not the output key that is strengthened but the input, which is much weaker.

In other words: you can't exceed the strength of the output key, only the weak input.

If you'd try to strengthen a totally random 56-bit key by repeatedly hashing it and then cutting it down to 56-bit again the cost of an attack wouldn't change, since you could just perform an exhaustive search on the output key instead of the input.

Thus I'll remove that paragraph.

--89.55.230.143 17:18, 7 March 2007 (UTC)[reply]

Well, you misunderstood that paragraph. You are right that "you can't exceed the strength of the output key, only the weak input". However, the output key is a "stronger key" and should be the full result of the key strengthening, that is, 128 bits or more. We should NOT cut the output key down to 56 bits after strengthening it. The export laws (thankfully) do not require that.
I put the paragraph back into the text and changed "it" in the sentence to say "the stronger key" to make the paragraph clearer. So the sentence is now: "If the weak key is a 56-bit "export key" then after key strengthening the stronger key is worth 56+16 = 72 bits."
We can extend the paragraph to explain that the output stronger key should NOT be cut down to 56 bits again but should be kept at 128 bits or more, if you find that necessary to make it understandable.
--David Göthberg 11:44, 4 April 2007 (UTC)[reply]

+1.5 years/bit

So far computer speed has doubled about once every 1.5 years. (See Moore's law.) This means that every 1.5 years one more bit of key strength becomes possible to crack, so the 16 extra bits of strength delay cracking by about 16×1.5 = 24 years.

That's true only for keys that take long periods to crack (years, not months).

Example 1:

Let's imagine an n-bit key can be cracked in 1 minute. Now, let's add 1 bit to the key: the (n+1)-bit key will be cracked in 2 minutes, not in 1.5 years + 1 minute.

Example 2:

Let an n-bit key be crackable in t years of current-computer work, with computation power growing 2× every 1.5 years. First, let's find how many "years" (in terms of current computers) will be "calculated" in t real years: the speed at real year y is 2^(y/1.5) times today's, so the work accumulated over t real years is (1.5/ln 2) × (2^(t/1.5) − 1) current-computer years.
For example, if a key can be cracked in 15 years with computer speed growing, current computers would need about (1.5/ln 2) × (2^10 − 1) ≈ 2214 years to crack it (impressive, isn't it?).
Now let's add 1 bit to the key. It requires 2× the calculations (but not the computation power!). Solving for the new real time gives roughly 2t when t is small, but only about t + 1.5 when t is large (about 16.5 years for the example above).
So, everything is correct, but only for longer periods. :)
lim 18:19, 31 August 2007 (UTC)[reply]
No, you have misunderstood what that paragraph means. Say you have an X-bit "weak key", and that your attacker with his supercomputer can crack that key in about 2 months. But the attacker only thinks the kind of target you are is worth about 1 month of supercomputer time. So you are too expensive for them to attack, and they will use their supercomputer for other targets (easier targets, or targets more worthwhile for them to attack). But in about 1.5 years they will have a twice as fast supercomputer, and then it will only take them about 1 month to crack your key. Then they think it is worth it and will crack your key. But if you have applied 2^16 key strengthening rounds to your key they will not be able to crack that "stronger key" even with their new supercomputer, since it would take them 2^16 months to crack with that supercomputer. (That is about 5461 years, mind you.) So they will instead have to wait 16 × 1.5 years = 24 years until they have bought a fast enough supercomputer to do the cracking in one month of supercomputer time. --David Göthberg 00:08, 1 September 2007 (UTC)[reply]
Ok, it was too simple for me to understand. :) Still, what I said is also true. — lim 11:26, 1 September 2007 (UTC)[reply]
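The crossover between "doubling the key doubles the time" and "+1.5 years per bit" can be checked numerically. The sketch below only illustrates the thread's assumptions (speed multiplying by 2^(y/1.5) at real year y, so the work finished by real year t is (1.5/ln 2)(2^(t/1.5) − 1) current-computer years); the function name real_years is an invented stand-in, and it simply inverts that formula:

```python
import math

DOUBLING = 1.5  # years per speed doubling (the thread's Moore's-law assumption)

def real_years(work):
    """Real calendar years needed to finish `work` current-computer-years,
    inverting: work = (DOUBLING / ln 2) * (2**(t / DOUBLING) - 1)."""
    return DOUBLING * math.log2(work * math.log(2) / DOUBLING + 1)

minute = 1 / (365.25 * 24 * 60)  # one minute, expressed in years

# Short jobs: one extra bit (double the work) roughly doubles the real time.
print(real_years(2 * minute) / real_years(minute))   # ~2.0
# Long jobs: one extra bit adds only about 1.5 real years.
print(real_years(2214), real_years(2 * 2214))        # ~15.0 and ~16.5
```

For jobs measured in minutes the extra bit doubles the wait; for jobs measured in decades it adds about 1.5 years, which is what both commenters concluded.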

Inaccurate statement about Moore's Law

The article states that "Moore's law asserts that computer speed doubles roughly every 1.5 years." This is incorrect. Moore predicted that transistor count would double every 2 years. David House, a colleague at Intel, further predicted that this meant performance would double every 1.5 years. Thus, it was a prediction, not an assertion. And it was House, not Moore who made the prediction. --2601:200:4500:5F7:55D3:43CF:BB91:F205 (talk) 17:32, 20 February 2020 (UTC)[reply]

Consistency

Hi, unless I'm missing something (and while this is somewhat trivial), the below excerpt (from the article)

 key = hash( password + salt )
 for 1 to 65000 do
   key = hash( salt + key )

should be changed to:

 key = hash( password + salt )
 for 1 to 65000 do
   key = hash( key + salt )

for concatenation consistency... again, maybe it's stupid... just my comment...

purpleidea (talk) 06:00, 27 December 2007 (UTC)[reply]
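For illustration, either variant of the loop above is easy to write out as runnable Python; SHA-256 and the function name stretch_key are arbitrary stand-ins for the article's unspecified hash():

```python
import hashlib

def stretch_key(password: bytes, salt: bytes, rounds: int = 65000) -> bytes:
    """Iterated salted hashing as in the example above, key-first order.
    SHA-256 stands in for the article's unspecified hash()."""
    key = hashlib.sha256(password + salt).digest()
    for _ in range(rounds):
        key = hashlib.sha256(key + salt).digest()
    return key

# Same inputs always give the same stretched key; changing the salt
# (or the password) gives an unrelated result.
k1 = stretch_key(b"hunter2", b"salt-a")
k2 = stretch_key(b"hunter2", b"salt-b")
print(k1 != k2)  # True
```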

First a disclaimer: I have studied computer security and encryption since 1988, but I am not a cryptanalyst.
As far as I can see it doesn't matter much in what order we hash together the key and the salt. Both ways prevent dictionary attacks and rainbow table attacks etc. However, we did take those examples from reports written by professional cryptographers (see the references). But this article and that code has been edited many times by many editors since then so I don't remember which order those reports used. But putting the key first like you did looks better together with the other examples so I don't feel the need to change it back.
--David Göthberg (talk) 12:03, 6 March 2008 (UTC)[reply]
A hash function can be resistant to random-prefix preimage attacks, random-suffix preimage attacks, both… or neither. A "good" cryptographic hash function is obviously supposed to be resistant to both. So I'd say the order shouldn't matter in practice, if the hash function is worth its salt (scnr). Aragorn2 (talk) 12:28, 27 May 2019 (UTC)[reply]

Name of the algorithm and is it used in practice?

Hi - can someone say if this algorithm of iterating salting and hashing as above has a name? Is it used in practice? —Preceding unsigned comment added by 79.166.67.127 (talk) 23:22, 24 February 2009 (UTC)[reply]

No, I have never heard a specific name for it. I guess you could call it something like: "key strengthening/stretching with the salt added in all rounds".
But there are some standardised key strengthening variants, such as PBKDF2, specified in PKCS #5 / RFC 2898, and the names of those standards are often used instead of the name "key strengthening/stretching". I took a look at RFC 2898 and it seems that PBKDF2 just adds the salt once before all the rounds.
And yes, key strengthening is used in a lot of systems. For instance TLS/SSL uses it. And that is the standard used in all modern web browsers for encrypted connections. That is, when the web page address starts with "https:" and you get the padlock in the bottom of the browser window. But I don't remember if TLS/SSL adds the salt in each round or just once before all the rounds.
--David Göthberg (talk) 17:15, 26 February 2009 (UTC)[reply]
Where does TLS use passphrases? Or does it use key strengthening for generated keys? (What's the point?) -- intgr [talk] 08:23, 27 February 2009 (UTC)[reply]
intgr: Hehe, I expected someone to ask that. Remember, key strengthening can be used on any kind of key data, not only passwords/passphrases.
TLS/SSL doesn't usually use passphrases to produce the shared secret for a session, although there is an extra standard for that too.
But TLS does use key strengthening in every session. Here's a somewhat simplified description of what TLS does. It has been some years since I studied the TLS standard and source code, but as far as I remember:
1: When TLS connects it sends a session IV in each direction. That is, both the client and the server produce a big random value that they send to the other end. (I think I remember that the IVs are 96 bits each.) This is needed to stop replay attacks.
2: They also do a "key negotiation" usually using certificates and public-key cryptography such as RSA to create a shared secret. Depending on what kind of key negotiation is used, the shared secret can become the same in each session between the same two nodes, since the same public keys are used each time.
3: Then both ends concatenate the shared secret and both the session IVs. This is the "shared session secret", which thanks to the IVs is unique to this session and thus stops replay attacks. Actually, they make two session secrets, one for each direction of communication, by also adding a small value that indicates the direction.
4: Then TLS applies key strengthening on the two shared session secrets to make them stronger. This might seem unnecessary if the key negotiation used strong enough public keys, but in cryptography we usually use the "belt-and-suspenders" approach, since you never know how much an attacker can crack the algorithms you use. Also, this mixes the secrets very well with the session IVs and the direction markers. Thus making the two secrets "sufficiently independent" from each other and from other session secrets (from other sessions). This means that the session IVs + direction markers together can be seen as the "salts" for the key strengthening.
5: Then the two strengthened shared session secrets are fed as seeds to two instances of a cryptographically secure pseudo-random number generator (CSPRNG). One for each direction. Those CSPRNGs are then used during the session to produce all the keys, IVs and MAC keys needed for encrypting and MACing data blocks. That is, TLS uses a new encryption key, block IV and MAC key for each data block it sends.
6: Here's some extra technical details for the crypto geeks: Since the block IVs are produced by the CSPRNGs the IVs don't need to be sent on the wire. Thus saving bandwidth. And using fresh secret IVs for each block is pretty nifty, since that makes the encryption stronger than just the key length of the cipher. And using fresh keys and IVs for each block means that if an attacker can crack your cipher, then he still only gets the keys for one block. And then he also has to crack the CSPRNG to get the keys for the other blocks, or crack every block itself.
Actually, step 4 and 5 are partly intertwined but I'll spare you the details of that. Anyway, this means TLS uses state of the art to produce its session keys. (But it doesn't mean that TLS is state of the art in other aspects: It doesn't use "full stealth" and it only uses one layer of encryption. But that is another story.)
--David Göthberg (talk) 10:21, 27 February 2009 (UTC)[reply]
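A toy sketch of the pattern described in steps 3 to 5 (this only illustrates the scheme as described in the comment above, not actual TLS; SHA-256, the round count, and the hash-counter CSPRNG are invented stand-ins):

```python
import hashlib

def stretch(secret: bytes, salt: bytes, rounds: int = 2**16) -> bytes:
    # Step 4: key-strengthen the combined session secret.
    key = hashlib.sha256(secret + salt).digest()
    for _ in range(rounds):
        key = hashlib.sha256(salt + key).digest()
    return key

def keystream(shared_secret: bytes, iv_client: bytes, iv_server: bytes,
              direction: bytes):
    # Step 3: mix the negotiated secret with both session IVs and a
    # direction marker, so each direction gets its own session secret.
    salt = iv_client + iv_server + direction
    seed = stretch(shared_secret, salt)
    # Step 5: use the strengthened secret to seed a toy hash-counter
    # CSPRNG that yields fresh per-block key material.
    counter = 0
    while True:
        yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1

c2s = keystream(b"negotiated secret", b"civ", b"siv", b"C2S")
s2c = keystream(b"negotiated secret", b"civ", b"siv", b"S2C")
print(next(c2s) != next(s2c))  # True: the two directions diverge
```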
I can't find the key strengthening that you mention in the TLS standard. Can you give an explicit reference (e.g. a page number and/or the explicit name of the strengthened key)? The intermediate keys are long (e.g. pre_master_secret is 48 bytes), and key strengthening of secrets that are much longer than necessary doesn't make sense to me. There is some hashing involved, e.g. the computation of verify_data requires hashing all previously exchanged messages together. However, the purpose of this step is to prevent man-in-the-middle attacks by making sure that both parties have the same view of the protocol. 81.62.32.36 (talk) 11:04, 28 February 2009 (UTC)[reply]

Update

The article might need a minor update to reflect increases in computing power - it currently cites "today" as 2006. Socrates2008 (Talk) 11:49, 6 March 2008 (UTC)[reply]

References?

Socrates2008: You keep slapping {{nofootnotes}} and {{refimprove}} on this article. Why?

This article covers a method that is so simple that anyone with a basic understanding of cryptography understands how it works and why it is secure. We have two perfectly good references to papers that cover the content of the article, papers written by some of the most respected cryptographers there are.

Do you mean that we should reference every paragraph in the article to the corresponding sections in those two papers? I haven't heard before that we should "deep link" references like that.

By the way, this is a wiki, you can edit it. So instead of just complaining, why don't you fix it yourself? And why do you do your complaints in the article instead of here on the talk page?

Or is it that you don't understand the method and why it is secure? If that is the case then I can set you up with some good crypto literature so you can read up on it.

--David Göthberg (talk) 11:49, 22 March 2008 (UTC)[reply]

Kindly read verifiability rather than having a go at other editors about the poor referencing in the article. Thank you. Socrates2008 (Talk) 03:21, 23 March 2008 (UTC)[reply]
So please specify: exactly what part of the article are you challenging? And is that part lacking from the two main references that we have put in this article? Or is it that you don't find the authors of those references (Bruce Schneier, David Wagner, etc.) to be reliable sources when it comes to cryptography? You know, they are among the most trusted cryptanalysts there are. (But I guess you don't know, right?)
--David Göthberg (talk) 04:09, 23 March 2008 (UTC)[reply]
No footnotes. Therefore not possible to determine which parts of the article are referenced and which parts not. Socrates2008 (Talk) 05:32, 23 March 2008 (UTC)[reply]
(Yes, I know this is a dead discussion. I just want to add this for the sake of other editors.)
I think that "not possible" is a bit strong, seeing as only three references are listed, each is available for free online, and they aren't whole books. Still, adding inline citations would certainly make it easier for a reader or editor to quickly check what's sourced, and it could become more important if more information is added from other sources. 68.123.111.95 (talk) 02:53, 12 July 2010 (UTC)[reply]

Is salting considered key strengthening?

In a recent edit, a new section called 'salting' was added to the article; however, does this actually constitute key strengthening? As far as I can tell, salting is primarily a defense against time-memory tradeoffs (like rainbow tables), not about slowing down individual tests.

This means that it doesn't comply with the definition given in the lead section, "[...] to make a weak key such as a password or passphrase stronger, i.e. more costly to test combinations through brute force or a dictionary attack. They operate by increasing the time it takes to test each possible key". Either the 'salting' section should be removed, or the lead section should be updated. -- intgr [talk] 14:37, 9 April 2008 (UTC)[reply]

Right, I wouldn't consider salting to be "key strengthening/stretching", although it does make the key stronger. As you pointed out, salting is to prevent rainbow tables and similar (prevent using stored data), and key strengthening is about causing more CPU work. But both should of course be combined, as we show in the second and third examples. And I agree, the salting section should go. It was fine as it was just before that, with salting in example two and three and a little explanation about salting with a link to the salt article.
By the way, I saw that the user that added the salting section also added a "Main article: hash chain". But those examples are not hash chains. Hash chains are just really bad CSPRNGs. The only thing they have in common with key strengthening is that they use a hash in a loop.
--David Göthberg (talk) 15:32, 9 April 2008 (UTC)[reply]
I did a copy edit to clean things up some. I gave salting a brief mention as a related technique. I also took out the stuff on export control. I'm not aware of key strengthening actually being used to defeat export key size limits, and I strongly suspect the export control agencies would simply refuse to license systems that used key strengthening, or further limit key size when it was used. --agr (talk) 16:42, 9 April 2008 (UTC)[reply]
Well, I used to work with building crypto software for export. So I have read the crypto export regulations here in the EU (which seem to be a direct copy of the Wassenaar Arrangement) and I have several times been in meetings with the export control authority here in my country (Sweden). You don't need a "license" to export systems that have a shorter key length than what the export control restrictions mandate. So it is not about "licensing", it is about following the rules. Pretty much all ciphers internally use some key setup and key expansion. The rules do not state anything about such internal key expansion, and the export authorities both in North America and the EU have always allowed export of ciphers like DES that internally used a larger key size than the export limit prescribed. (The limits were increased some years ago, so now even the internal key size in DES is below the limit, but the internal size of for instance AES is still above the limits; it may be used as long as it is fed a shorter key.)
But sure, if key strengthening became too common then the export control authorities would probably create new rules that limit it in some way.
--David Göthberg (talk) 17:54, 9 April 2008 (UTC)[reply]

Strength and Time Section Errors

This section has 2 errors.

First, it states that the slowest personal computers can do 65,000 guesses per second, but highly optimized hardware can only go 5 times as fast (300,000 keys per second)? This seems bogus.

Second, it seems to confuse multiplication and division in the paragraph, which is a serious error: "Such a design can try about 300,000 keys/second for the algorithm proposed above. It's worth noting that the key stretching still slows down the attacker in such a situation, i.e. a $5000 design attacking a straight SHA-1 hash would be able to try 300,000*2^16 = 20 billion keys/second." If the hardware can try 300,000 keys per second, but the key stretching is at 65000 (2^16), then an attacker can only try about 4.5 keys per second (300,000 DIVIDED-BY 2^16). Javaman411 (talk) 14:35, 1 August 2011 (UTC)[reply]

The 300,000 figure is for testing stretched keys. Unstretched keys could be tested around 2^16 times faster. It looks fine to me. Hiiiiiiiiiiiiiiiiiiiii (talk) 14:28, 19 October 2011 (UTC)[reply]
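The arithmetic behind both readings is easy to check; the numbers below are the ones from the section under discussion:

```python
rounds = 2**16                 # the article's ~65,000 stretching rounds
stretched_rate = 300_000       # keys/second the $5000 design tests *after* stretching

# The article's reading: the same hardware against a straight SHA-1 hash
# needs only one hash per key, so it runs 2^16 times faster.
unstretched_rate = stretched_rate * rounds
print(unstretched_rate)        # 19660800000, i.e. ~20 billion keys/second

# The division reading would apply only if 300,000 were the raw hash rate.
print(stretched_rate / rounds) # ~4.6 stretched keys/second
```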

PBKDF2 is not for generating a hash of a password

"Note that PBKDF2 is not for generating a hash of a password[dubious ], but an encryption key from a password. PBKDF2 can be used for both if the number of output bits is less than or equal to the internal hashing algorithm used in PBKDF2 which is usually SHA-1 (160 bits)."

The original purpose of PBKDF2 is to generate encryption keys, which is not the same as generating a password hash.

function pbkdf2_sha1($pw, $salt, $rounds, $outputBits)
{
    $ret = "";
    $loops = ceil($outputBits / 160);  // number of 160-bit SHA-1 blocks needed
    for ($loop = 0; $loop < $loops; $loop++)
    {
        // First iteration of the block: HMAC over salt || block index
        $hash = $hashOut = hash_hmac('sha1', $salt . pack('N', $loop), $pw, true);
        for ($a = 1; $a < $rounds; $a++)
        {
            // Remaining iterations: chain the HMAC and XOR into the output
            $hash = hash_hmac('sha1', $hash, $pw, true);
            for ($b = 0; $b < 20; $b++)
                $hashOut[$b] = chr(ord($hashOut[$b]) ^ ord($hash[$b]));
        }
        $ret = $ret . $hashOut;
    }
    return substr($ret, 0, ceil($outputBits / 8));
}

Let's say someone does "pbkdf2_sha1($pw, $salt, 1000, 256)". This means the server will need to do 2000 SHA1-HMACs, but an attacker only needs to compute "pbkdf2_sha1($pw, $salt, 1000, 160)" to test a password, which is only 1000 SHA1-HMACs. This is why PBKDF2 is not for generating a hash of a password... unless "the number of output bits is less than or equal to the internal hashing algorithm used in PBKDF2". An alternative is to use PBKDF2 to generate an encryption key and encrypt a known value, such as "$hash = mcrypt_ecb(MCRYPT_RIJNDAEL_256, pbkdf2_sha1($pw, $salt, 1000, 256), 'a known value', MCRYPT_ENCRYPT);"
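The block independence that makes this attack work is easy to demonstrate with a standard PBKDF2 implementation (Python's hashlib here; the password, salt, and round count are arbitrary): the first hash-length bytes of a longer derived key are exactly the shorter derived key, so an attacker testing guesses never needs more than the first block.

```python
import hashlib

pw, salt, rounds = b"password", b"salt", 1000

# 256-bit output: two SHA-1-sized blocks, so the defender pays for 2000 HMACs.
dk32 = hashlib.pbkdf2_hmac("sha1", pw, salt, rounds, dklen=32)
# 160-bit output: one block, only 1000 HMACs.
dk20 = hashlib.pbkdf2_hmac("sha1", pw, salt, rounds, dklen=20)

# PBKDF2 blocks are independent, so the long key begins with the short one:
print(dk32[:20] == dk20)  # True: the first block alone suffices to test a guess
```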

Someone should remove the dubious tag. I originally wrote that so I feel that I shouldn't remove the tag.

SteveT84 (talk) 03:08, 6 August 2011‎ (UTC)[reply]


I consider "hashing a password" to mean hashing used for password authentication, which is not necessarily clear, and I didn't realize this until I just ran across this article again. I've fixed it and removed the dubious tag.

SteveT84 (talk) 00:53, 13 August 2012 (UTC)[reply]

Hashing speed, 2017

I see in the comments above that people make assumptions about the numbers in the "Strength and time" section, like whether they mean "stretched" or "normal" hashes, etc.

I can't tell (though when I read "can do about 65000 SHA-1 hashes per second" it means to me that the computer computes that many hashes, not that it computes that many stretched values from keys), but here are some actual numbers for real hashing:

sha1: 68,000,000,000 hash/s
sha256: 23,000,000,000 hash/s
sha512: 8,600,000,000 hash/s
sha3: 6,500,000,000 hash/s
bcrypt(5): 105,700 hash/s

(for work factor 15 it's hash/sec)

sha256crypt: 3,100,000 hash/s
And with stretching:
pbkdf2-sha1 (1000 rounds): 26,000,000 hash/s
pbkdf2-sha256: 9,400,000 hash/s
pbkdf2-sha512: 3,400,000 hash/s
Also, one commenter mentioned that using future-telling skills we expect hashing speeds to improve 55% annually, which is exciting and scary at once. --grin 13:51, 10 October 2017 (UTC)[reply]

Not "vulnerable" to hardware hashing

The claim that hardware hashing somehow defeats key stretching is untrue. Granted, a dedicated hardware hasher is drastically cheaper than a software implementation on a general-purpose CPU, but those extra 16 bits of effective key length still increase the cost of attack 65536 times, whether you run on an ASIC or on a cluster of ZX Spectrums. You pay less per operation, but the ratio of the number of operations remains the same.

Memory-intensive hashes are great for increasing the cost of attack, but they still scale the same. -KiloByte (talk) 18:30, 18 May 2019 (UTC)[reply]
