Simon Tatham authored
Now we don't even bother with picking an mp_int base value and a small adjustment; we just generate a random mp_int, and if it's congruent to anything we want to avoid, throw it away and try again. This should cause us to select completely uniformly from the candidate values in the available range.

Previously, the delta system was introducing small skews at the start and end of the range (values very near there were less likely to turn up because they fell within the delta radius of a smaller set of base values).

I was worried about doing this because I thought it would be slower, because of having to do a big pile of 'reduce mp_int mod small thing' every time round the loop: the virtue of the delta system is that you can set up the residues of your base value once and then try several deltas using only normal-sized integer operations.

But now I look more closely, we were computing _all_ the residues of the base point every time round the loop (several thousand of them), whereas now we're very likely to be able to throw a candidate away after only two or three if it's divisible by one of the smallest primes, which are also the ones most likely to get in the way. So probably it's actually _faster_ than the old system (although, since uniformity was my main aim, I haven't timed it, only noticed that it seems to be fast _enough_).