Enigmatic Code

Programming Enigma Puzzles

Bit Twiddling

Every so often I click on the Random Post link at the top right of the site, and have another look at a previously published problem. And sometimes I’ll think about attacking a problem in a different way than my original solution, and post an additional comment.

I recently revisited Enigma 69: Maximum Queen moves to see if I could think of a better algorithm to enable me to expand the analysis beyond a 5×5 board. As it happened I didn’t come up with a better algorithm, but I did refine the one I’d previously published to be more efficient. By eliminating symmetrically equivalent positions, using bit vectors to represent the board, and pre-computing moves I was able to get my program running about 150 times faster than my simple implementation. While this isn’t enough to rescue the pathologically slow nature of the algorithm as the size of the board increases, it does bring the solution for the 6×6 board into a feasible time frame.

After 14 hours of CPU time (less actual time, as we can run multiple copies of the program to consider different numbers of Queens) I was able to confirm that 128 moves was indeed maximal for the 6×6 board, with the Queens arranged in a “border” configuration (and I was able to do some analysis, but not an exhaustive treatment, of the 7×7, 8×8 and 9×9 boards).

As the board was now represented by a bit vector, I was looking for an efficient way to generate all k-bit patterns in an n-bit vector. And that was when I stumbled on the Bit Twiddling Hacks page. Right down at the end of the page is a section called “Compute the lexicographically next bit permutation”.

The page is primarily concerned with C (or C++) code, but the implementation in Python is as follows.

Suppose we start with the variable v holding an integer with k bits set when represented in binary. Then the following code fragment computes w, the next integer larger than v that also has k bits set in its binary representation.

t = (v | (v - 1)) + 1
w = t | ((((t & -t) // (v & -v)) >> 1) - 1)
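To see what the two lines are doing, it may help to trace them on a small example (this walk-through is my own annotation, not part of the original hack):

```python
v = 0b01100                      # 12, which has two bits set
t = (v | (v - 1)) + 1            # v|(v-1) = 0b01111 fills in below the lowest set bit;
                                 # +1 carries, giving t = 0b10000: the lowest run of 1s
                                 # has moved up one place past its top
w = t | ((((t & -t) // (v & -v)) >> 1) - 1)
                                 # t&-t = 0b10000 isolates the lowest set bit of t,
                                 # v&-v = 0b00100 isolates the lowest set bit of v;
                                 # the quotient, shifted right and decremented, packs
                                 # the remaining 1s back down at the bottom
print(f"{w:05b} = {w}")          # 10001 = 17, the next integer with two bits set
```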

So, if we want to generate all 8-bit numbers with exactly 2 bits set, we can simply start with the smallest number with 2 bits set (11 in binary, which is 3 in decimal), and keep applying the code above until we exceed 255 (the largest 8-bit number).

v = 3
while v < 256:
  print(f"{v:08b} = {v}")
  t = (v | (v - 1)) + 1
  v = t | ((((t & -t) // (v & -v)) >> 1) - 1)

When run (under Python 3.6) we get the following result:

00000011 = 3
00000101 = 5
00000110 = 6
00001001 = 9
00001010 = 10
00001100 = 12
00010001 = 17
00010010 = 18
00010100 = 20
00011000 = 24
00100001 = 33
00100010 = 34
00100100 = 36
00101000 = 40
00110000 = 48
01000001 = 65
01000010 = 66
01000100 = 68
01001000 = 72
01010000 = 80
01100000 = 96
10000001 = 129
10000010 = 130
10000100 = 132
10001000 = 136
10010000 = 144
10100000 = 160
11000000 = 192

Which is a complete list of all 28 8-bit integers with exactly 2 bits set, in numerical order. And if we didn’t limit the numbers to 8-bits the program would continue generating integers with exactly 2 bits set indefinitely.

The bit_permutations() generator function is available in the enigma.py library, but if you are trying to speed up a program it will probably be faster to just use the code fragment inline.
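Wrapped up as a generator, the fragment might look something like the following (a sketch of the idea, not necessarily the enigma.py implementation):

```python
from itertools import islice

def bit_permutations(v):
    """Yield successive integers with the same number of set bits as v,
    in increasing order, starting from v. Runs indefinitely."""
    while True:
        yield v
        t = (v | (v - 1)) + 1                         # move the lowest run of 1s up
        v = t | ((((t & -t) // (v & -v)) >> 1) - 1)   # pack remaining 1s at the bottom

# the first few integers with exactly 2 bits set:
print(list(islice(bit_permutations(3), 6)))  # [3, 5, 6, 9, 10, 12]
```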

6 responses to “Bit Twiddling”

  1. Hugh Casement 20 May 2017 at 12:44 pm

    That’s interesting, Jim, but I don’t understand the notation.
    What do | and // and >> mean?
    Would an implementation in Basic (or Fortran or Algol or Pascal) be very long-winded?

    • Jim Randell 20 May 2017 at 1:51 pm

      Here’s a quick description of these operators.

In Python (and C) the | is the “bitwise or” operator. It operates on integers by considering their binary representation, producing the corresponding integer that has a 0 in a particular position if both corresponding positions are 0, otherwise the result has a 1 in that position. So:

      27 = 0b00011011
      12 = 0b00001100
           0b00011111 = 31 = 27 | 12

>> is the “bitshift right” operator. It moves all the bits one position to the right (0 comes into the highest bit and the units bit is lost). Shifting right by k bits is equivalent to integer division by 2^k.

      27 = 0b00011011
           0b00000110 = 6 = 27 >> 2

The << operator is “bitshift left”. All bits are moved one position left (0 comes into the units bit). Shifting left by k bits is equivalent to integer multiplication by 2^k.

      Finally the // operator (in Python) is simply integer division. The result is always an integer and any fractional part of the result is discarded:

      27 // 4 = 6

(One caveat in Python is that integer division always rounds towards negative infinity, which means when dividing negative numbers you get: (-27) // 4 = -7).

I grew up with BBC Basic, which has AND, OR and DIV. I don’t think it has bitshift operators, but we can use DIV 2 instead of >> 1.

      The following produces the correct output on the BBC Micro Emulator I use (BeebEm3):

      10  V = 3
      20  PRINT V
      30  T = (V OR (V - 1)) + 1
      40  V = T OR ((((T AND -T) DIV (V AND -V)) DIV 2) - 1)
      50  IF V < 256 THEN GOTO 20

      (This Wikipedia page has some more information on bitwise operations [ https://en.wikipedia.org/wiki/Bitwise_operation ]).

      • Hugh Casement 20 May 2017 at 2:39 pm

        Thanks a lot, Jim.
        After a while I more or less worked it out, but you beat me to it.
        I assume DIV is what I know as \.  Anyway, it seems to work.

        An even more useful algorithm would be to return the number of bits set in a given integer.  I suppose we could shift right successively and count what falls off the end.

        • arthurvause 20 May 2017 at 4:26 pm

The best method to count bits is to use Brian Kernighan’s method, described in the Bit Twiddling Hacks page that Jim references, and in the Python Bit Manipulation wiki:

def bitCount(int_type):
  count = 0
  while int_type:
    int_type &= int_type - 1  # clear the lowest set bit
    count += 1
  return count
          • Hugh Casement 21 May 2017 at 5:26 pm

            Thanks, Arthur.
            Certainly neater than my method!

          • Jim Randell 25 May 2017 at 3:12 pm

            Re: bit counting

            Kernighan’s method is neat. When I had to address the bit counting problem for work I went with the 256 entry look-up table for the lowest 8-bits, then shift the next 8 in, and so on. So for a 32-bit integer we would go through a loop 4 times with (mask, look up, add, shift) operations. I wonder how this would have compared for speed in general use with Kernighan’s method, which would go through a loop up to 32 times with (subtract, mask, increment) operations.
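The table-based method Jim describes can be sketched in Python as follows (an illustration of the approach, not his actual work code):

```python
# 256-entry look-up table: BITS[b] = number of set bits in the byte b
BITS = [bin(b).count("1") for b in range(256)]

def popcount32(v):
    """Count the set bits in a 32-bit integer, one byte at a time."""
    count = 0
    for _ in range(4):
        count += BITS[v & 0xFF]  # mask off and look up the lowest 8 bits
        v >>= 8                  # shift the next byte down
    return count

print(popcount32(0b10100001))  # 3
```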
