
Programming Enigma Puzzles

20 May 2017

Every so often I click on the **Random Post** link at the top right of the site, and have another look at a previously published problem. Sometimes I’ll think about attacking a problem in a different way than my original solution, and post an additional comment.

I recently revisited **Enigma 69: Maximum Queen moves** to see if I could think of a better algorithm to enable me to expand the analysis beyond a 5×5 board. As it happened I didn’t come up with a better algorithm, but I did refine the one I’d previously published to be more efficient. By eliminating symmetrically equivalent positions, using bit vectors to represent the board, and pre-computing moves I was able to get my program running about 150 times faster than my simple implementation. While this isn’t enough to rescue the pathologically slow nature of the algorithm as the size of the board increases, it does bring the solution for the 6×6 board into a feasible time frame.

After 14 hours of CPU time (less actual time, as we can run multiple copies of the program to consider different numbers of Queens) I was able to confirm that 128 moves was indeed maximal for the 6×6 board, with the Queens arranged in a “border” configuration (and I was able to do some analysis, but not an exhaustive treatment, of the 7×7, 8×8 and 9×9 boards).
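To make the bit-vector board representation concrete, here is a minimal sketch of my own (an illustration, not the actual solver code) of treating an *n*×*n* board as a single integer, with one bit per square:

```python
# represent an n x n board as an integer: square (row, col) is bit row*n + col
def square(n, row, col):
    return 1 << (row * n + col)

n = 6
board = 0
board |= square(n, 0, 0)  # place a Queen in one corner
board |= square(n, 5, 5)  # and another in the opposite corner

print(bin(board).count("1"))          # number of Queens on the board
print(bool(board & square(n, 0, 0)))  # occupancy test for a square
```

With this representation, placing, removing and testing pieces are single bitwise operations, and a whole board position fits in one integer, which is what makes the pre-computation and symmetry tricks cheap.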

As the board was now represented by a bit vector, I was looking for an efficient way to generate all *k*-bit patterns in an *n*-bit vector. And that was when I stumbled on the **Bit Twiddling Hacks** page. Right down at the end of the page is a section called *“Compute the lexicographically next bit permutation”*.

The page is primarily concerned with C (or C++) code, but the implementation in *Python* is as follows.

Suppose we start with the variable `v` holding an integer with *k* bits set when represented in binary. Then the following code fragment computes `w`, the smallest integer greater than `v` that also has *k* bits set in its binary representation.

```python
t = (v | (v - 1)) + 1
w = t | ((((t & -t) // (v & -v)) >> 1) - 1)
```

So, if we want to generate all 8-bit numbers with exactly 2 bits set, we can simply start with the smallest number with 2 bits set (11 in binary, which is 3 in decimal), and keep applying the code above until we exceed 255 (the largest 8-bit number).

```python
v = 3
while v < 256:
    print(f"{v:08b} = {v}")
    t = (v | (v - 1)) + 1
    v = t | ((((t & -t) // (v & -v)) >> 1) - 1)
```

When run (under *Python* 3.6) we get the following result:

```
00000011 = 3
00000101 = 5
00000110 = 6
00001001 = 9
00001010 = 10
00001100 = 12
00010001 = 17
00010010 = 18
00010100 = 20
00011000 = 24
00100001 = 33
00100010 = 34
00100100 = 36
00101000 = 40
00110000 = 48
01000001 = 65
01000010 = 66
01000100 = 68
01001000 = 72
01010000 = 80
01100000 = 96
10000001 = 129
10000010 = 130
10000100 = 132
10001000 = 136
10010000 = 144
10100000 = 160
11000000 = 192
```

Which is a complete list of all 28 8-bit integers with exactly 2 bits set, in numerical order. And if we didn’t limit the numbers to 8 bits the program would continue generating integers with exactly 2 bits set indefinitely.

I might add this function to the **enigma.py** library (if I can think of a good name for it), but if you are trying to speed up a program it will probably be faster to just use the code fragment inline.
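For what it’s worth, a generator wrapping the fragment might look like this (the name `bit_permutations` is just my placeholder, not an actual **enigma.py** function):

```python
from itertools import takewhile

def bit_permutations(v):
    """Yield successive integers with the same number of set bits as v,
    starting from v, in increasing order."""
    while True:
        yield v
        t = (v | (v - 1)) + 1
        v = t | ((((t & -t) // (v & -v)) >> 1) - 1)

# all 8-bit integers with exactly 2 bits set
values = list(takewhile(lambda v: v < 256, bit_permutations(3)))
print(values)
```

The generator itself is infinite, so the caller supplies the stopping condition (here via `itertools.takewhile()`), which keeps the function usable for any word size.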


29 June 2016

In my previous article (**Solving Alphametics with Python**) I explored some possibilities for improving on the speed of a simple algorithm for solving general Alphametic expressions using Python’s **eval()** function. I implemented a way of solving Alphametics without calling **eval()** for every potential letter-to-digit assignment, and also a way of reducing the number of possibilities we need to consider in the common case of expressions involving equality.

That article received a comment from Arthur Vause, in which he shared his own program that addresses the problem of repeated calls to **eval()** by writing code that generates a program that considers the possibilities, and then getting Python to execute *that*.

I’ve taken this idea a bit further: instead of generating a relatively short program, my code generates a series of nested loops and if statements with all the logic of the valid/invalid assignments built into the structure of the generated program itself. This gives us something that may not look elegant, but executes quickly (and can be handled by **PyPy** without problems).

So, for a problem like **Enigma 1530**:

TOM × 13 = DALEY

my code produces a program like this:

```python
def solve():
    for T in [1, 2, 3, 4, 5, 6, 7, 8, 9]:
        for O in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]:
            if O != T:
                for M in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]:
                    if M != T and M != O:
                        TOM = M + O*10 + T*100
                        x = TOM * 13
                        (x, Y) = divmod(x, 10)
                        if Y != T and Y != O and Y != M:
                            (x, E) = divmod(x, 10)
                            if E != T and E != O and E != M and E != Y:
                                (x, L) = divmod(x, 10)
                                if L != T and L != O and L != M and L != Y and L != E:
                                    (x, A) = divmod(x, 10)
                                    if A != T and A != O and A != M and A != Y and A != E and A != L:
                                        (x, D) = divmod(x, 10)
                                        if D != T and D != O and D != M and D != Y and D != E and D != L and D != A and D != 0 and x == 0:
                                            yield { 'A': A, 'D': D, 'E': E, 'L': L, 'M': M, 'O': O, 'T': T, 'Y': Y }
```

(This is a slightly simplified version, but you get the idea).

Firstly, this code isn’t elegant. But we don’t really care about that in this instance. No-one is going to look at it, it’s just going to be executed. (I know I have asked you to look at it in this post).
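As an illustration of the execute-the-generated-source idea, here is a compressed sketch of my own (the real generated code matches digits with **divmod()**, as above; this version takes a shortcut via string formatting). The solver source is held as a string, brought to life once with **exec()**, and then called:

```python
# Sketch (not the enigma.py generator): generated solver source for
# TOM * 13 = DALEY, defined by exec() and then executed.
src = """
def solve():
    for T in range(1, 10):
        for O in range(10):
            if O == T: continue
            for M in range(10):
                if M in (T, O): continue
                TOM = M + O*10 + T*100
                x = TOM * 13
                s = str(x).zfill(5)
                (D, A, L, E, Y) = (int(c) for c in s)
                # D must be a non-zero digit, and all 8 letters distinct
                if D != 0 and len({T, O, M, D, A, L, E, Y}) == 8:
                    yield dict(T=T, O=O, M=M, D=D, A=A, L=L, E=E, Y=Y)
"""
ns = {}
exec(src, ns)  # define solve() from the generated text
solutions = list(ns["solve"]())
print(solutions)
```

The key point is that the cost of parsing and compiling is paid exactly once, however many candidate assignments the loops go on to consider.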

Here’s an explanation of how the generated program works:

(Lines 2-6)

In this example, the only letters whose values we need to consider are T, O, and M, and instead of using **itertools.permutations()** and rejecting assignments with invalid digits afterwards, the nested loops never generate the invalid possibilities in the first place.

For example, we don’t need to consider the possibility of T being zero (because leading zeros are not allowed), so we don’t put 0 in the possible values of T. Any other invalid digits assignments can be removed in the same way.

(Lines 7-8)

Once we have the values of T, O, M, we construct the value of TOM and calculate the left-hand side of the expression (TOM × 13).

(Lines 9-18)

We then try to match this result to the value we are looking for (DALEY), by repeatedly examining the final digit. For each digit we examine, we either match it against an existing assignment (if it is a letter we have already seen), or we check that it is a valid new assignment (if it is a letter we haven’t already seen).

In this case, all the letters are new, so we just need to check that their values don’t reuse any of the digits we have already assigned. When we get to the leading digit (D) we also need to check that it is non-zero, and that we have used up all the digits of the result.

(Line 19)

If we make it through all that we have found a solution, so we return a dictionary mapping the letters to the digit values.

This approach can easily deal with multiple expressions in a single program — once we have finished with one expression we just generate the code for the next one until all expressions are exhausted. If we reach the end when the code is executed all the expressions are satisfied, and we can return the dictionary for the solution.

I’ve implemented this approach in the **enigma.py** library (from version 2016-06-25), as the function **substituted_expression()** and changed the **SubstitutedExpression()** solver to use this.

Comparative timings are given in the table below, using the examples from the previous article.

The columns are:

1 = Raymond Hettinger’s code, using repeated calls to **eval()** for a single expression; multiple expressions are chained together using “and”. The code is modified so that single-digit 0 values are allowed. Note: this can’t deal with expressions in bases other than 10.

2 = The previous version of the **enigma.py** code, as described in the previous article. It uses a single call to **eval()** for a single expression, but can chain individual expression solutions together to solve problems involving multiple expressions.

3 = The current version of the **enigma.py** code, as described in this article. Multiple expressions can be evaluated in a single call to **eval()**.

4 = The **SubstitutedSum()** solver from the **enigma.py** library. This can only deal with expressions that are simple sums (but it can chain expressions together, operate in different bases, and allow initial assignments and invalid assignments).

| 1 | 2 | 3 | 4 | test |
| --- | --- | --- | --- | --- |
| 94.4s | 4.6s | 0.32s | 0.19s | (LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL) |
| 75.0s | 0.44s | 0.07s | 0.07s | (HMPDM + BHPHM = RCDHA) (RBAD + PQHD = AADD) |
| 92.7s | 4.2s | 0.37s | ---- | (SAND + SUN + SEX + SEA = IBIZA) (SAND > SUN > SEX > SEA) |
| 2.6s | 0.93s | 0.11s | ---- | (PAY + ASP + YES + ERR + RYE + SPA = YAPS) (P < A < Y < E < R < S) |
| 32.0s | 0.22s | 0.04s | ---- | (TOM * 13 = DALEY) |
| 45.7s | 0.26s | 0.04s | ---- | (CUT + UTC + TCU = MEDS) ((RIO + IOR + ORI) * MEDS = OMTTOUI) |
| 47.0s | 1.1s | 0.14s | ---- | ((ENI * GMA) % 1000 = MES) (ENI + GMA = SUM) (I > 2 * U) |
| 109.2s | 8.6s | 0.99s | ---- | (ALPHA + BETA + GAMMA = DELTA) (ALPHABET % 26 = 0) (ALPHABET % 24 = 0) (GAMMA % 3 = 0) |
| 85.3s | 0.19s | 0.04s | ---- | (ENIG * MA == AM * gIne) (E - N - M == M - n - e) (N == n + n) (IN == nI + nI) |
| ---- | 7.6s | 0.55s | 0.31s | (GOLD + DALEY = THOMAS) [base=11] |
| ---- | 55.6s | 0.96s | 0.08s | (FAREWELL + FREDALO = FLINTOFF) [base=11] |
| 107.9s | 2.9s | 0.40s | 0.07s | (ELGAR + ENIGMA = NIMROD) [O=0] |
| 134.6s | 1.3s | 0.24s | 0.09s | (WILKI + NSON = JONNY) [no digit is 9] |
| 87.2s | 0.48s | 0.05s | 0.08s | (1939 + 1079 = 6856) [digits cannot stand for themselves] |
| 78.4s | 0.55s | 0.08s | ---- | (BRAIN + STRAIN + AGAIN = ENIGMA) (is_cube(ATE)) |
| 74.1s | 2.6s | 0.16s | ---- | (ETA + BETA + THETA = DELTA) (is_prime(PHI)) (is_prime(PSI)) |
| 109.6s | 0.72s | 0.05s | ---- | (SEVEN - THREE = FOUR) (is_prime(SEVEN)) (is_prime(FOUR)) (is_prime(RUOF)) (is_square(TEN)) |
| 55.5s | 0.44s | 0.06s | ---- | (M * TIMES = ENIGMA) |
| 92.0s | 12.7s | 0.61s | 0.10s | (SAINT + GEORGE = DRAGON) (E % 2 = 0) |
| 59.0s | 0.45s | 0.06s | ---- | (AB * CDE = FGHIJ) (AB + CD + EF + GH + IJ = CCC) |

As we see, the approach described in this article is generally hundreds of times faster than the repeated **eval()** approach, and in some tests more than 2000× faster. It is also generally tens of times faster than the approach described in the previous article. In fact it solves each of the example problems in less than 1s and can solve all of the given example Alphametics in less than 6s total runtime.

If you want to see the actual code that is generated by the solver for a particular problem you can pass a value of 3 or more to the **verbose** parameter of the solver. e.g.:

```
% python -m enigma Alphametic -v3 "TOM * 13 = DALEY"
...
```

And the code (as well as some other debugging information) will be output before the solutions.

Thanks for reading.

22 June 2016

(See also: **Solving Alphametics with Python, Part 2**).

Many *Enigma* puzzles use **Alphametics** (or **Cryptarithms** — see the *Wikipedia* page on *Verbal Arithmetic* for other terms), where a sum (or other expression) is given with the numbers consistently substituted for letters (or sometimes other symbols).

For example, **Enigma 63**, consists of solving the following Alphametic sum:

LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL.

By the time I came to solving this puzzle I had got a bit bored with writing programs to solve specific Alphametic sum problems, and decided it would be more fun to write a generic solver that could be used to solve any such substituted sum problem. The result was the `SubstitutedSum()` class in the **enigma.py** library, which allows various variations on Alphametic sums to be solved in a fairly efficient way.

(The code itself considers the columns of the sum moving from right to left, and when dealing with each column it looks at the possible substitutions for the symbols remaining unassigned in that column, and when it finds a suitable assignment it moves on to the next column to the left).

Later, I added the ability to invoke it directly from the command line, so for simple problems (like **Enigma 63**) it is not even necessary to write a program:

```
% python -m enigma SubstitutedSum "LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL"
LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL
8308440 + 8333218 + 8302040 + 8333260 = 33276958 / B=3 E=2 G=5 K=7 L=8 M=9 Q=4 R=0 S=1 V=6
```

This runs in 0.14s, which was actually faster than my original programmed solution.

This is all very well (and fast) for Alphametic problems which are simple addition sums, but what about more complicated problems? **Enigma 1530**, for instance, is a multiplication problem, and while it can be solved using **SubstitutedSum()** by turning the multiplication into a repeated addition, I was interested in a solver that will work on arbitrary Python expressions.

There is such a solver (originally by Raymond Hettinger) which appears in several places on the web. The gist of it is to generate all the possible assignments of letters to digits, then substitute them into the expression and call Python’s **eval()** function on it. In pseudo-code:

```
# solve <expr> as an alphametic expression
def solve(expr):
  # extract the letters from expr
  letters = <letters occurring in expr>
  # consider possible assignments for each letter
  for digits in permutations('0123456789', len(letters)):
    # replace letters with corresponding digits
    new_expr = <expr with <letters> replaced by <digits>>
    if eval(new_expr):
      # output the solution
      <output solution>
```

This is obviously much slower than a specialised solver, as it can consider up to 10! = 3,628,800 different assignments of letters to digits, and then it calls **eval()** on each one. Each call to **eval()** involves a parse, compile and execute stage.
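To make the pseudo-code above concrete, here is a runnable rendering of the naive approach (my own sketch; like the published code it skips assignments that would give any word a leading zero, which also excludes 0 as a standalone digit):

```python
from itertools import permutations
import re

def solve(expr):
    # the capital letters to assign, and first letters of words (no leading zeros)
    letters = sorted(set(re.findall(r"[A-Z]", expr)))
    first = set(w[0] for w in re.findall(r"[A-Z]+", expr))
    for digits in permutations("0123456789", len(letters)):
        # skip assignments that would give a word a leading zero
        if any(d == "0" and l in first for (l, d) in zip(letters, digits)):
            continue
        # substitute the digits for the letters and evaluate
        new_expr = expr.translate(str.maketrans(dict(zip(letters, digits))))
        if eval(new_expr):
            yield dict(zip(letters, (int(d) for d in digits)))

print(next(solve("TOM * 13 == DALEY")))
```

Note that the Alphametic has to be written with Python’s `==` operator for **eval()** to test equality, and each candidate assignment pays the full parse/compile/execute cost of **eval()**.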

While this is undoubtedly a very concise piece of code that does indeed allow arbitrary Alphametic expressions to be solved, there are a couple of problems with it in general use.

Firstly (as previously mentioned), it is a bit slow. Generally it takes about a minute to consider all the possible assignments of 10 different digits to 10 letters (depending on the complexity of the expression).

Usually, my first weapon of choice to speed up Python programs is to try running them using **PyPy**. Unfortunately programs that make heavy use of **eval()** don’t benefit from the speed increases of **PyPy**, and my test program ran about 2.5× *faster* under CPython than under **PyPy**.

Secondly it isn’t that easy to extend it to incorporate the additional features that I added in to **SubstitutedSum()** (like solving in a different base, or specifying which assignments are allowed (to allow solutions using leading zeros), etc).

The code as published removes leading zeros from consideration (and the way it is written it will also exclude 0 itself as a standalone digit, which may or may not be what you want). If you try to allow leading zeros you run into problems. A literal of the form 072 is a Syntax Error in Python 3, and in Python 2 it evaluates to 58 as it is assumed to be octal, neither of which is particularly helpful behaviour in this situation.
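This is easy to check directly; in Python 3 a decimal literal with a leading zero is rejected by the parser before the expression is ever evaluated:

```python
# Python 3: a leading zero in a decimal integer literal is a syntax error
try:
    eval("072")
except SyntaxError:
    print("SyntaxError")
```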

Instead my **SubstitutedExpression()** code takes the initial expression, and replaces the *words* in it with placeholder arguments. It then uses **eval()** once to parse and compile a function that takes the numerical values of each word in the original expression. It then generates all the possible letter-to-digit assignments, generates the numbers that correspond to the words in the expression and passes those to the compiled function to see if the original expression is satisfied.

The pseudo-code for this is:

```
# solve <expr> as an alphametic expression
def solve(expr):
  # replace the words in expr with placeholders
  (body, words) = replace_words(expr)
  # make an executable function from <body>
  fn = eval("lambda <args>: <body>")
  # extract the letters in the words
  letters = <letters in <words>>
  # consider possible assignments for each letter
  for digits in permutations(irange(0, 9), len(letters)):
    # map the words to numbers using the digits
    args = list((<word> with <letters> replaced by <digits>) for <word> in <words>)
    # compute the result of the expression
    if fn(args):
      <output solution>
```

This works well, and allows us to use **PyPy** to gain a speed increase, and also to consider numbers in bases other than 10 and to allow leading zeros, as well as supporting the other bells and whistles carried over from the **SubstitutedSum()** solver.

It is however still slower than it needs to be in many cases. For instance, in **Enigma 1530**, we need to solve the (partial) Alphametic multiplication sum: TOM × 13 = DALEY. The whole expression has 8 letters that need to be substituted for digits, potentially requiring us to look at 10×9×8×7×6×5×4×3 = 1,814,400 possible assignments. But if we look at just the left hand side of the sum, there are only 3 letters in: TOM × 13, which only requires looking at 10×9×8 = 720 possible assignments. For each of those assignments we can look at the resulting product and then try and match it against the Alphametic result of: DALEY. Thereby reducing the number of possibilities we need to consider by a factor of 2,520.

The pseudo-code now looks something like this:

```
# solve <expr> = <value> as an alphametic equation
def solve(expr, value):
  # replace the words in expr with placeholders
  (body, words) = replace_words(expr)
  # make an executable function from <body>
  fn = eval("lambda <args>: <body>")
  # extract the letters in the words
  letters = <letters in <words>>
  # consider possible assignments for each letter
  for digits in permutations(irange(0, 9), len(letters)):
    # map the words to numbers using the digits
    args = list((<word> with <letters> replaced by <digits>) for <word> in <words>)
    # compute the result of the expression
    result = fn(args)
    # does the result match <value>
    solution = match(value, result)
    if solution:
      <output solution>
```

(While this approach appears to limit us to only considering Alphametic expressions of the form: <expr> = <value>, where <value> is a single literal (either an Alphametic “word” or an integer literal), this is not really a restriction: since boolean expressions evaluate to 0 or 1, we can treat any expression that doesn’t follow this pattern as: bool(<expr>) = 1, and we are back to an expression of the form: <expr> = <value>).
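As a sketch of what the match step might look like (a hypothetical helper of my own, not the **enigma.py** implementation), matching an integer result against an Alphametic word digit by digit:

```python
def match(word, result, assigned):
    """Try to match integer <result> against alphametic <word>, extending the
    existing letter-to-digit map <assigned>; return the new map, or None."""
    d = dict(assigned)
    used = set(d.values())
    for letter in reversed(word):
        (result, digit) = divmod(result, 10)
        if letter in d:
            if d[letter] != digit:
                return None  # conflicts with an existing assignment
        elif digit in used:
            return None  # digit already taken by another letter
        else:
            d[letter] = digit
            used.add(digit)
    # all digits of the result must be consumed, and no leading zero allowed
    if result != 0 or (len(word) > 1 and d[word[0]] == 0):
        return None
    return d

# e.g. matching the result of TOM * 13 against "DALEY", given T=7, O=9, M=6:
print(match("DALEY", 796 * 13, {"T": 7, "O": 9, "M": 6}))
```

Only assignments for the letters on the left-hand side are enumerated; the letters of the result word are filled in (or checked) as the digits of the computed value are peeled off.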

This is basically the approach I have used in the implementation of the **SubstitutedExpression()** solver in the **enigma.py** library (and I’ve allowed **Alphametic()** to be used as an alias for it to save typing). It can be invoked directly from the command line, so for many problems it is not necessary to write a separate program.

I have also allowed for several Alphametic expressions to be solved simultaneously. The solution (mapping from letters to digit values) that is generated from the first is used as a starting point for the second expression, and so on until all of the expressions are satisfied. The solver includes a heuristic algorithm to decide what order the expressions should be evaluated in, to try and minimise the number of possible assignments considered. (Although you can turn the reordering off if you prefer).

These are the command line arguments that can be used:

```
% python -m enigma Alphametic --help
usage: SubstitutedExpression [<opts>] <expression> [<expression> ...]
options:
  --symbols=<string> (or -s<string>) = symbols to replace with digits
  --base=<n> (or -b<n>) = set base to <n>
  --assign=<letter>,<digit> (or -a<l>,<d>) = assign digit to letter
  --digits=<digit>,... or --digits=<digit>-<digit> (or -d...) = available digits
  --invalid=<digit>,<letters> (or -i<d>,<ls>) = invalid digit to letter assignments
  --first (or -1) = stop after the first solution
  --reorder=<n> (or -r<n>) = allow reordering of expressions (0 = off, 1 = on)
  --verbose[=<n>] (or -v[<n>]) = verbosity (0 = off, 1 = solutions, 2 = more)
  --help (or -h) = show command-line usage
```

Here are some examples of Alphametic *Enigma* problems that can be solved directly using the **SubstitutedExpression()** solver from the command line. (I use the **Alphametic** alias to make the command line more compact):

First we solve a simple Alphametic sum — **Enigma 63**:

LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL.

```
% pypy -m enigma Alphametic "LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL"
(LBRLQQR + LBBBESL + LBRERQR + LBBBEVR = BBEKVMGL)
(8308440 + 8333218 + 8302040 + 8333260 = 33276958) / B=3 E=2 G=5 K=7 L=8 M=9 Q=4 R=0 S=1 V=6
```

This runs in 4.8s, so the **SubstitutedSum()** solver is over 30× faster in this case.

A more complex problem, consisting of two simultaneous Alphametic sums is given in **Enigma 5**:

HMPDM + BHPHM = RCDHA,

RBAD + PQHD = AADD.

```
% pypy -m enigma Alphametic "HMPDM + BHPHM = RCDHA" "RBAD + PQHD = AADD"
(HMPDM + BHPHM = RCDHA) (RBAD + PQHD = AADD)
(24504 + 12524 = 37028) (3180 + 5620 = 8800) / A=8 B=1 C=7 D=0 H=2 M=4 P=5 Q=6 R=3
```

This runs in 0.46s. The same problem can be handled by the **SubstitutedSum()** solver in 0.045s, which is 10× faster.

**Enigma 1407** has an Alphametic sum and an additional condition:

SAND + SUN + SEX + SEA = IBIZA,

SAND > SUN > SEX > SEA.

```
% pypy -m enigma Alphametic "SAND + SUN + SEX + SEA = IBIZA" "SAND > SUN > SEX > SEA"
(SAND + SUN + SEX + SEA = IBIZA) (SAND > SUN > SEX > SEA)
(9304 + 970 + 956 + 953 = 12183) (9304 > 970 > 956 > 953) / A=3 B=2 D=4 E=5 I=1 N=0 S=9 U=7 X=6 Z=8
(9306 + 970 + 954 + 953 = 12183) (9306 > 970 > 954 > 953) / A=3 B=2 D=6 E=5 I=1 N=0 S=9 U=7 X=4 Z=8
```

This takes 4.1s vs. 0.34s for my programmed solution.

**Enigma 1664** is a similar problem:

PAY + ASP + YES + ERR + RYE + SPA = YAPS,

P < A < Y < E < R < S.

```
% pypy -m enigma Alphametic "PAY + ASP + YES + ERR + RYE + SPA = YAPS" "P < A < Y < E < R < S"
(PAY + ASP + YES + ERR + RYE + SPA = YAPS) (P < A < Y < E < R < S)
(123 + 291 + 369 + 688 + 836 + 912 = 3219) (1 < 2 < 3 < 6 < 8 < 9) / A=2 E=6 P=1 R=8 S=9 Y=3
```

This runs in 0.9s vs. 0.03s for my programmed solution.

**Enigma 1530** is a multiplication problem:

TOM × 13 = DALEY.

```
% python -m enigma Alphametic "TOM * 13 = DALEY"
(TOM * 13 = DALEY)
(796 * 13 = 10348) / A=0 D=1 E=4 L=3 M=6 O=9 T=7 Y=8
```

This takes 0.08s vs. 0.04s for a programmed solution using **SubstitutedSum()**, so it is only 2× faster.

**Enigma 323** has two Alphametic sums, although the exact answer is not given for the second one:

CUT + UTC + TCU = MEDS,

RIO + IOR + ORI = <missing>,

MEDS × <missing> = OMTTOUI.

```
% pypy -m enigma Alphametic "CUT + UTC + TCU = MEDS" "(RIO + IOR + ORI) * MEDS = OMTTOUI"
(CUT + UTC + TCU = MEDS) ((RIO + IOR + ORI) * MEDS = OMTTOUI)
(487 + 874 + 748 = 2109) ((563 + 635 + 356) * 2109 = 3277386) / C=4 D=0 E=1 I=6 M=2 O=3 R=5 S=9 T=7 U=8
```

This runs in 0.25s, compared to 0.04s for my programmed solution.

**Enigma 253** has a sum, a product and a condition.

ENI + GMA = SUM,

ENI × GMA = ??MES,

I > 2 × U.

This one is faster if we order the expressions ourselves.

```
% pypy -m enigma Alphametic --reorder=0 "(ENI * GMA) % 1000 = MES" "ENI + GMA = SUM" "I > 2 * U"
(ENI + GMA = SUM) ((ENI * GMA) % 1000 = MES) (I > 2 * U)
(279 + 156 = 435) ((279 * 156) % 1000 = 524) (9 > 2 * 3) / A=6 E=2 G=1 I=9 M=5 N=7 S=4 U=3
```

This runs in 1.2s. If we let the solver choose the ordering of the clauses it takes 2.2s. My programmed solution (which uses **SubstitutedSum()**) takes 0.09s.

**Enigma 1261** has an Alphametic sum with several additional conditions:

ALPHA + BETA + GAMMA = DELTA,

24 and 26 divide ALPHABET,

3 divides GAMMA.

Again, we get a faster solution if we order the clauses ourselves:

```
% pypy -m enigma Alphametic --reorder=0 "ALPHABET % 26 = 0" "ALPHABET % 24 = 0" "ALPHA + BETA + GAMMA = DELTA" "GAMMA % 3 = 0"
(ALPHABET % 26 = 0) (ALPHABET % 24 = 0) (ALPHA + BETA + GAMMA = DELTA) (GAMMA % 3 = 0)
(56375904 % 26 = 0) (56375904 % 24 = 0) (56375 + 9045 + 15225 = 80645) (15225 % 3 = 0) / A=5 B=9 D=8 E=0 G=1 H=7 L=6 M=2 P=3 T=4
```

This takes 8.6s. If we let the solver choose the ordering of the clauses it takes 13s. These compare with 0.06s for my programmed solution.

**Enigma 245** uses some symbols that are not standard capital letters. We use “e”, “n” and “g” to stand for the reversed capitals, and we can specify these on the command line argument:

ENIG × MA = AM × gIne,

E − N − M = M − n − e,

N = n + n,

IN = nI + nI.

```
% pypy -m enigma Alphametic --symbols=ENIGMAeng "ENIG * MA == AM * gIne" "E - N - M == M - n - e" "N == n + n" "IN == nI + nI"
(ENIG * MA == AM * gIne) (E - N - M == M - n - e) (N == n + n) (IN == nI + nI)
(6895 * 12 == 21 * 3940) (6 - 8 - 1 == 1 - 4 - 0) (8 == 4 + 4) (98 == 49 + 49) / A=2 E=6 G=5 I=9 M=1 N=8 e=0 g=3 n=4
```

It runs in 0.2s compared with 0.03s for my programmed solution.

We can solve problems in number bases other than 10, e.g. **Enigma 1581:**

THOMAS − DALEY = GOLD (in base 11).

I’ve rearranged the expression into an addition sum to reduce the number of free symbols on the left hand side of the expression (GOLD + DALEY only has 7 free symbols, THOMAS − DALEY has 10).

```
% pypy -m enigma Alphametic --base=11 "GOLD + DALEY = THOMAS"
(GOLD + DALEY = THOMAS)
(639A + A7985 = 103274) / A=7 D=10 E=8 G=6 H=0 L=9 M=2 O=3 S=4 T=1 Y=5
```

This runs in 9.5s vs. 0.33s using **SubstitutedSum()** (which can also solve problems in different bases).

Note that the letter A appears both as a symbol to be substituted and a digit in base 11 numbers, so there is the possibility of confusion. We could eliminate it completely by using lowercase letters for the symbols:

```
% pypy -m enigma Alphametic --base=11 --symbols=adeghlmosty "gold + daley = thomas"
(gold + daley = thomas)
(639A + A7985 = 103274) / a=7 d=10 e=8 g=6 h=0 l=9 m=2 o=3 s=4 t=1 y=5
```

A similar approach works for **Enigma 1663:**

FAREWELL + FREDALO = FLINTOFF (in base 11).

```
% pypy -m enigma Alphametic --symbols=adefilnortw --base=11 "farewell + fredalo = flintoff"
(farewell + fredalo = flintoff)
(6157A788 + 6573189 = 68042966) / a=1 d=3 e=7 f=6 i=0 l=8 n=4 o=9 r=5 t=2 w=10
(61573788 + 657A189 = 68042966) / a=1 d=10 e=7 f=6 i=0 l=8 n=4 o=9 r=5 t=2 w=3
```

This runs in 55s. The **SubstitutedSum()** solver takes 0.08s.

**Enigma 1361** gives us one of the assignments to start with:

ELGAR + ENIGMA = NIMROD.

We can add the assignment as an additional clause:

```
% pypy -m enigma Alphametic "ELGAR + ENIGMA = NIMROD" "O = 0"
(ELGAR + ENIGMA = NIMROD) (O = 0)
(71439 + 785463 = 856902) (0 = 0) / A=3 D=2 E=7 G=4 I=5 L=1 M=6 N=8 O=0 R=9
```

This runs in 2.6s.

Or we can solve it in a single expression and assign the letter O using a command line option:

```
% pypy -m enigma Alphametic --assign=O,0 "ELGAR + ENIGMA = NIMROD"
(ELGAR + ENIGMA = NIMROD)
(71439 + 785463 = 856902) / A=3 D=2 E=7 G=4 I=5 L=1 M=6 N=8 O=0 R=9
```

Which also runs in 2.6s.

The **--assign** option also works in the **SubstitutedSum()** solver, which solves this problem in 0.07s.

We can specify which digits are available — in **Enigma 1272** the digit 9 is not used:

JONNY = WILKI + NSON.

We also re-order the sum so that the expression is on the left hand side of the equals sign.

```
% pypy -m enigma Alphametic --digits=0,1,2,3,4,5,6,7,8 "WILKI + NSON = JONNY"
(WILKI + NSON = JONNY)
(48608 + 3723 = 52331) / I=8 J=5 K=0 L=6 N=3 O=2 S=7 W=4 Y=1
(48708 + 3623 = 52331) / I=8 J=5 K=0 L=7 N=3 O=2 S=6 W=4 Y=1
```

This runs in 1.3s compared to 0.05s using the **SubstitutedSum()** solver.

**Enigma 171** is a problem that uses numbers for symbols, but the numbers cannot stand for themselves:

1939 + 1079 = 6856.

We can handle this directly in the solver:

```
% pypy -m enigma Alphametic -s01356789 -i0,016 -i1,1 -i3,3 -i5,5 -i6,6 -i7,7 -i8,8 -i9,9 "1939 + 1079 = 6856"
(1939 + 1079 = 6856)
(2767 + 2137 = 4904) / 0=1 1=2 3=6 5=0 6=4 7=3 8=9 9=7
```

This runs in 0.5s vs. 0.08s using the **SubstitutedSum()** solver.

However there is possible ambiguity here. The value on the right hand side of the expression is interpreted either as a “word” (composed of symbols) or an integer literal (composed of numbers). In this case we could interpret the value in either way. The solver will interpret it as a “word” if it is entirely composed of symbols, otherwise it will try and parse it as a number (in the specified base). If both these approaches fail it will raise an error.

It might be better to rename the symbols to avoid any clash. Here is the same problem with the letters *abcdefgh* used instead of the digits 01356789.

```
% pypy -m enigma Alphametic -sabcdefgh -i0,abe -i1,b -i3,c -i5,d -i6,e -i7,f -i8,g -i9,h "bhch + bafh = egde"
(bhch + bafh = egde)
(2767 + 2137 = 4904) / a=1 b=2 c=6 d=0 e=4 f=3 g=9 h=7
```

**Enigma 94** has an Alphametic sum and an additional condition.

BRAIN + STRAIN + AGAIN = ENIGMA,

ATE is a perfect cube.

Any of the functions defined in **enigma.py** is available for use in expressions.

```
% pypy -m enigma Alphametic "BRAIN + STRAIN + AGAIN = ENIGMA" "is_cube(ATE)"
(BRAIN + STRAIN + AGAIN = ENIGMA) (is_cube(ATE))
(98234 + 518234 + 27234 = 643702) (is_cube(216)) / A=2 B=9 E=6 G=7 I=3 M=0 N=4 R=8 S=5 T=1
```

This runs in 0.6s, compared to 0.04s for my programmed solution using **SubstitutedSum()**.

**Enigma 1627** has an Alphametic sum and some additional conditions.

ETA + BETA + THETA = DELTA,

PHI is prime,

PSI is prime.

```
% pypy -m enigma Alphametic "ETA + BETA + THETA = DELTA" "is_prime(PHI)" "is_prime(PSI)"
(ETA + BETA + THETA = DELTA) (is_prime(PHI)) (is_prime(PSI))
(250 + 8250 + 54250 = 62750) (is_prime(149)) (is_prime(139)) / A=0 B=8 D=6 E=2 H=4 I=9 L=7 P=1 S=3 T=5
```

This runs in 2.5s, compared with 0.045s for my programmed solution.

**Enigma 1180** has a subtraction sum, and then several constraints on various numbers formed from substituted words:

SEVEN − THREE = FOUR,

SEVEN is prime,

FOUR is prime,

RUOF is prime,

TEN is square.

```
% pypy -m enigma Alphametic "SEVEN - THREE = FOUR" "is_prime(SEVEN)" "is_prime(FOUR)" "is_prime(RUOF)" "is_square(TEN)"
(SEVEN - THREE = FOUR) (is_prime(SEVEN)) (is_prime(FOUR)) (is_prime(RUOF)) (is_square(TEN))
(62129 - 58722 = 3407) (is_prime(62129)) (is_prime(3407)) (is_prime(7043)) (is_square(529)) / E=2 F=3 H=8 N=9 O=4 R=7 S=6 T=5 U=0 V=1
```

This runs in 0.72s compared with 0.06s for my programmed solution.

**Enigma 1000** is a division sum:

ENIGMA / M = TIMES.

We can specify this as a full expression (using == as the right hand side of the expression is not a “word” or an integer literal):

```
% pypy -m enigma Alphametic --invalid=0,EMT "divmod(ENIGMA, M) == (TIMES, 0)"
(divmod(ENIGMA, M) == (TIMES, 0))
(divmod(180426, 2) == (90213, 0)) / A=6 E=1 G=4 I=0 M=2 N=8 S=3 T=9
```

This runs in 7.8s.

Or we can re-write the expression to make it more friendly for the solver:

```
% pypy -m enigma Alphametic --invalid=0,EMT "M * TIMES = ENIGMA"
(M * TIMES = ENIGMA)
(2 * 90213 = 180426) / A=6 E=1 G=4 I=0 M=2 N=8 S=3 T=9
```

This runs in 0.45s, which compares to 0.042s for my programmed solution.

Here’s the solver let loose on a couple of recent **Sunday Times Teaser** puzzles:

SAINT + GEORGE = DRAGON,

GEORGE is even.

```
% pypy -m enigma Alphametic "SAINT + GEORGE = DRAGON" "E % 2 = 0"
(SAINT + GEORGE = DRAGON) (E % 2 = 0)
(72415 + 860386 = 932801) (6 % 2 = 0) / A=2 D=9 E=6 G=8 I=4 N=1 O=0 R=3 S=7 T=5
```

This runs in 8.8s, compared with 0.1s for a programmed solution.

AB × CDE = FGHIJ,

AB + CD + EF + GH + IJ = CCC.

```
% pypy -m enigma Alphametic "AB * CDE = FGHIJ" "AB + CD + EF + GH + IJ = CCC"
(AB * CDE = FGHIJ) (AB + CD + EF + GH + IJ = CCC)
(52 * 367 = 19084) (52 + 36 + 71 + 90 + 84 = 333) / A=5 B=2 C=3 D=6 E=7 F=1 G=9 H=0 I=8 J=4
```

This runs in 0.5s vs. 0.15s for my programmed solution.

We can see, then, that many *Enigma* puzzles (and other similar puzzles) can be solved directly using the **SubstitutedSum()** and **SubstitutedExpression()** solvers, often with only a few seconds of run time, and while that may be slower than a custom program, the time saved by not having to write one more than makes up for it.

The Alphametic solvers are available as part of the **enigma.py** library for solving puzzles in Python, and will be kept up to date with improvements there. I will try to keep the calling interface to the solvers the same as that described above.

Enjoy.

31 December 2015

There are now 923 *Enigma* puzzles on the site. There is a complete archive of all puzzles published from January 1979 – September 1985, and also from May 2002 – December 2013. That is about 52% of all *Enigma* puzzles published in *New Scientist*, and leaves around 860 puzzles to add to the site.

In 2015 I added 160 *Enigma* puzzles (as well as a handful of puzzles from other sources). Here’s my selection of the ones I found most interesting to solve programmatically this year:

- **Enigma 252: Three-point circle**
- **Enigma 257: Hexa-draughts**
- **Enigma 258: Monkey business**
- **Enigma 266: Twelve trees**
- **Enigma 278: Uncle Pinkle’s new system**
- **Enigma 287: GAGA’s friends**
- **Enigma 288a: Multiplets**
- **Enigma 288b: Christmas viewing**
- **Enigma 289: All for one**
- **Enigma 293: Red is not a colour**
- **Enigma 295: The max-multiple game**
- **Enigma 321: Going to pieces**
- **Enigma 325: Clear thin circuit**

- **Enigma 1190: Triple duel**
- **Enigma 1194: Only two coins**
- **Enigma 1221: Flower beds**
- **Enigma 1225: Rows are columns**
- **Enigma 1226: Megafactors**
- **Enigma 1240: Stack ’em high**
- **Enigma 1241: Jigsaw squares**
- **Enigma 1244: All in one**
- **Enigma 1247: Recurring decimal**
- **Enigma 1248: Rows and columns**
- **Enigma 1249: Root routes**
- **Enigma 1251: Jigsaw of rectangles**
- **Enigma 1252: Cards on the table**
- **Enigma 1253: Votes and taxes**
- **Enigma 1254: Piles of money**
- **Enigma 1260: Latin fives**

During 2015 I switched to posting puzzles twice a week (on Monday and Friday, with the occasional extra posting on Wednesdays if I had something interesting to post), so there are around 8 years worth of puzzles to go.

Thanks to everyone who has contributed to the site in 2015, either by adding their own solutions (programmatic or analytical), insights or questions, or by helping me source puzzles from back-issues of *New Scientist*.

Here is the **2015 Annual Report** for *Enigmatic Code* generated by **WordPress**.

21 October 2015

Following on from my previous articles on programming the Analytical Engine (see Part 1 and Part 2), in this post we will translate Ada Lovelace’s diagram, which is considered to be the first published program, into a directly executable program for the Analytical Engine emulator. Each row in the diagram corresponds directly to a statement in the assembly language program.

As suggested in Part 2, I have written a simple assembler to make it a bit easier to write programs for the Analytical Engine emulator. It allows you to specify the arithmetic operations for the Engine as a single statement (rather than the usual 4 cards), and it keeps track of symbolic labels and automatically computes branch offsets for you.

The assembler is implemented by the `assemble()` function, which I’ve included in the latest version of **analytical_engine.py**.

The `assemble()` function takes a multi-line string representing the program, and returns a list of cards suitable for use as a program in the `AnalyticalEngine` class. It also returns a map from the labels used in the assembler program to the index of the corresponding card in the program.

The factorial program from Part 2 would look like this using the assembler:

```python
from analytical_engine import AnalyticalEngine, Column, assemble

n = 40

(program, _) = assemble("""
:init
  SET 0 <- {n}
  SET 1 <- 1
  SET 2 <- 1
:loop
  # operation 1: v[2] *= v[0]
  MUL 0 2 -> 2
  # operation 2: v[0] -= 1
  SUB 0 1 -> 0
  # branch if non-zero to operation 1
  BRN loop
  # end
  HALT
""".format(n=n))

# initialise the engine
p = AnalyticalEngine(vars=3, number=Column(digits=50), trace=1)
# load the program to compute factorial(n)
p.load_program(program)
# run the program
p.run()
# the result is in v[2]
print("factorial({n}) = {f}".format(n=n, f=p.v[2]))
```

**[Program 6 - factorial3.py]**

The embedded program for the Analytical Engine is highlighted.

Note that I have used Python’s `format()` function to interpolate the (Python) variable `n` into the string before it is passed to the assembler. So the assembler just sees “… `SET 0 <- 40` …”.

By setting the `trace=1` flag on the emulator the actual program cards loaded into the emulator are printed out before execution (as well as the trace of the execution itself).

The format that the assembler accepts should be fairly self-explanatory, from the example above, and from the implementation of Ada’s program given below, but the main points are:

- Comments (as in Perl and Python) are indicated by a `#` (hash) and continue to the end of the line.
- A line that starts with a `:` (colon) declares a label (which can be used as the target of a branch).
- Statements start with an op-code, followed by a number of space separated arguments.
- The multiple cards required for an arithmetical operation (`ADD`, `SUB`, `MUL`, `DIV`) are replaced by a single statement. Each takes two arguments to indicate the source of the operands: either the index of a variable in the store, or the special token `DATA`, which indicates that the value is loaded from the input data stack. Additional arguments are used to indicate indices in the store where the result is to be stored.
- The offset in branch statements can be specified using a symbolic label, by using a symbol that is declared as a label elsewhere in the program. (The label can appear either before or after the branch statement).
- The arrows (`<-` and `->`) used in the `SET` statement to indicate the value being assigned to a variable in the store, and in arithmetic operations to indicate the variables in the store that the result of the operation is assigned to, are entirely optional. (They are removed during the parsing process, but I think they make the program a little more readable).

So we can now write Ada’s program for Bernoulli Numbers like this:

```python
from analytical_engine import AnalyticalEngine, Column, assemble
from enigma import raw_input, printf

# assemble the program
(program, labels) = assemble("""
:init
  SET 0 <- 0
  SET 1 <- 1
  SET 2 <- 2
  SET 3 <- 1
:start
  MUL 2 3 -> 4 5 6
  SUB 4 1 -> 4
  ADD 5 1 -> 5
  DIV 4 5 -> 11
  DIV 11 2 -> 11
  SUB 13 11 -> 13
  SUB 3 1 -> 10
  BRZ finish
  ADD 2 7 -> 7
  DIV 6 7 -> 11
  MUL DATA 11 -> 12
  ADD 12 13 -> 13
  SUB 10 1 -> 10
  BRZ finish
:loop
  SUB 6 1 -> 6
  ADD 1 7 -> 7
  DIV 6 7 -> 8
  MUL 8 11 -> 11
  SUB 6 1 -> 6
  ADD 1 7 -> 7
  DIV 6 7 -> 9
  MUL 9 11 -> 11
  MUL DATA 11 -> 12
  ADD 12 13 -> 13
  SUB 10 1 -> 10
  BRN loop
:finish
  SUB 0 13
  PRINT
  ADD 1 3 -> 3
  SET 7 <- 0
  SET 13 <- 0
  HALT
""")

# initialise the engine
p = AnalyticalEngine(vars=14, number=Column(digits=10, dp=40), trace=1)
# load the program
p.load_program(program)
# indices B[k]
k = 1
# input data, initially empty, but each result is added after computation
data = []
# instruction to start execution at
start = labels['init']
# run the program
while True:
  # load the data and run the program
  p.load_data(data)
  p.run(start)
  # get the computed result from the output transcript
  r = (p.output[-1] if p.output else None)
  # display the computed result
  printf("B[{k}] = {r}")
  # run the program again?
  try:
    raw_input('\n[paused] >>> ')
  except EOFError:
    printf()
    break
  # add the result to the data and run it again
  data.append(r)
  start = labels['start']
  k += 2
```

**[Program 7 - ada6.py]**

Again, the highlighted section is the embedded program for the Analytical Engine, and the statements in the program correspond to the lines in Ada’s diagram, with extra statements for the initialisation, branching and output added. So the first row in Ada’s diagram corresponds to the “`multiply`” statement after the “`:start`” instruction (line 12 of the above Python program).

We can use the output of the assembler to provide the indices of the cards where we wish to start and subsequently re-start execution.

This, then, is a directly executable representation of the first published program.

14 October 2015

Yesterday was Ada Lovelace Day, so following on from my previous post about Ada’s program for computing Bernoulli Numbers, I decided to implement an emulation of the Analytical Engine so that I can directly run the “assembly language” program I produced in that post, rather than translating it into a modern programming language in order to run it.

I haven’t finished reading all the materials I have found about the Analytical Engine, but I think I’ve learned enough to have a crack at doing an emulation sufficient for my needs.

So without further ado, here’s my emulation of the Analytical Engine in Python:

```python
# emulation of the Analytical Engine

class HaltException(Exception): pass

class AnalyticalEngine(object):

  # vars = number of variables
  # number = a function to implement the variables
  def __init__(self, **kw):
    # number of variables in the store
    self.vars = 20
    # a method to implement the variables
    self.number = float
    # set options
    for (k, v) in kw.items():
      if hasattr(self, k):
        setattr(self, k, v)
      else:
        print('AnalyticalEngine: ignoring "{k}={v}"'.format(k=k, v=v))
    self.reset()

  # reset the machine
  def reset(self):
    # representation of zero
    self.zero = self.number(0)
    # the variables in the store
    self.v = [self.zero] * self.vars
    # the registers in the mill
    self.index = 0
    self.input = [None, None]
    self.result = None
    # current operation
    self.op = None
    # the program and program counter
    self.program = None
    self.pc = None
    # input data
    self.data = None
    self.dc = None
    # output transcript
    self.output = None

  # load the program
  def load_program(self, program):
    self.program = program
    self.pc = 0

  # load the data
  def load_data(self, data):
    self.data = data
    self.dc = 0

  # run the program (starting at instruction start)
  def run(self, start=0):
    print(">>> Running Analytical Engine")
    self.output = []
    self.pc = start
    try:
      while True:
        # get the next instruction
        assert not(self.pc < 0), "Invalid PC"
        p = self.program[self.pc]
        # execute it
        getattr(self, p[0])(*p[1:])
        # increment the program counter
        self.pc += 1
    except Exception as e:
      print(">>> {e}".format(e=e))
      print(">>> Execution halted")

  # implement the opcodes

  # SET <i> <x>
  # set variable <i> in the store to value <x>
  def SET(self, i, x):
    self.v[i] = self.number(x)

  # ADD
  # set the engine to perform addition
  def ADD(self):
    self.op = (lambda a, b: a + b)

  # SUB
  # set the engine to perform subtraction
  def SUB(self):
    self.op = (lambda a, b: a - b)

  # MUL
  # set the engine to perform multiplication
  def MUL(self):
    self.op = (lambda a, b: a * b)

  # DIV
  # set the engine to perform division
  def DIV(self):
    self.op = (lambda a, b: a / b)

  # execute an operation
  def _execute(self):
    self.result = self.op(self.input[0], self.input[1])

  # load value v into the input register
  def _load(self, v):
    self.input[self.index] = v
    # loading the second register triggers the execution
    if self.index == 1: self._execute()
    # next time load the other input register
    self.index ^= 1

  # LOAD <i>
  # load the input register from variable <i> in the store
  def LOAD(self, i):
    self._load(self.v[i])

  # LOAD_DATA
  # load the input register from next value in the input data stack
  def LOAD_DATA(self):
    self._load(self.data[self.dc])
    self.dc += 1

  # STORE <i>
  # store the result to variable <i> in the store
  def STORE(self, i):
    self.v[i] = self.result

  # PRINT
  # print the result
  def PRINT(self):
    print(self.result)
    self.output.append(self.result)

  # HALT
  # halt execution of the engine
  def HALT(self):
    raise HaltException("HALT instruction encountered")

  # branch
  def _branch(self, offset):
    self.pc += offset

  # BRZ <offset>
  # branch if zero:
  # if the result is zero move the program instructions by <offset>
  def BRZ(self, offset):
    if self.result == self.zero: self._branch(offset)

  # BRN <offset>
  # branch if non-zero:
  # if the result is non-zero move the program instructions by <offset>
  def BRN(self, offset):
    if self.result != self.zero: self._branch(offset)
```

`[Program - analytical_engine.py]`

Click the link for an expanded version of this implementation that includes code to trace the execution of the Engine. [link]

Before we look at running a program on the Analytical Engine, some notes:

- This is an emulation of the function of the Analytical Engine, not a simulation of its mechanical workings.
- You can specify the number of variables (columns) in the store of the Analytical Engine with the `vars` parameter to the `AnalyticalEngine` class. One of Babbage’s designs was for a store with 1000 columns.
- You can specify an implementation for the columns using the `number` parameter to the `AnalyticalEngine` class. Later on I’ll implement a class that more closely emulates the behaviour of the columns in the design for the Engine, but for now we can use Python’s `float` built-in (to use floating point numbers) or `fractions.Fraction` for exact results.
- The program is presented as a sequence of instructions (which represent the variable cards and operation cards used to program the Analytical Engine). Each instruction consists of a sequence where the first element is the “op code” and the remaining elements are the corresponding parameters. The program is loaded into the Engine using the `load_program()` method, and can then be executed using the `run()` method.
- The “op codes” are the names of methods on the `AnalyticalEngine` class. I use upper case method names to indicate the instructions.
- I’ve implemented a stack of “input data” cards, as I suggested in my comment on the “Running the first program” post [link]. The data is loaded into the Engine using the `load_data()` method, and the op code used in the program cards to load a register in the mill from the input data is `LOAD_DATA` (rather than `LOAD input` as I suggested in the comment).
- I’ve implemented two conditional instructions: “BRZ *offset*” (branch if zero) and “BRN *offset*” (branch if non-zero). In each case the result register of the mill is checked, and if the condition holds the program counter is moved by *offset* instructions. The *offset* may be positive (to implement conditional execution) or negative (to implement looping).
- The Engine will continue to run the program until a (Python) exception occurs, at which point the Engine will stop. There is a `HALT` instruction provided specifically to raise an exception that will stop the Engine.

This is my own interpretation of the Analytical Engine, for a more rigorous interpretation and Java implementation see the pages at fourmilab.ch.

Below is a simple program to compute *factorial(12)*. The Analytical Engine program is wrapped up in a Python program that creates a suitable instantiation of the `AnalyticalEngine` class (with 3 variables in the store, and using Python’s built-in `int` class to implement the columns).

We then load the program for the Analytical Engine, run it and the computed result is in variable 2 from the store.

```python
from analytical_engine import AnalyticalEngine

# initialise the engine
p = AnalyticalEngine(vars=3, number=int)
# load the program to compute factorial(12)
n = 12
p.load_program([
  # initialisation
  ['SET', 0, n],
  ['SET', 1, 1],
  ['SET', 2, 1],
  # operation 1: v[2] *= v[0]
  ['MUL'], ['LOAD', 0], ['LOAD', 2], ['STORE', 2],
  # operation 2: v[0] -= 1
  ['SUB'], ['LOAD', 0], ['LOAD', 1], ['STORE', 0],
  # branch if non-zero to operation 1
  ['BRN', -9],
  # end
  ['HALT'],
])
# run the program
p.run()
# the result is in v[2]
print("factorial({n}) = {f}".format(n=n, f=p.v[2]))
```

**[Program 3 - factorial1.py]**

When run you see output like this:

```
$ python factorial1.py
>>> Running Analytical Engine
>>> HALT instruction encountered
>>> Execution halted
factorial(12) = 479001600
```

And indeed this is the correct value for *factorial(12)*.
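As a cross-check, we can compute the same value with Python’s standard library rather than the emulator:

```python
import math

# cross-check the emulator's result for 12!
print(math.factorial(12))  # 479001600
```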

The difficulty in writing these programs is making sure the branch offsets go to the right place. As statements occupy multiple cards it is easy to branch into the middle of a statement with unexpected results. (What’s really needed is an assembler that can compile a statement like “`mul(0, 2, 2)`” and produce the four instructions that make up the first operation. This would make a program correspond much more closely with the diagrams published to describe the operation of the Analytical Engine. Such an assembler could also allow for symbolic labels to be used in branches so the offsets are computed automatically. I might add this functionality to a future version of **analytical_engine.py**).

Here is the full program to execute Ada Lovelace’s algorithm for computing Bernoulli Numbers. As you can see the actual program for the Analytical Engine is a direct translation from the assembly language program I gave in my comment to the last post (except I’ve actually had to work out the correct offsets for the “branch” instructions).

In this instance we use an AnalyticalEngine with 14 variables, and the numbers are implemented using Python’s `fractions.Fraction` objects, which give exact results for the computation.

The controlling loop collects the results as they are generated and adds them to the input data stack, and then re-runs the program to compute the next Bernoulli Number in the sequence. It pauses between each run of the program to allow the user to examine the output and to halt the program if necessary.

(Note that the enclosing Python program uses a couple of utility routines from the **enigma.py** library, available here [link]).

```python
from analytical_engine import AnalyticalEngine
from fractions import Fraction
from enigma import raw_input, printf

# initialise the engine
p = AnalyticalEngine(vars=14, number=Fraction)
# load the program
p.load_program([
  # initialisation
  ['SET', 0, 0],
  ['SET', 1, 1],
  ['SET', 2, 2],
  ['SET', 3, 1],
  # operation 1
  ['MUL'], ['LOAD', 2], ['LOAD', 3], ['STORE', 4], ['STORE', 5], ['STORE', 6],
  # operation 2
  ['SUB'], ['LOAD', 4], ['LOAD', 1], ['STORE', 4],
  # operation 3
  ['ADD'], ['LOAD', 5], ['LOAD', 1], ['STORE', 5],
  # operation 4
  ['DIV'], ['LOAD', 4], ['LOAD', 5], ['STORE', 11],
  # operation 5
  ['DIV'], ['LOAD', 11], ['LOAD', 2], ['STORE', 11],
  # operation 6
  ['SUB'], ['LOAD', 13], ['LOAD', 11], ['STORE', 13],
  # operation 7
  ['SUB'], ['LOAD', 3], ['LOAD', 1], ['STORE', 10],
  # branch if zero to operation 24
  ['BRZ', +66],
  # operation 8
  ['ADD'], ['LOAD', 2], ['LOAD', 7], ['STORE', 7],
  # operation 9
  ['DIV'], ['LOAD', 6], ['LOAD', 7], ['STORE', 11],
  # operation 10
  ['MUL'], ['LOAD_DATA'], ['LOAD', 11], ['STORE', 12],
  # operation 11
  ['ADD'], ['LOAD', 12], ['LOAD', 13], ['STORE', 13],
  # operation 12
  ['SUB'], ['LOAD', 10], ['LOAD', 1], ['STORE', 10],
  # branch if zero to operation 24
  ['BRZ', +45],
  # operation 13
  ['SUB'], ['LOAD', 6], ['LOAD', 1], ['STORE', 6],
  # operation 14
  ['ADD'], ['LOAD', 1], ['LOAD', 7], ['STORE', 7],
  # operation 15
  ['DIV'], ['LOAD', 6], ['LOAD', 7], ['STORE', 8],
  # operation 16
  ['MUL'], ['LOAD', 8], ['LOAD', 11], ['STORE', 11],
  # operation 17
  ['SUB'], ['LOAD', 6], ['LOAD', 1], ['STORE', 6],
  # operation 18
  ['ADD'], ['LOAD', 1], ['LOAD', 7], ['STORE', 7],
  # operation 19
  ['DIV'], ['LOAD', 6], ['LOAD', 7], ['STORE', 9],
  # operation 20
  ['MUL'], ['LOAD', 9], ['LOAD', 11], ['STORE', 11],
  # operation 21
  ['MUL'], ['LOAD_DATA'], ['LOAD', 11], ['STORE', 12],
  # operation 22
  ['ADD'], ['LOAD', 12], ['LOAD', 13], ['STORE', 13],
  # operation 23
  ['SUB'], ['LOAD', 10], ['LOAD', 1], ['STORE', 10],
  # branch if non-zero to operation 13
  ['BRN', -45],
  # operation 24
  ['SUB'], ['LOAD', 0], ['LOAD', 13],
  # print
  ['PRINT'],
  # operation 25
  ['ADD'], ['LOAD', 1], ['LOAD', 3], ['STORE', 3],
  # reset working variables
  ['SET', 7, 0],
  ['SET', 13, 0],
  # end
  ['HALT'],
])

# indices B[k]
k = 1
# input data, initially empty, but each result is added after computation
data = []
# instruction to start execution at, initially 0, but subsequently 4
start = 0
# run the program
while True:
  # load the data and run the program
  p.load_data(data)
  p.run(start)
  # get the computed result from the output transcript
  r = (p.output[-1] if p.output else None)
  # display the computed result
  printf("B[{k}] = {r}")
  # run the program again?
  try:
    raw_input('\n[paused] >>> ')
  except EOFError:
    printf()
    break
  # add the result to the data and run it again (from instruction 4)
  data.append(r)
  start = 4
  k += 2
```

**[Program 4 - ada5.py]**

By running this we can check that the Engine does indeed produce the list of Bernoulli Numbers:

B[1] = 1/6, B[3] = −1/30, B[5] = 1/42, B[7] = −1/30, B[9] = 5/66, B[11] = −691/2730, …

The computation that Ada outlines in *Note G* is the calculation of B[7] = −1/30. This involves 10 ADD operations, 11 SUB operations, 8 MUL operations and 7 DIV operations. The Analytical Engine was estimated to be able to perform an addition or subtraction in about 1 second, but multiplication and division could take up to a minute (the actual time would vary depending on the operands), and branching would also take some time to step through the required number of cards. So altogether we might expect the Engine to take around 15 to 20 minutes to compute B[7]. And as we compute further numbers in the sequence the number of operations and the corresponding time would increase.
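As a back-of-the-envelope check of that estimate, using the operation counts and the assumed speeds just quoted (about 1 second per addition or subtraction, up to about 60 seconds per multiplication or division):

```python
# rough timing estimate for computing B[7] on the Analytical Engine
add_sub = (10 + 11) * 1    # 10 ADDs + 11 SUBs at ~1 second each
mul_div = (8 + 7) * 60     # 8 MULs + 7 DIVs at up to ~60 seconds each
total = add_sub + mul_div  # seconds, ignoring the card-stepping time
print(total / 60.0)        # ~15.35 minutes
```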

So far we have been using Python’s built-in number classes to implement the columns, which can provide arbitrary precision. But the actual columns for the Analytical Engine would have been able to store 50 decimal digits, and a sign (to indicate if the number was positive or negative).

Here’s an implementation of a class suitable for use as the `number` parameter when constructing an `AnalyticalEngine` instance, that more closely replicates the columns of the Analytical Engine.

```python
# implementation of the columns in the Analytical Engine

class OverflowException(Exception): pass

# a column with <digits> whole decimal digits and <dp> fractional decimal digits
def Column(digits=50, dp=0):

  shift = (10 ** dp)
  overflow = (10 ** (digits + dp)) - 1
  fmt = '<{s}{m:0' + str(digits) + 'd}' + ('.{d:0' + str(dp) + 'd}' if dp > 0 else '') + '>'

  class Column(object):

    # create a value, and check for overflow
    def __init__(self, n=0, shift=shift):
      if shift: n *= shift
      if abs(n) > overflow: raise OverflowException("Overflow in column")
      self.n = n

    # output format
    def __repr__(self):
      n = self.n
      (m, d) = divmod(abs(n), shift)
      s = ('-' if n < 0 else '+')
      return fmt.format(s=s, m=m, d=d)

    # arithmetic operations
    def __add__(self, value): return Column(self.n + value.n, shift=0)
    def __sub__(self, value): return Column(self.n - value.n, shift=0)
    def __mul__(self, value): return Column((self.n * value.n) // shift, shift=0)
    def __div__(self, value): return Column((self.n * shift) // value.n, shift=0)
    # Python 3 uses __truediv__
    __truediv__ = __div__

    # branch tests
    def __eq__(self, value): return self.n == value.n
    def __ne__(self, value): return self.n != value.n

  return Column
```

Notes:

- We can specify the number of whole decimal digits for the column with the `digits` parameter. The columns can also be used to implement fixed point decimal numbers by specifying how many additional *fractional* decimal places are required with the `dp` parameter.
- I’m not sure what happened when an overflow occurred in the Analytical Engine, so I throw an exception.
- The columns support the four arithmetical operations that the Analytical Engine supports (addition, subtraction, multiplication and division), and also the two conditionals for the branches (is zero, is non-zero).
- The columns display as a fixed width string “<{sign}{digits}.{digits}>”.

The **Column** class is included in **analytical_engine.py**.

So we can use the following program to compute *factorial(40)* using columns containing 50 decimal digits:

```python
from analytical_engine import AnalyticalEngine, Column

# initialise the engine
p = AnalyticalEngine(vars=3, number=Column(digits=50))
# load the program to compute factorial(40)
n = 40
p.load_program([
  # initialisation
  ['SET', 0, n],
  ['SET', 1, 1],
  ['SET', 2, 1],
  # operation 1: v[2] *= v[0]
  ['MUL'], ['LOAD', 0], ['LOAD', 2], ['STORE', 2],
  # operation 2: v[0] -= 1
  ['SUB'], ['LOAD', 0], ['LOAD', 1], ['STORE', 0],
  # branch if non-zero to operation 1
  ['BRN', -9],
  # end
  ['HALT'],
])
# run the program
p.run()
# the result is in v[2]
print("factorial({n}) = {f}".format(n=n, f=p.v[2]))
```

**[Program 5 - factorial2.py]**

```
$ python factorial2.py
>>> Running Analytical Engine
>>> HALT instruction encountered
>>> Execution halted
factorial(40) = <+00815915283247897734345611269596115894272000000000>
```

And similarly by using `number=Column(10, 40)` in the implementation of Ada’s program we can compute Bernoulli numbers to 40 decimal places.

In divisions we will potentially lose precision in the final decimal places of the result, and as later numbers in the sequence are calculated from earlier numbers we will see the precision deteriorate as we go on.
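The precision loss in fixed point division is easy to demonstrate in miniature (an illustrative sketch at 5 fractional digits, not the 40 used above):

```python
# fixed point numbers with 5 fractional digits are stored scaled by 10**5,
# so 1/3 is stored as 33333 (i.e. 0.33333), and the true value is truncated
shift = 10 ** 5
third = (1 * shift) // 3  # fixed point representation of 1/3
print(third)              # 33333
print(third * 3)          # 99999, i.e. 0.99999 rather than 1.00000
```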

Here are the results of running the program:

```
B[1]  = <+0000000000.1666666666666666666666666666666666666666>
B[3]  = <-0000000000.0333333333333333333333333333333333333332>
B[5]  = <+0000000000.0238095238095238095238095238095238095233>
B[7]  = <-0000000000.0333333333333333333333333333333333333302>
B[9]  = <+0000000000.0757575757575757575757575757575757575464>
B[11] = <-0000000000.2531135531135531135531135531135531131568>
B[13] = <+0000000001.1666666666666666666666666666666666593360>
B[15] = <-0000000007.0921568627450980392156862745098037431432>
B[17] = <+0000000054.9711779448621553884711779448621498550887>
B[19] = <-0000000529.1242424242424242424242424242422111802933>
B[21] = <+0000006192.1231884057971014492753623188306059856832>
B[23] = <-0000086580.2531135531135531135531135525557263098091>
B[25] = <+0001425517.1666666666666666666666666299288036638424>
...
```

By the time we reach B[25] the last 15 digits of our 40 digits of decimal precision have been lost (B[25] = 8553103/6 = 1425517 + 1/6), but the computation is still correct to 25 decimal places.
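We can confirm the exact value quoted for B[25] with a quick cross-check using exact fractions:

```python
from fractions import Fraction

# Ada's B[25] (= B_26 in the standard numbering) is exactly 8553103/6
b25 = Fraction(8553103, 6)
print(b25 == 1425517 + Fraction(1, 6))  # True
```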

**Note:** The Python programs presented should run under Python 2 or Python 3, but as I have used the new-style `print()` function, if you are running them under Python 2 you will need to put the following line at the start of the programs:

from __future__ import print_function

24 September 2015

[**Note:** WordPress sometimes has problems including program code in posts, so I’ve made the code used in these articles available on GitHub – [link]]

I recently watched the BBC4 documentary *Calculating Ada: The Countess of Computing* (Hannah Fry’s 10-part radio series *Computing Britain* is also an interesting listen)[¹]. And although I know that Ada Lovelace is credited with publishing the first computer program, I hadn’t ever seen the form that this program took before. It’s not hard to track down – on Ada’s *Wikipedia* page there is an image of the diagram ([image] – “Diagram for the computation by the Engine of the Numbers of Bernoulli”).

The diagram itself was part of a note (*Note G*) attached to a translation of a French transcript (by an Italian military engineer named Luigi Federico Menabrea, who would later become the Italian Prime Minister) of a seminar given by Charles Babbage at the University of Turin in 1840. It was the first and only time Babbage gave a public lecture on the *Analytical Engine*. Ada was commissioned to translate the paper into English. After nine months of work she published her translation, along with a set of notes (which were more than twice as long as the original paper). This was the first (and, for a hundred years, the only) text on computer programming. It is in one of these notes that the diagram that is considered to be the world’s first published computer program appeared. (The full text of Ada’s paper is available here [link], along with a large version of the diagram [link]).

Here it is:

It’s not what we think of now as a computer program – after all it pre-dates computer programming languages. Instead it is essentially a description of a sequence of operations performed by the *Analytical Engine*. So it is more like a trace of the program execution than the source code of the program itself, and it only includes the actual arithmetic operations that the engine performs. The control flow of the execution is dealt with in the attached text.

Of course the Analytical Engine was never completed, so the actual form of the instructions is not completely clear. However, I’m not trying to construct a copy of the punched cards necessary to run Ada’s program. I’m attempting to transliterate her algorithm into a modern programming language in order to understand it, and the Analytical Engine, better.

Like Ada I have found that my own notes have become somewhat longer than I anticipated.

First I’ll explain what the program does. The purpose of the program is to compute *Bernoulli Numbers*. Ada’s program is concerned with calculating the odd-numbered Bernoulli Numbers[²], and is based on the following equation (given in Note G) for calculating B[2*n* − 1], given the already computed values of B[1], B[3], …, B[2*n* − 3]:

A[0] + A[1]B[1] + A[3]B[3] + … + B[2n − 1] = 0

where the values of the A[] coefficients are defined as follows:

A[0] = (−1/2) (2n − 1)/(2n + 1)

A[1] = 2n/2 = n

A[3] = 2n(2n − 1)(2n − 2) / (2.3.4)

A[5] = 2n(2n − 1)(2n − 2)(2n − 3)(2n − 4) / (2.3.4.5.6)

…

For *k* > 1, each A[*k*] is derived from A[*k* − 2] by multiplying the next two decreasing terms into the numerator, and the next two increasing terms into the denominator. (Note that for each value of *n* the sequence of A[] coefficients are different. We might more properly think of them as A[*n*, *k*]).
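To make the recurrence concrete, here is a small sketch (my own illustrative helper, not part of Ada’s program) that generates the A[*n*, *k*] coefficients using exact fractions:

```python
from fractions import Fraction

# generate (k, A[n, k]) for k = 1, 3, 5, ..., kmax, using the recurrence:
# A[1] = 2n/2 = n, and each subsequent A[k] multiplies in the next two
# decreasing terms (numerator) and the next two increasing terms (denominator)
def A_coefficients(n, kmax):
  A = Fraction(n)  # A[1] = n
  k = 1
  while k <= kmax:
    yield (k, A)
    A *= Fraction(2 * n - k, 2 + k)          # next decreasing term / next increasing term
    A *= Fraction(2 * n - k - 1, 3 + k)      # and the pair after that
    k += 2

# e.g. for n = 2: A[1] = 2 and A[3] = 2n(2n-1)(2n-2)/(2.3.4) = 1
print(dict(A_coefficients(2, 3)))
```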

Ada’s program works by starting at *n*=1, where the equation for deriving B[1] is:

A[0] + B[1] = 0

and:

A[0] = (−1/2) (2n − 1)/(2n + 1) = −1/6

So:

B[1] = 1/6

We then consider *n*=2, using the previously computed value for B[1] we have:

A[0] + A[1]B[1] + B[3] = 0

and:

A[0] = (−1/2) (2n − 1)/(2n + 1) = −3/10

A[1] = 2

So:

B[3] = −1/30

We then consider *n*=3, and use the previously computed value for B[1] and B[3] to calculate B[5]. And the procedure can be repeated as many times as we like (or until we run out of space to store the results – the proposed design for the Analytical Engine would have had room for 1,000 variables, each containing 50 decimal digits). Each time we use the results from all the previous runs to calculate the next number in the sequence.
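The two worked examples above can be checked with exact arithmetic (a quick sketch using Python’s `Fraction`):

```python
from fractions import Fraction

# n = 1:  A[0] + B[1] = 0, where A[0] = (-1/2)(2n - 1)/(2n + 1)
A0 = Fraction(-1, 2) * Fraction(2 * 1 - 1, 2 * 1 + 1)
B1 = -A0

# n = 2:  A[0] + A[1]B[1] + B[3] = 0, where A[1] = n = 2
A0 = Fraction(-1, 2) * Fraction(2 * 2 - 1, 2 * 2 + 1)  # = -3/10
B3 = -(A0 + 2 * B1)

print(B1, B3)  # 1/6 -1/30
```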

To explain the implementation of the algorithm here is my own implementation of it using modern *Python* idioms. This program is constructed using the same basic structure as Ada’s program.

```python
from fractions import Fraction

# my implementation of the algorithm
def bernoulli():

  # results
  Bs = list()

  # start at n=1
  n = 1

  # calculate the sequence
  while True:

    # result = A0
    r = -Fraction(2 * n - 1, 2 * n + 1) / 2

    # A1 = n
    A = n

    # for each B[k] already determined calculate the corresponding A[k]
    for (i, B) in enumerate(Bs):
      if i > 0:
        # multiply in the 2 additional terms
        j = 2 * i - 1
        A *= Fraction(2 * n - j, 2 + j)
        j += 1
        A *= Fraction(2 * n - j, 2 + j)
      # add A[k] * B[k] into the result
      r += A * B

    # the computed bernoulli number is -r
    B = -r
    # return the number
    yield B
    # add it to the result list
    Bs.append(B)
    # increase n
    n += 1

# run N iterations of the algorithm
N = 10

# allow N to be specified as a command line argument
import sys
if len(sys.argv) > 1: N = int(sys.argv[1])

k = 1  # ada's numbering (normal numbering is k + 1)
K = 2 * N + 1  # max k
for r in bernoulli():
  print("B[{k}] = {r}".format(k=k, r=r))
  k += 2
  if k == K: break
```

**[Program 1 - ada1.py]**

Before we examine a more direct implementation of Ada’s diagram, first some notes on my own implementation:

- (Line 1) I’m using Python’s `Fraction` library to generate the Bernoulli sequence. This makes it easy to check the computation against existing lists (which give them in rational form), and provides exact answers. The Analytical Engine would have computed the numbers as fixed point decimal fractions. (So small errors would accumulate in the final decimal places, and as these results are fed back into further computations the errors would compound. It would be best to use more decimal places than required in the results so that suitably accurate truncated results can be generated).
- (Line 4) The `bernoulli()` function will generate successive Bernoulli numbers. I’ve implemented it as a Python co-routine, so results are returned with the `yield` operator as they are determined. It’s up to the calling code to consume as many results as it requires. The diagram given by Ada describes the generation of a single Bernoulli number. And while her program itself sets up some of the data required for the next run, it would also require some additional modification of the cards before it is run again to compute the next number in the sequence.
- (Line 18) I’ve simplified the calculation of A1 from (2*n*/2) to just (*n*).
- (Line 41) We can specify how many numbers in the sequence we want the program to compute (the default is 10). They are printed out according to Ada’s numbering scheme.
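Since `bernoulli()` is a co-routine, the caller decides how many terms to draw from it. A minimal sketch of that usage, consuming a compressed version of Program 1 with `itertools.islice`:

```python
from fractions import Fraction
from itertools import islice

def bernoulli():
  # compressed version of Program 1 (Ada's odd-index numbering)
  Bs = []
  n = 1
  while True:
    r = -Fraction(2 * n - 1, 2 * n + 1) / 2
    A = Fraction(n)
    for (i, B) in enumerate(Bs):
      if i > 0:
        # multiply in the 2 additional terms
        j = 2 * i - 1
        A *= Fraction(2 * n - j, 2 + j)
        j += 1
        A *= Fraction(2 * n - j, 2 + j)
      r += A * B
    B = -r
    yield B
    Bs.append(B)
    n += 1

# draw just the first four numbers: B[1], B[3], B[5], B[7]
print(list(islice(bernoulli(), 4)))
# -> [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30)]
```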

The program is certainly non-trivial. It involves two loops (one nested in the other) and conditional execution, as well as the storing of the computed results to use in subsequent calculations.

The diagram published by Ada shows the computation of B[7], the fourth number in the sequence. The table consists of the following columns:

1. Line number.

2. Operation.

3. Operands.

4. Assignment of results.

5-6. Comments.

7-9. Parameters.

10-19. Variables.

20-23. Results.

Only columns 2, 3 and 4 are instructions to the computer; the rest are essentially supporting documentation to explain the program to the reader.

There are a few apparent errors (the first programming bugs) in the originally published diagram:

- At line 4 the operation is given as v5 ÷ v4, although the comments and text make it clear that this should be v4 ÷ v5.
- At line 23 the formula for A3 has two 3’s in the denominator, instead of a 3 and a 4 (as in the previous line). This is just an error in the comment and would not affect execution of the program.
- At line 25 it looks like v6 and v7 are reset to their initial values (of zero). In fact it’s v7 and v13 that need to be reset for the algorithm to work correctly. In the accompanying text Ada refers to resetting v6, v7, *and* v13 to zero (although it is not necessary to zero v6, as the first thing the program does (operation 1) is to assign 2*n* to v4, v5, and v6).

These are all “typos” rather than “thinkos”, and may not even have been present in Ada’s original notes.

Here is my Python program with the `bernoulli()` function replaced with a modified version that reflects a more direct transliteration of Ada’s description:

```python
from collections import defaultdict
from fractions import Fraction

# ada lovelace's algorithm in Python
def bernoulli():

  # initially all variables in the store are at 0
  v = defaultdict(int)

  # program start

  # initialise the variables
  v[1] = Fraction(1)  # constant
  v[2] = Fraction(2)  # constant
  v[3] = Fraction(1)  # n = 1

  # outer loop (compute B[2n - 1])
  while True:

    # pseudo-block to permit "break"
    while True:

      # 0: set index register
      i = 1

      # 1: v4 = v5 = v6 = 2n
      v[4] = v[5] = v[6] = v[2] * v[3]

      # 2: v4 = 2n - 1
      v[4] = v[4] - v[1]

      # 3: v5 = 2n + 1
      v[5] = v[5] + v[1]

      # 4: v11 = (2n - 1)/(2n + 1) (the diagram seems to say v[5] / v[4]) [FIX]
      v[11] = v[4] / v[5]

      # 5: v11 = (1/2) (2n - 1)/(2n + 1)
      v[11] = v[11] / v[2]

      # 6: v13 = -(1/2) (2n - 1)/(2n + 1) = A0
      v[13] = v[13] - v[11]

      # 7: v10 = n - 1
      v[10] = v[3] - v[1]
      # branch if zero to operation 24
      if v[10] == 0: break

      # 8: v7 = 2
      v[7] = v[2] + v[7]

      # 9: v11 = (2n)/2 = A1 [why not just set v[11] = v[3] instead of 8 & 9 ?]
      v[11] = v[6] / v[7]

      # 10: v12 = A1 * B1
      v[12] = v[20 + i] * v[11]
      i += 1

      # 11: v13 = A0 + A1 * B1
      v[13] = v[12] + v[13]

      # 12: v10 = n - 2
      v[10] = v[10] - v[1]
      # branch if zero to operation 24
      if v[10] == 0: break

      # for each computed result, B = B3 [1], B5 [2], ...
      while True:

        # 13: v6 = 2n - 1 [1], 2n - 3 [2], ...
        v[6] = v[6] - v[1]

        # 14: v7 = 3 [1], 5 [2], ...
        v[7] = v[1] + v[7]

        # 15: v8 = (2n - 1)/3 [1], (2n - 3)/5 [2], ...
        v[8] = v[6] / v[7]

        # 16: v11 = (2n)/2 * (2n - 1)/3 [1], (2n)/2 * (2n - 1)/3 * (2n - 3)/5 [2], ...
        v[11] = v[8] * v[11]

        # 17: v6 = 2n - 2 [1], 2n - 4 [2], ...
        v[6] = v[6] - v[1]

        # 18: v7 = 4 [1], 6 [2], ...
        v[7] = v[1] + v[7]

        # 19: v9 = (2n - 2)/4 [1], (2n - 4)/6 [2], ...
        v[9] = v[6] / v[7]

        # 20: v11 = (2n)/2 * (2n - 1)/3 * (2n - 2)/4 = A3 [1],
        #     (2n)/2 * (2n - 1)/3 * (2n - 2)/4 * (2n - 3)/5 * (2n - 4)/6 = A5 [2], ...
        v[11] = v[9] * v[11]

        # 21: v12 = A3 * B3 [1], A5 * B5 [2], ...
        v[12] = v[20 + i] * v[11]
        i += 1

        # 22: v13 = A0 + A1 * B1 + A3 * B3 [1], A0 + A1 * B1 + A3 * B3 + A5 * B5 [2], ...
        v[13] = v[12] + v[13]

        # 23: v10 = n - 3 [1], n - 4 [2], ...
        v[10] = v[10] - v[1]
        # branch if non-zero to operation 13
        if v[10] == 0: break

      # terminate the pseudo-block
      break

    # 24: result (-v13) is copied into the results
    r = v[20 + i] = v[20 + i] - v[13]
    # the result could be printed here
    # we pass it back to the calling routine
    yield r

    # 25: increase n, and reset working variables
    v[3] = v[1] + v[3]
    v[7] = v[13] = 0
    # branch to operation 0

# run N iterations of the algorithm
N = 10
# allow N to be specified as a command line argument
import sys
if len(sys.argv) > 1:
  N = int(sys.argv[1])

k = 1  # ada's numbering (normal numbering is k + 1)
K = 2 * N + 1  # max k
for r in bernoulli():
  print("B[{k}] = {r}".format(k=k, r=r))
  k += 2
  if k == K: break
```

**[Program 2 - ada2.py]**

Ada’s diagram shows how the program would run when *n*=4, and is not a direct indication of the punched cards that would be fed to the Analytical Engine. Of course, as the Analytical Engine was never completed, its actual instruction set is not something we can know for certain. But the way it is described as working is as follows:

- An operation card indicates the operation that the engine is to perform. (One of +, −, ×, ÷).
- A variable card indicates which variable is transferred from the store (memory) to the first Ingress Axis of the mill (CPU). This provides the first operand.
- A second variable card indicates which variable is transferred from the store to the second Ingress Axis of the mill. This provides the second operand.
- The operation is then performed, the result appears in the Egress Axis of the mill.
- A variable card indicates which variable in the store the contents of the Egress Axis are transferred to.

At stage 5 there may be multiple variable cards instructing that the result is stored in several variables, as we see in the first operation of Ada’s table. If we were to write an assembly language version of this operation it would look something like this:

```
# operation 1
MUL
LOAD v[2]
LOAD v[3]
STORE v[4]
STORE v[5]
STORE v[6]
```

And each of these opcodes corresponds to a card that acts as instructions to the Analytical Engine.

Ada’s accompanying text describes the need for some common programming idioms that we take for granted in modern programming languages, but are not present in the diagram (which just shows the arithmetic operations performed).

First there is the need for conditional execution. She mentions that when computing B[1] operation 6 would complete the calculation; in my program, at line 47, I exit the block when v10 becomes 0. As Ada says, operation 12 works in a similar way, and again in my program I conditionally exit the block when v10 becomes 0.

Secondly there is the need for repeated execution of a sequence of instructions (looping). This is dealt with extensively in the notes where the repeated execution of operations 13-23 is described.

Both of these could be achieved with a conditional branch instruction. Although the mechanism for branching is not explicitly mentioned in the paper, it is clear that the Analytical Engine implements such a function. For example, if the Analytical Engine set a flag when a result was 0 (as microprocessors often do), then there could be an instruction to skip forward or backwards a certain number of program steps and resume operation from there. The instructions were to be provided on punched cards, so this would require the card reader to skip forward or rewind a certain number of cards.
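To make the idea concrete, here is a toy sketch (entirely hypothetical – the Engine's real instruction encoding is unknown) of a card reader that can skip forward or rewind the card stack on a condition:

```python
# a toy "card reader" loop with conditional branching
def run(cards, v):
  pc = 0                       # index of the card currently under the reader
  while pc < len(cards):
    op, *args = cards[pc]
    if op == "SUB":
      a, b, dest = args
      v[dest] = v[a] - v[b]
    elif op == "BRNZ":         # branch if the named variable is non-zero
      var, target = args
      if v[var] != 0:
        pc = target            # rewind (or skip forward) the card stack
        continue
    elif op == "HALT":
      break
    pc += 1
  return v

# count v[3] down to zero: "subtract 1, branch back while non-zero"
v = run([("SUB", 3, 1, 3),     # card 0: v3 = v3 - v1
         ("BRNZ", 3, 0),       # card 1: if v3 != 0, rewind to card 0
         ("HALT",)],           # card 2: stop
        {1: 1, 3: 5})
print(v[3])  # -> 0
```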

(In the Fourmilab simulation of the Analytical Engine a “run-up lever” is used. This is set by an operation if the sign of the result of the operation differs from the sign of the first operand (or if an overflow occurs). A conditional branch can then be made based on the state of the lever. So, we could test if a variable was zero by subtracting 1 from it: the only way the sign will change is if the variable initially held the value 0 (so the result is −1), in which case the lever will be set. As Ada doesn’t show these operations in her table I’m going to simplify by referring to a “branch if zero” instruction. Branching on other conditions can be implemented in a similar way).
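The run-up lever zero test can be sketched like this (my own illustration of the mechanism just described, not Fourmilab's code):

```python
# sketch of the "run-up lever" zero test: subtract 1 and see whether the
# sign of the result differs from the sign of the first operand
def sub_with_lever(a, b):
  result = a - b
  lever = (result < 0) != (a < 0)  # lever set on a sign change
  return result, lever

r1, lever1 = sub_with_lever(0, 1)  # variable held 0: result -1, lever set
r2, lever2 = sub_with_lever(5, 1)  # variable non-zero: lever stays clear
```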

This would allow us to place an instruction after operation 7 and operation 12 of “branch if zero to operation 24”. The inner loop would be implemented by placing an instruction after operation 23 of “branch if non-zero to operation 13”. The outer loop would be implemented by placing an instruction after operation 25 of “branch always to operation 1” (or more likely to branch if *n* was less than a predefined constant value, so that a Bernoulli sequence of the specified length is generated).

The text also hints at the notion of *indirection*, where one variable (or register) can tell us which of a number of other variables we need to access. This is necessary because each time we iterate through operations 13-23 we have computed the A[] coefficient after executing operation 20, but then in operation 21 we multiply it by the corresponding number already computed in the B[] sequence. The first time round we need B[3], stored in variable v22 (as specified in the operands for operation 21 in the diagram), but the next time round the loop we need the B[5] value stored in v23, and so forth. Likewise when the final result (B[7]) is calculated it is stored by operation 24 into v24 and *n* is increased; the next time we reach operation 24 the final result will be B[9] and it will be stored into v25. Although the diagram itself does not indicate how the instructions of operations 21 and 24 are modified as the program is executed, it is clear that this is an issue Ada has considered.

From the way Ada describes the program, it seems the way this would happen is that the program is run once for each value of *n*, and then set up slightly differently for the next run. Ada notes that exactly the same sequence of operation cards can be reused, but the variable cards must be modified slightly on each iteration to achieve the required results.

One way the Analytical Engine could do indirection would be to allow the index of a variable to be specified in another variable, so instead of a “LOAD v[12]” (load the Ingress Axis from variable 12 in the store), we would have something like “LOADI v[12]” (load the Ingress Axis from the variable in the store whose index is stored in v12). So if v[12] = 21, “LOADI v[12]” would be the same as “LOAD v[21]”.

But variables in the store hold 50 decimal digits, while indices into the store are only 3 decimal digits, so this is a somewhat awkward fit. Instead we might consider that the Analytical Engine could be equipped with a few special purpose three decimal digit index registers; these could then be used to provide an additional offset in a load instruction.

For the purposes of Ada’s program only a single additional index register would be necessary. We could imagine that “LOADI v[20]” would correspond to “LOAD v[20 + *i*]” where *i* represents the value in the index register. So if *i* = 1 the instruction “LOADI v[20]” would be the same as “LOAD v[21]”. And there would be a corresponding “STOREI” instruction. To increase the offset we would need an “INC” instruction, which would increment the number in the index register by 1. This would be sufficient to implement Ada’s program.
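A sketch of how such an index register might behave (the `loadi` helper and the register `i` are my own hypothetical names; nothing like this is documented for the real Engine):

```python
# sketch: "LOADI v[20]" fetches v[20 + i], where i is an index register
from fractions import Fraction

store = {21: Fraction(1, 6), 22: Fraction(-1, 30), 23: Fraction(1, 42)}
i = 1                        # the single index register

def loadi(base):
  return store[base + i]     # indirect load through the index register

b1 = loadi(20)               # i = 1: reads v[21], i.e. B[1]
i += 1                       # the INC instruction
b3 = loadi(20)               # i = 2: reads v[22], i.e. B[3]
```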

But perhaps this approach is too influenced by modern computing. A different approach would be to allow the LOAD and STORE instructions to get a variable from an alternate stack of cards: “LOAD alt” and “STORE alt”. That way the alternate stack of cards could be filled with the addresses of v21 to (say) v30. In operation 10 the address that the first LOAD card uses isn’t punched on the card itself; instead it is read from the alternate stack (it’s the first card in the stack so it would be v21), and the stack moves to the next card. When we reach operation 21 we would again use a “LOAD alt” instruction to get the index of the variable from the alternate stack. The first time round the loop it will be v22, then v23 and so on. When the loop ends, and we have processed all the previously calculated results, the next card in the alternate stack will be the correct address to store the newly computed number. So operation 24 uses “STORE alt” and the number is stored.

Finally at the end of the diagram as well as looping back to the beginning we also rewind the alternate card deck and we are ready to go again without needing to alter any of the cards. After 10 iterations the first 10 Bernoulli numbers will have been stored in the variables referenced by the alternate stack, and the program can halt.

To produce the transliteration of Ada’s program that can run without intervention I have adopted the addition of an *index register* to allow indirection, although, as far as I know, this is not historically accurate.

Finally Ada notes that the process described in her diagram produces results of the correct magnitude, but not the correct sign. If there is no simple way to change the sign of a value, this can easily be remedied by subtracting the result (in v[13]) from the variable that the result is to be stored in (as it will initially be 0), and then storing it back into the result variable. Consequently I’ve adjusted operation 24 from “v[24] = v[13] + v[24]” to “v[24] = v[24] − v[13]”; the sign is then correctly set for the value to be reused by the algorithm.
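The adjustment is easy to verify with a little arithmetic (taking B[7] = −1/30 as an example):

```python
# checking the sign trick: the result variable starts at zero, so
# "v[24] = v[24] - v[13]" stores the negation of v[13]
from fractions import Fraction

v13 = Fraction(1, 30)  # say the loop left 1/30 (the wrong sign) in v13
v24 = Fraction(0)      # result variables are initially zero
v24 = v24 - v13        # operation 24 (as adjusted)
print(v24)             # -> -1/30, the correctly signed result
```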

Writing an equivalent pseudo-assembly language version of my transliteration would look something like this:

```
# initialise the variables
SET v[1] 1          # constant 1
SET v[2] 2          # constant 2
SET v[3] 1          # n = 1

# operation 0: initialise the index register
SET i 1

# operation 1
MUL
LOAD v[2]
LOAD v[3]
STORE v[4]
STORE v[5]
STORE v[6]

# operation 2
SUB
LOAD v[4]
LOAD v[1]
STORE v[4]

# operation 3
ADD
LOAD v[5]
LOAD v[1]
STORE v[5]

# operation 4
DIV
LOAD v[4]
LOAD v[5]
STORE v[11]

# operation 5
DIV
LOAD v[11]
LOAD v[2]
STORE v[11]

# operation 6
SUB
LOAD v[13]
LOAD v[11]
STORE v[13]

# operation 7
SUB
LOAD v[3]
LOAD v[1]
STORE v[10]

# branch if zero
BRANCH ZERO (operation 24)

# operation 8
ADD
LOAD v[2]
LOAD v[7]
STORE v[7]

# operation 9
DIV
LOAD v[6]
LOAD v[7]
STORE v[11]

# operation 10
MUL
LOAD v[20 + i]      # indirection
LOAD v[11]
STORE v[12]
INC i               # index register is incremented

# operation 11
ADD
LOAD v[12]
LOAD v[13]
STORE v[13]

# operation 12
SUB
LOAD v[10]
LOAD v[1]
STORE v[10]

# branch if zero
BRANCH ZERO (operation 24)

# operation 13
SUB
LOAD v[6]
LOAD v[1]
STORE v[6]

# operation 14
ADD
LOAD v[1]
LOAD v[7]
STORE v[7]

# operation 15
DIV
LOAD v[6]
LOAD v[7]
STORE v[8]

# operation 16
MUL
LOAD v[8]
LOAD v[11]
STORE v[11]

# operation 17
SUB
LOAD v[6]
LOAD v[1]
STORE v[6]

# operation 18
ADD
LOAD v[1]
LOAD v[7]
STORE v[7]

# operation 19
DIV
LOAD v[6]
LOAD v[7]
STORE v[9]

# operation 20
MUL
LOAD v[9]
LOAD v[11]
STORE v[11]

# operation 21
MUL
LOAD v[20 + i]      # indirection
LOAD v[11]
STORE v[12]
INC i               # increment index register

# operation 22
ADD
LOAD v[12]
LOAD v[13]
STORE v[13]

# operation 23
SUB
LOAD v[10]
LOAD v[1]
STORE v[10]

# branch if non-zero
BRANCH NON-ZERO (operation 13)

# operation 24
SUB
LOAD v[20 + i]      # indirection
LOAD v[13]
STORE v[20 + i]     # indirection

# the result can be output at this point
PRINT

# operation 25
ADD
LOAD v[1]
LOAD v[3]
STORE v[3]

# reset working variables
SET v[7] 0
SET v[13] 0

# repeat
# (or if v[3] < N)
BRANCH ALWAYS (operation 0)
HALT
```

Each program line would correspond to a punched card that would be fed to the Analytical Engine, so this is the closest we can get to what the original stack of punched cards, and hence the first program, would have looked like.

**Note:** Whilst writing up this article I have found a lot more information out about the Analytical Engine than when I started, and there is much more for me to go through. I intend to investigate the Analytical Engine further, and I will correct any glaring errors written above as I become more familiar with the subject. Please let me know of any problems you find and I will endeavour to sort them out.

— J.M.R.

Here are some useful resources that I have found about Ada and the Analytical Engine:

- http://www.fourmilab.ch/babbage/contents.html
- http://www.cs.virginia.edu/~robins/Ada_and_the_First_Computer.pdf
- http://h14s.p5r.org/2012/12/bernoulli.html

[¹] Apologies if you can’t access these programmes. BBC iPlayer is only available in the UK, and programmes are only available for a limited time.

[²] Ada’s indices for the Bernoulli Numbers are one less than the indices now generally used.

31 December 2014

Posted by on **Happy New Year from Enigmatic Code!**

There are now 763 Enigma puzzles on the site. There is a complete archive of all puzzles published from 1979 – 1983 and also from 2004 – 2013. That is about 43% of all Enigma puzzles, and leaves just over 1,000 to add to the site.

In 2014 I added 182 puzzles to the site – here is my selection of the ones I found most interesting to solve programmatically:

- **Enigma 166: Water storage**
- **Enigma 170: Mini-multicolour**
- **Enigma 184: Product sum**
- **Enigma 188: Subtract away regardless**
- **Enigma 203: Round walk**
- **Enigma 209: Double number slab**
- **Enigma 216: Point to point**
- **Enigma 227: Set a tease** (amazingly I’m choosing a long division puzzle)
- **Enigma 229: Five-point circle** (see also Enigma 136)
- **Enigma 241: The other magic cuboctahedron** (see also Enigma 225)

- **Enigma 1267: Prime progression**
- **Enigma 1271: Roman ring**
- **Enigma 1275: Back to basics**
- **Enigma 1277: Colourfields**
- **Enigma 1295: United could win** (amazingly I’m choosing a football puzzle)
- **Enigma 1309: Fill, cut and fit**
- **Enigma 1319: Latin Christmas tree**
- **Enigma 1320: Around the tree**
- **Enigma 1322: Geometric runs**
- **Enigma 1328: Prime pyramid**
- **Enigma 1335: I know where you live!**
- **Enigma 1337: A powerful square**
- **Enigma 1356: Magic spot**

Of course the most challenging problem of 2014, but certainly not the most fun, was getting a phone line and internet service at my new house. But I did manage to keep up with the posting schedule (although somewhat erratically) during the 3 months that I was without internet access.

Thank you to everyone who has contributed to the site by adding their own solutions (programmed or otherwise) or their insights and questions.

Here is the 2014 Annual Report for **Enigmatic Code** created by **WordPress**.

1 January 2014

Posted by on There are now 581 *Enigma* puzzles posted to the site, with a complete archive of puzzles from February 1979 to April 1982, and October 2005 up to the final *Enigma* puzzle in December 2013. This is just under one third of all *Enigma* puzzles published in **New Scientist**, and leaves around 1,200 more to find and add to the site.

In 2013 I added 286 *Enigma* puzzles to the site, the most challenging ones have, of course, been added to the list of **Challenging Puzzles**, but here is a list of those that I found the most fun to solve during 2013.

**New Puzzles:** (Originally published in **New Scientist** in 2013)

- **Enigma 1734: Friendly factors** / 30th January 2013

**Modern Classic Puzzles:** (Originally published between 2005 and 2007)

- **Enigma 1359: The weaker sex** / 24th September 2005
- **Enigma 1363: Hat stand** / 22nd October 2005
- **Enigma 1375: Patterns of colour** / 21st January 2006
- **Enigma 1379: Counting answers** / 18th February 2006
- **Enigma 1386: Fair shares** / 8th April 2006
- **Enigma 1387: Something to declare** / 15th April 2006
- **Enigma 1399: A row of numbers** / 8th July 2006
- **Enigma 1404: No repeats** / 12th August 2006
- **Enigma 1416: Filling in the squares** / 4th November 2006
- **Enigma 1431: Patience** / 24th February 2007
- **Enigma 1444: Backslide** / 26th May 2007
- **Enigma 1446: Double entendre** / 9th June 2007

**Ancient Classic Puzzles:** (Originally published between 1979 and 1982)

- **Enigma 37: Blocked** / 1st November 1979
- **Enigma 54: Grid halving** / 6th March 1980
- **Enigma 58: Octos** / 3rd April 1980
- **Enigma 65: Plus and minus** / 3rd July 1980
- **Enigma 69: Maximum Queen moves** / 31st July 1980
- **Enigma 70: Find the catch** / 7th August 1980
- **Enigma 73: Quartetting** / 28th August 1980
- **Enigma 77: Missing triangles** / 25th September 1980
- **Enigma 82: Quad-spoiling** / 30th October 1980
- **Enigma 86: Septos** / 27th November 1980
- **Enigma 90: Face value** / 1st January 1981
- **Enigma 95: For love and money** / 5th February 1981
- **Enigma 96: Different differences** / 12th February 1981
- **Enigma 106: Magic box** / 23rd April 1981
- **Enigma 108: Longest journey** / 7th May 1981
- **Enigma 112: Two-square dissection** / 4th June 1981
- **Enigma 120: No right turns** / 30th July 1981
- **Enigma 128: All the junctions** / 24th September 1981
- **Enigma 130: Lettered league** / 8th October 1981
- **Enigma 135: Funny ending** / 12th November 1981
- **Enigma 144: Spoiling triangles and quads** / 21st January 1982
- **Enigma 146: Around the clock** / 4th February 1982
- **Enigma 148: I spy** / 18th February 1982
- **Enigma 152: The highways of Genoland** / 18th March 1982

Sadly, I think this shows why **New Scientist** have chosen to discontinue new *Enigma* puzzles. Although almost a fifth of the new puzzles I put up on this site were originally published in 2013, only one of them made it to this list. (See also my **300 Enigma Analysis** in April 2013. The drought never ended, and at the end of play there had been a gap of 68 Enigmas that didn’t make the **Challenging** list).

In 2013 I enabled **Ratings** on the puzzles, which means you can give the puzzles you attempt a rating between 1 and 5 stars. I try to rate the puzzles as I add them to the site, and when I come across an old puzzle that I haven’t rated I rate that retrospectively as well.

Here is the 2013 stats report for **Enigmatic Code** prepared by **WordPress**.

4 April 2013

Posted by on There is now a continuous span of 300 *Enigma* puzzles on the site, starting from **Enigma 1445** (June 2007) and running to the most recently published puzzle, **Enigma 1743** (April 2013). So I thought it would be nice to have a little analysis.

By my own subjective measure there have been 21 “**Notable Enigmas**” during that period (for me a “Notable Enigma” is one that involves a significant and enjoyable programming challenge).

Here’s the interval between those Notable Enigmas, plotted as a bar chart. (Note that the top bar is not an interval between Notable Enigmas, rather the period since the last one).

On average there is a Notable Enigma every 14.3 puzzles [*], but they are not evenly distributed. For instance, when I started the Enigmatic Code site (November 2011), we’d recently had two Notable Enigmas quite close together (**Enigma 1661** and **Enigma 1669**), but before that we’d had a “drought” lasting for 51 puzzles (there are 51 editions of **New Scientist** published per year), and before that **Enigma 1589** broke the longest recorded drought of 53 puzzles.

Looking back I had in mind the pleasant era between **Enigma 1491** (April 2008) and **Enigma 1520** (November 2008) when I set up the site. In that period we never had to wait longer than 8 weeks for a challenging programming puzzle to come along.

We are currently in the third longest recorded “drought”. There have been 32 puzzles published since **Enigma 1712** (August 2012). But that’s not to say some recent puzzles haven’t been interesting – just not much of a programming challenge. For instance:

- **Enigma 1715** (September 2012) generated some interesting discussion.
- **Enigma 1718** (October 2012) contained some interesting analysis.
- **Enigma 1719** (October 2012) led to some interesting variations on algorithms.
- **Enigma 1732** (January 2013) got me reading up about Harmonic Series.
- **Enigma 1739** (March 2013) got me looking at Pell’s Equation, Continued Fractions and their use in computing square roots.

I hope we soon end the drought and have a new addition to the Notable Enigmas at the top of the list, but in the meantime I’m sure I shall continue to unearth gems hidden in the back catalogue of Enigma puzzles. Or I shall have to become less picky!

—

[*] There are also an additional 78 Enigma puzzles on this site from 1979 and 1980, when the puzzle was in its infancy. These have contributed 7 Notable Enigmas over 70 puzzles, so at the beginning Enigma had a faster rate, with Notable Enigmas turning up every 10 puzzles.

1 January 2013

Posted by on The WordPress.com stats helper monkeys prepared a 2012 annual report for this site.

Here’s an excerpt:

> In 2012, there were **215** new posts, growing the total archive of this site to 294 posts.
>
> The busiest day of the year was February 24th with **372** views. The most popular post that day was **Enigma 45: Six squares – harder**.
