[Kde-games-devel] AI library in the gamelibs?

Ian Wadham iandw.au at gmail.com
Sun Jan 5 02:02:58 UTC 2014


Hi Inge,

It's nice to see you back at KDE Games … :-)

On 05/01/2014, at 4:52 AM, Inge Wallin wrote:

> On Saturday, January 04, 2014 12:06:01 Ian Wadham wrote:
> > Secondly, I wonder if the above approaches could provide a better
> > AI player in KJumpingCube (Einstein?). ATM the algorithm is basic
> > MiniMax, preceded by an algorithm to pick a set of "likely" moves.
> > Kepler AI is the original algorithm and Newton AI is one I added.
> > 
> > The problems with KJC, as I see it, are that there is usually a large
> > number of possible moves and that each move can require time
> > to play its outcome, because of the possibility of "cascading" moves
> > spreading all over the board.  I have speeded up the processes
> > involved quite a lot, but am wondering (vaguely) where to go next.
> > Newton and Kepler are fairly easy to beat at the highest level
> > (i.e. largest number of moves in the lookahead).
>  
> I'm replying to this part in private now since I think there is a good solution to be had for the price of very little development time.
>  
> I have looked at the AI code in more detail now and I am quite convinced that with just one day's work or so you can have a very much stronger AI. Mainly, I'd structure the search a bit better and use alpha-beta with iterative deepening. I'd leave hashing and a transposition table for later, along with more elaborate optimizations.
>  
> To comment on your statement above, I don't think that 36 possible moves (for a game with a side of 6) is very large. Even 100, when you have a side of 10, is not that much.

The move tree has a maximum depth of 5 or 6 and a maximum breadth of 4 at each
node, which gives the AI up to 1024 moves to try. With more than that, I found I was
running out of CPU time on large boards, even with an Intel i7 4-core processor.

I did not try alpha-beta pruning.  See the TODO comment in lines 266-7 of
file ai_main.cpp.  The spirit is willing …  Unfortunately, my detailed knowledge
of AI algorithms is limited to what I have read in Martin Heni's and Andreas
Beckermann's excellent book "Open Source Game Development".
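
Just so we are talking about the same thing, here is roughly what I understand
by alpha-beta with iterative deepening, sketched against a made-up Position
interface (legalMoves(), play(), isGameOver() and evaluate() are placeholders,
not the real classes in ai_main.cpp):

  #include <algorithm>
  #include <limits>
  #include <vector>

  struct Position {                           // hypothetical stand-in for the game state
      std::vector<int> legalMoves() const;    // cubes the mover may add a point to
      Position play(int move) const;          // position after that move, cascades resolved
      bool     isGameOver() const;            // one player owns every cube
      int      evaluate() const;              // ValPos from the side to move's point of view
  };

  // Negamax with alpha-beta pruning: the value of 'pos' for the side to move.
  int alphaBeta(const Position &pos, int depth, int alpha, int beta)
  {
      if (depth == 0 || pos.isGameOver())
          return pos.evaluate();
      for (int move : pos.legalMoves()) {
          alpha = std::max(alpha, -alphaBeta(pos.play(move), depth - 1, -beta, -alpha));
          if (alpha >= beta)
              break;                          // cut-off: the opponent will avoid this line
      }
      return alpha;
  }

  // Iterative deepening: search depth 1, 2, 3, ... and keep the best move
  // from the last depth completed; a time check would go inside the loop.
  int bestMove(const Position &pos, int maxDepth)
  {
      const int INF = std::numeric_limits<int>::max() - 1;
      int best = pos.legalMoves().front();    // assumes at least one legal move
      for (int depth = 1; depth <= maxDepth; ++depth) {
          int bestValue = -INF;
          for (int move : pos.legalMoves()) {
              int value = -alphaBeta(pos.play(move), depth - 1, -INF, INF);
              if (value > bestValue) {
                  bestValue = value;
                  best = move;
              }
          }
      }
      return best;
  }

The cut-off is what should keep the tree small, and the deepening loop is where
a time check could replace my fixed depth-and-breadth limits.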
 
>  If you like, I can write a new AI for you, but I see that you have used threading to make the UI responsive even while it's thinking.  For this work, I'd ignore the threading and you would have to plug the new AI into it yourself, which should be pretty easy. Is this interesting to you?

It certainly is !!!!!!!  Please have a go.

> If it is, I think we can get one step further by improving the evaluation function.  As I understood it, you value the position for one side as:
>  
>  Val(c) = Ncubes(c)^2 + Ndots(c)
>  
> where
>   Val(c) is the value of the position for color c 
>   Ncubes(c) is the number of owned cubes for color c
>   Ndots(c) is the sum of the dots on the owned cubes for color c

I agree, but I have not been able to think of a better evaluation function.
Actually it is Val(c) = Ncubes(c)^2 + sum((dots per owned cube)^2), but let's
not quibble.  It came from the original code and seems a little arbitrary.
Why sum the *squares* of the dots per cube?
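
Written out, as I understand it, the evaluation would be something like this
(Cube and Board are just illustrative stand-ins, not the game's real classes):

  #include <vector>

  struct Cube  { int owner; int dots; };   // owner: 0 = neutral, 1 or 2 = a player
  using Board = std::vector<Cube>;

  // Val(c) = Ncubes(c)^2 + sum of (dots on each cube owned by c)^2
  int val(const Board &board, int colour)
  {
      int nCubes = 0, dotsSquared = 0;
      for (const Cube &cube : board) {
          if (cube.owner == colour) {
              ++nCubes;
              dotsSquared += cube.dots * cube.dots;
          }
      }
      return nCubes * nCubes + dotsSquared;
  }

  // ValPos(p) = Val(one) - Val(two)
  int valPos(const Board &board) { return val(board, 1) - val(board, 2); }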

Also, there can be positions where you have only one or two cubes left
and can still win.  It can be very satisfying when this happens.  I imagine
it could also be very upsetting if an AI found out how to do it … ;-)
 
> And the total value of the position is:
>  
>   ValPos(p) = Val(one) - Val(two)
>  
> Right?  That's probably a good start since I wasn't able to win even against the beginner level.

Beginner just keeps adding to the cube with the most points, or picks one at
random if two or more are equal.  I just lost to Beginner too, but
I was not thinking hard and he happened to pick a corner cube to start …
He should really be a separate "AI", not a level within Kepler or Newton,
but that is how the original code was structured.

In general, you can beat Beginner easily by just following the "Strategies
and Tips" section in the handbook, i.e. just ignore him and let him expand
away near where he starts, while you build up strength in the corners and
sides and a little in the middle.  Eventually he meets you and you swallow
him up.  I won easily the second time … :-) … he chose to start in the middle.

> But I think we can do better. How do you think when you yourself are playing the game?

I think like a Go player.  Corners strong, sides next strongest.  Space your
pieces out at the start.  Do not attack your enemy too early or too closely.

> How do you select your moves?

See the Newton AI, file ai_newton.cpp, enum Value(…).

> How do you know if you're ahead?

I don't usually know.  Nor can I usually see more than a move or two ahead,
or even forecast the result of a cascade accurately.  The AI has all those
advantages over me, but I can still win sometimes.  It is easier for me to win
on a large board, where there are several battles going on and I can give
them priorities (as in Go), but the AI has to pick battles at random.

It is hard to win on a 5x5 board and I have established that a 3x3 board
is deterministic, like Tic-Tac-Toe.  I forget whether the first or second player
must win, but I remember that a sacrifice is required at one point and
neither Kepler nor Newton will ever choose a sacrifice.  With the 3x3
board, I patched the MiniMax to explore all possible moves to unlimited
depth and found that a win always happens within 12 or 13 moves.
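
In effect that patch turned the MiniMax into a brute-force solver, roughly like
the sketch below (the Position interface and opponentOwnsEverything() are
hypothetical placeholders, not the real code; adding a ply counter to
something like this is how you would get the 12-or-13-move figure):

  #include <vector>

  struct Position {                             // hypothetical stand-in, as before
      std::vector<int> legalMoves() const;
      Position play(int move) const;
      bool opponentOwnsEverything() const;      // the previous move captured the whole board
  };

  // Exhaustive search with no depth limit: true if the side to move can
  // force a win.  Only practical on a tiny board such as 3x3.
  bool sideToMoveWins(const Position &pos)
  {
      if (pos.opponentOwnsEverything())
          return false;                         // the previous move already won the game
      for (int move : pos.legalMoves())
          if (!sideToMoveWins(pos.play(move)))  // a reply position the opponent cannot win from
              return true;
      return false;
  }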

> If we can answer those questions, we will have the key to a better evaluation function.

AFAIK there is no mathematical theory of KJumpingCube.

I believe it is a "theorem" that a cascade move can be followed through
in any order and will always achieve the same ending position, unless
there is a win along the way.  I have not been able to prove that, but
empirically it seems to be always the case.  So I wrote and tested a
new move procedure that would perform cascades in an order that
(IMO) is easier for the human eye to follow.
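
The idea is simply to resolve the cascade in one predictable order.  Here is a
rough sketch, with illustrative Cube/Board types, a hypothetical neighbours()
helper and a simplified overflow rule; it is not the actual procedure I wrote:

  #include <vector>

  struct Cube { int owner; int dots; };
  using Board = std::vector<Cube>;

  std::vector<int> neighbours(const Board &board, int index);  // hypothetical: adjacent cube indices
  bool oneOwnerOnly(const Board &board);                       // hypothetical: the game has been won

  // Add a dot for 'mover' at 'start', then keep expanding any cube whose dots
  // exceed its number of neighbours, always taking the first such cube in
  // board order, until nothing overflows or there is a win along the way.
  void doMove(Board &board, int mover, int start)
  {
      board[start].owner = mover;
      board[start].dots += 1;
      bool expanded = true;
      while (expanded && !oneOwnerOnly(board)) {
          expanded = false;
          for (int i = 0; i < int(board.size()); ++i) {
              const std::vector<int> around = neighbours(board, i);
              if (board[i].dots > int(around.size())) {
                  board[i].dots -= int(around.size());     // the cube "jumps"
                  for (int n : around) {
                      board[n].owner = mover;              // capture each neighbour
                      board[n].dots += 1;
                  }
                  expanded = true;
                  break;                                   // rescan from the first cube
              }
          }
      }
  }

If the "theorem" holds, any other scan order would finish in the same position,
which is what makes a fixed, eye-friendly order safe to use.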

Cheers, Ian W.


