Scala

December 5, 2015

It’s been ages since I blogged. There are quite a few things I could probably write about – for example, the chess database I’ve been working on as a side project for most of this year. But that will have to wait for another time – right now I want to talk about Scala.

I suppose most things worth saying about Scala have already been written or spoken about. But as someone who has primarily worked in Java for many years, starting a new job working with a new language full-time is quite the challenge. The ease of inter-op between Scala and Java does, admittedly, make things a lot easier. Not that I’m writing Java in Scala syntax, but being able to compensate for a gap in supporting tools and libraries by calling the corresponding Java libraries directly is a real benefit – true of any new language that runs on the JVM, of course, but the cross-language syntax is a lot easier than in some other JVM languages like Clojure.

I’ve been doing this for over a month now and these are my highlights:

  • No getters, setters or lines of code dedicated to initialising fields based on constructor arguments – simple and elegant
  • An object-oriented AND functional language at the same time? Sounds weird but it somehow seems to work
  • I’m quite used to lazy evaluation while operating on lists and maps – for example in Clojure. But a method that takes a list of Java Futures as an argument and returns another list of Futures, after mapping over the values those futures will eventually return, still has me confused. When I mentioned this to a colleague, he suggested a good learning exercise: take a list of futures and convert it into a single future that returns a list. Bonkers! (There’s a sketch of both after this list.)
  • Using Options so that we don’t have to deal with NPEs is a nice idea in theory, but in practice it seems to bloat the code unnecessarily. It does avoid having to check for null everywhere – I’m still not convinced it works better than the Java approach, but perhaps that will change as I continue to work with it. (See the second sketch after this list.)
  • My last point (for now) doesn’t really have anything to do with Scala. I hadn’t known about Swagger until I started here – very nice documentation interface for your REST APIs!
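
To make the futures point concrete, here’s a minimal sketch using the standard library’s scala.concurrent API (the object name is mine, not code from work). Mapping over a List[Future[Int]] transforms each eventual value but still leaves a list of futures; Future.sequence performs my colleague’s exercise of flipping it into a single Future[List[Int]].

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object FuturesDemo extends App {
  // Pretend these are three independent asynchronous calls.
  val futures: List[Future[Int]] = List(Future(1), Future(2), Future(3))

  // Mapping over the list maps over each future's eventual value;
  // the result is still a List[Future[Int]].
  val doubled: List[Future[Int]] = futures.map(_.map(_ * 2))

  // The colleague's exercise: List[Future[Int]] => Future[List[Int]].
  val flipped: Future[List[Int]] = Future.sequence(doubled)

  println(Await.result(flipped, 5.seconds)) // List(2, 4, 6)
}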
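
And for the Options point (the case class also shows the no-boilerplate constructors from the first bullet), another hedged sketch – User and emailFor are invented for illustration. Map#get already returns an Option, so lookups compose via flatMap without a single null check:

object OptionDemo extends App {
  // A case class: constructor arguments become fields, no getters needed.
  case class User(name: String, email: Option[String])

  val users = Map(
    "raghav" -> User("raghav", Some("raghav@example.com")),
    "guest"  -> User("guest", None))

  // Composing Option-returning lookups: no null checks anywhere.
  def emailFor(id: String): Option[String] =
    users.get(id).flatMap(_.email)

  println(emailFor("raghav").getOrElse("no email")) // raghav@example.com
  println(emailFor("nobody").getOrElse("no email")) // no email
}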

Protected: Another Chess Database

January 5, 2015

Coding challenge

November 2, 2011

Here’s my brute-force Clojure solution to Cedric Beust’s latest coding challenge. The code maintains a count of how many target weights each combination can measure before iterating through the map and returning the maximum – which, of course, turns out to be all 40!

(def total 40)

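;; check returns 1 if the target weight can be measured with the given
;; weights (placed on either pan of the balance, hence the sums and
;; differences), and 0 otherwise.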
(defn check
  ([a target]
    (if (= a target)
      1
      0))
  ([a b target]
    (if (or (= 1 (check a target)) (= 1 (check b target)) (= (+ a b) target) (= (- a b) target) (= (- b a) target))
      1
      0))
  ([a b c target]
    (if (= 0 (+ (check a b target) (check a c target) (check b c target) (check (+ a b) c target) (check (+ a c) b target) (check (+ b c) a target)))
      0
      1))
  ([a b c d target]
    (if (= 0 (+ (check a b c target) (check a b d target) (check b c d target) (check (+ a b) c d target) (check (+ a c) b d target) (check (+ a d) b c target) (check (+ b c) a d target) (check (+ b d) a c target) (check (+ c d) a b target)))
      0
      1)))

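;; numberOfWeights counts how many of the targets 1..total a given
;; four-weight combination can measure.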
(defn numberOfWeights [combo]
  (loop [x 1 sum 0]
    (if (> x total)
      sum 
      (recur (inc x) (+ sum (check (first combo) (second combo) (nth combo 2) (nth combo 3) x))))))

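;; run enumerates combinations [a b c d] with a <= b <= c and
;; d = total - (a + b + c), mapping each combination to its score.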
(defn run 
  ([a b c d result]
    (let [combo [a b c d]] 
      (if (= total (+ a b c d)) 
        (assoc result combo (numberOfWeights combo))
        result)))
  ([a b c result]
    (let [d (- total (+ a b c))]
      (if (< d 0)
        result
        (run a b c d result))))
  ([a b result]
    (loop [c b r result]
      (if (>= (+ a b c) total)
        r
        (recur (inc c) (run a b c r)))))
  ([a result]
    (loop [b a r result]
      (if (>= (+ a b) (- total 1))
        r
        (recur (inc b) (run a b r)))))
  ([]
    (loop [a 1 result {}]
      (if (>= a (- total 2))
        result
        (recur (inc a) (run a result))))))

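;; main scans the result map for the combination with the highest score.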
(def main
  (let [result (run)]
    (loop [resultKeys (keys result) maxKey nil maxValue 0]
      (let [candidate (first resultKeys)]
        (if (nil? candidate)
          {maxKey maxValue}
          (if (or (nil? maxKey) (> (get result candidate) maxValue))
            (recur (next resultKeys) candidate (get result candidate))
            (recur (next resultKeys) maxKey maxValue)))))))

Running it in the Clojure REPL gives me:

user=> (load-file "/home/raghav/spikes/clojure/weights.clj")
#'user/main
user=> main
{[3 9 27 1] 40}

Parallel Sudoku

August 15, 2011

Following on from my previous post, where I wrote a Sudoku solver in Clojure, I tried to improve the performance of the algorithm by having each branch of my search tree execute in a separate thread. I first tried using the ref construct, and while it solved the easy puzzle, there was a non-trivial bug that sent it into an infinite loop on the hard one. I finally gave up on it and decided to use the agent construct to notify the main thread when the puzzle was solved. The main thread just hangs around, waiting for the agent’s state to change to the right solution.

While the performance of the two solutions is comparable on the easy puzzle, there is a noticeable difference on the hard one – the parallel solution is consistently 3.5-4 times slower! I have tried changing some of the parameters (increasing the thread pool size to 1000; decreasing it to 4, the number of cores on my laptop; increasing the sleep interval on the main thread to try and reduce the amount of processor time it grabs; etc.), all to no avail. I can’t believe that context switching between threads costs so much more than a parallel search gains.

So, why is this slower, and by such a magnitude? Alternatively, how could I profile this application (it takes approximately 4 seconds to run) to get some hints?

Here is the relevant additional code:

(def executors (java.util.concurrent.Executors/newFixedThreadPool 100))

(defn solve-in-parallel-worker [grid counter solution-agent]
  (if (>= counter (* size size))
    ;; The grid is complete: report the solution via the agent.
    (send-off solution-agent conj grid)
    (if (grid counter)
      ;; Cell already filled: move on to the next cell in the same thread.
      (solve-in-parallel-worker grid (inc counter) solution-agent)
      (let [row-number (quot counter size)
            col-number (rem counter size)
            sub-grid-number [(quot row-number sqrt) (quot col-number sqrt)]]
        ;; Submit one task per candidate value; the dummy binding exists only
        ;; so that the .submit call can ride along inside recur.
        (loop [possibles
               (seq (missing (clojure.set/union
                               (row grid row-number)
                               (col grid col-number)
                               (sub-grid grid (sub-grid-number 0) (sub-grid-number 1)))))
               dummy nil]
          (if (nil? (first possibles))
            nil
            (recur
              (next possibles)
              (.submit executors
                (fn [] (solve-in-parallel-worker
                         (assign grid row-number col-number (first possibles))
                         (inc counter)
                         solution-agent))))))))))

(defn solve-in-parallel [grid]
  (let [solution-agent (agent [])]
    (.submit executors (fn [] (solve-in-parallel-worker grid 0 solution-agent)))
    ;; Busy-wait (with a short sleep) until some worker reports a solution.
    (while (= @solution-agent [])
      (Thread/sleep 1))
    (first @solution-agent)))

Clojure

July 27, 2011

After ages and ages, I finally got into a new language. On the pretext of preparing for a brown bag session with the team, I started reading about Clojure and all of its benefits. The brown bag went off decently enough – I re-used Rich Hickey’s slides and talked about functional programming, Java inter-op and mainly the concurrency mechanisms. While preparing, I copy-pasted several bits of code, ran them, etc., and started getting a feel for the language. But the penny really dropped at the very end of the session when I was asked to write a simple hello world…and couldn’t! I later found (def hello (fn [] "Hello world")) on the Clojure website and promptly sent it off to the guys 🙂

Since I wasn’t totally put off by the language, and was even somewhat impressed by it, I decided I needed to do some hands-on development to learn its intricacies. To begin with, I stayed away from all the concurrency constructs and picked a problem that would exercise the functional elements – a Sudoku solver. It’s most likely not the most efficient algorithm, though I might try to parallelize it.

  • My “grid” is actually just a vector of 81 elements.
  • Rows, columns and sub-grids are views of the appropriate indices on the vector.
  • setup-grid sets up the problem.
  • solve uses brute-force in a depth-first manner to solve the puzzle.

Having written no automated tests (only lots of manual testing via the REPL while writing the code), I tried it out with two examples from websudoku. The first, at the “easy” level, took 98.684439 msecs, whereas the “evil” level took 1139.137917 msecs. Here are the two examples:
Easy
3 - 8 2 9 1 - - -
1 7 9 - - - 2 6 -
- - - - - 6 1 - -
8 - 5 1 - - 6 - -
- - - 6 3 9 - - -
- - 6 - - 8 3 - 2
- - 1 5 - - - - -
- 5 4 - - - 8 1 7
- - - 8 1 7 4 - 6
Evil
- - 2 - - - - - 6
5 4 - 9 - - 2 - -
- - - 8 2 - - - -
- 5 - - - - 7 - -
- - 4 7 1 6 5 - -
- - 1 - - - - 8 -
- - - - 9 2 - - -
- - 9 - - 7 - 4 8
4 - - - - - 1 - -

And if you’re interested, here’s the code (and yes, I know there are a lot of brackets):

(def size 9)
(def sqrt (Math/sqrt size))
(def values
  (loop [c 1 v (set [])]
    (if (> c size)
      v
      (recur (inc c) (conj v (str c))))))

(defn index [x y]
  (+ (* x size) y))

(defn place [grid x y]
  (-> grid (nth (index x y))))

(defn assign [grid x y val]
  (assoc grid (index x y) (str val)))

(defn setup-base-grid []
  (vec (take (* size size) (cycle [nil]))))

(defn setup-grid [init-setup]
  (loop [grid (setup-base-grid) ks (keys init-setup) vs (vals init-setup)]
    (if (and ks vs)
      (recur (assign grid ((first ks) 0) ((first ks) 1) (first vs)) (next ks) (next vs))
      grid)))

(defn row [grid x]
  (let
    [start (* x size)
     end (+ start size)]
    (subvec grid start end)))

(defn col [grid y]
  (loop [c 0 v []]
    (if (>= c size)
      v
      (recur (inc c) (conj v (place grid c y))))))

(defn sub-grid [grid x y]
  (let
    [startx (* x sqrt)
     endx (+ startx sqrt)
     starty (* y sqrt)
     endy (+ starty sqrt)]
    (loop [c startx v []]
      (if (>= c endx)
        (vec v)
        (recur (inc c) (concat v (subvec (row grid c) starty endy)))))))

(defn pretty-print [grid]
  (loop [c 0 out ""]
    (if (>= c size)
      (println (.replace (.replace (.replace out "nil" "-") "[" "") "]" ""))
      (recur (inc c) (str out (println-str (row grid c)))))))

(defn missing [section]
  (clojure.set/difference values (set section)))

(defn solve
  ([grid] (solve grid 0))
  ([grid counter]
      (if (>= counter (* size size))
        grid
        (if (grid counter)
          (solve grid (inc counter))
          (let
            [row-number (quot counter size)
             col-number (rem counter size)
             sub-grid-number [(quot row-number sqrt) (quot col-number sqrt)]]
           (loop [possibles (seq (missing (clojure.set/union (row grid row-number) (col grid col-number) (sub-grid grid (sub-grid-number 0) (sub-grid-number 1)))))]
             (if (nil? (first possibles))
               nil
               (let [possible-grid (solve (assign grid row-number col-number (first possibles)) (inc counter))]
                 (if (nil? possible-grid)
                   (recur (next possibles))
                   possible-grid)))))))))

My first ever Gruenfeld

March 5, 2011

I first looked at playing the Gruenfeld about a year ago. Shockingly, in all the time since, nobody playing white against me had opened 1. d4 and followed up with 2. c4! Anyway, it was a double-edged decision to debut a new opening on the back of two (almost three) losses. Replay here.

1. d4 Nf6 2. c4 g6 3. Nc3 d5 4. cxd5 Nxd5 5. e4 Nxc3 6. bxc3 Bg7 7. Nf3 O-O
(7… c5 is the main line)

8. Qb3
(8. Be2)

8… c5 9. Bb2 Qa5 10. Nd2 Nc6
(10… cxd4 11. cxd4 Nc6 12. d5 Nd4 13. Qd3 Nf3+ 14. Qxf3 Bxb2)

11. Nc4 Qc7 12. d5 Ne5 13. Ne3 Ng4
(13… b5 14. Be2 (14. Bxb5 Rb8) 14…c4 15. Qc2)

14. Nxg4 Bxg4 15. h3 Bd7 16. Bb5 Bxb5 17. Qxb5 c4 18. O-O a6 19. Qa4 b5
(19… f5)

20. Qc2 Rfe8 21. Rad1
(21. a4)

21… Rad8 22. Qd2 Rd7 23. f4 Red8 24. Ba3 a5
(24… f5 25. d6 exd6 26. exf5 gxf5 27. Qd5+ Kh8)

25. e5 Rxd5
(25… b4 26. cxb4 c3) (25… Bf8)

26. Qxd5 Rxd5 27. Rxd5 b4
(27…Qb7 28. Rfd1 b4 29. Bb2 b3 30. axb3 cxb3 31. R1d4 Bh6 32. Kf2)

28. cxb4 Qb6+ 29. Kh2
(29. Kh1)

29… axb4 30. Rb1 Qe3
(30… Bh6 31. Bxb4 Bxf4+ 32. Kh1 Kg7)

31. Bxb4
(31. Rd8+ Bf8 32. Bxb4 Qxf4+ 33. Kh1 Kg7)

31… Qxf4+ 32. Kg1 Bxe5 33. Bxe7
(33. Re1)

33… h5
(33… Qh2+ 34. Kf2 Bg3+ 35. Kf3 Bc7)

34. Bc5 c3 35. Re1 Qh2+ 36. Kf1 Qh1+
(36… c2)

37. Bg1 Bc7 38. Rb5 c2 39. Rc1
(39. Kf2 h4)

39… Bf4
(39… Bh2 40. Kf2 Qxg1+ 41. Rxg1 Bxg1+ 42.Kxg1 c1=Q+)

40. Rxc2 Bh2
(40… Be3+?? 41. Rf2 Bxf2 42. Kxf2 and the queen is completely trapped!)

41. a4 Qxg1+ 42. Ke2 Qxg2+ 43. Kd1 Qf1+ 44. Kd2 Bf4+ 45. Kc3 Qxh3+ 46. Kb4 Bd6+ 47. Kc4 Qe6+ 48. Rd5 Qe4+ 0-1

Writing a peer-to-peer application

February 27, 2011

After spending more than a year writing a p2p application, I thought it was time to recap our experiences and summarise some of the essentials.

Why p2p?

Scaling

Most people have the naive impression that adding more machines or making servers more powerful is enough to achieve scale. Without adequate design, this approach will fail. For example, adding more web servers to a simple web application only helps until the single database server becomes the bottleneck, and if you add more database servers, you have to be prepared for the overhead of data replication. A typical p2p application takes the opposite approach: it runs on commodity hardware but is designed from the outset to leverage the power of a large number of machines. By its very nature, a p2p application is internet-scale.

High Availability/Failover

Due to the large number of available machines in a p2p network, the loss of a single machine (or a cluster of machines in a specific area) isn’t the end of the world. There is rarely any loss of data, as it is usually replicated. Other nodes in the network, on detecting the loss of machines, immediately take over the additional work until other machines re-enter the network. This is in contrast to most enterprise applications, which cannot provide service if connectivity to their data centre is lost!

Distributed compute and resources

There’s been a lot of discussion in recent years about “the wisdom of crowds”. I am not qualified to judge the credibility of that argument, but in a p2p network it holds very true: the larger the network, the closer its offered services come to functioning like a supercomputer!

How p2p?

Programming paradigms

1. Many small services
To take advantage of the resources of the distributed network, it is more efficient to break complex services up into their component parts. If specific resources on a node are over-extended, the node can refuse to service requests that would further tax those resources while continuing to accept requests for other services. This allows work to be distributed more uniformly among the many peers.
2. Asynchronous behaviour
Since every invocation of a service could potentially be a request over the network, it is essential to write event-based code. “Waiting” for a response blocks resources on the node unnecessarily. Instead, the underlying framework notifies services of incoming messages, and they continue their processing at that point in time.
3. Multi-threaded behaviour versus Non-blocking code
As a p2p node, there are two ways of handling incoming service requests. The first is the traditional way: on receiving a message, spin up a new thread or process to handle the request, then pick up the next message on the queue. However, this leads to complex concurrent code, which is never easy to test or maintain. The second, and the approach we’ve taken, is not to hand an incoming request off to a worker thread. Instead, since we encourage the development of small services, we handle each request on the same thread, with the understanding that any intensive operations will be performed asynchronously (locally or over the p2p network).
4. Retry mechanisms
It is paramount to always keep in mind that there is a lot of uncertainty in a p2p network – node churn can be quite common, network connectivity can be unstable, and so on. Since most of the work is done asynchronously, it is hard to keep track of tasks that are still in progress. A timeout should be identified for each task so that it can be retried if it hasn’t completed in the appropriate time. Care should also be taken that the list of queued tasks is persisted (and replicated), since any node storing this information can lose connectivity with the rest of its peers.
5. Idempotence
It is obvious that nodes will receive duplicate service requests, especially in the presence of retry mechanisms. Services should either
– recognise that a request is a duplicate of one already processed and respond with the previously computed result (there’s a small sketch of this option after the list), or
– ensure that multiple invocations of the same service with the same parameters produce the same result.
6. Everything is an ID
IDs might be the only static/reliable things in a p2p network. They are usually large – typically between 128 and 160 bits long. Each node in the network has an ID, and every data element has an ID. Every node “owns” its neighbouring ID space – in other words, an ID belongs to the node whose ID is closest to it. Assuming a uniform distribution of node IDs across the entire range, each node will own an equally large slice of the ID space. When nodes join or leave the network, their neighbours automatically determine the new ID range that they own. The ID space is circular – for example, in a 4-bit ID space, the IDs 0000 and 1111 are neighbours – which is why a p2p network is also referred to as a “ring”. (The second sketch after this list shows the ownership rule.)
7. Backwards-compatible services
It is physically impossible to upgrade all nodes in a ring simultaneously. If the network is not in a controlled environment (for example, if it is composed of personal computers belonging to internet users), upgrades can even be spaced out over months or years. It is therefore absolutely essential that a p2p application can deal with messages containing information it knows nothing about. This is another reason we decided to model our data as JSON – it allows each version of the application to deserialize only the parts of the data it understands.
8. Shared resources
There are some resources that might have to be shared across the nodes in a p2p ring. A common example is a pool of IP addresses that are exposed by a service provider. If a shared resource is available, one and only one node in the ring must “own” it. Nodes could send messages to each other advertising the current status of these resources. Alternatively, the resources could be managed via the DHT – this second option has the added benefit that it is very easy for a human to peek into the ring at any point in time and determine who owns what. A word of caution: shared resources resemble a crude form of locking, which is not ideal in a distributed world – so use them sparingly and with care!
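
To make the idempotence point concrete, here’s a minimal sketch of the first option – the RequestId type, the handle method and the in-memory cache are all invented for illustration, not our actual service code. A retried request with a known ID gets the previously computed response instead of being processed again:

import scala.collection.concurrent.TrieMap

// Hypothetical request-ID type, for illustration only.
final case class RequestId(value: String)

class IdempotentService[Req, Resp](process: Req => Resp) {
  // Remembers the response for every request ID seen so far. In a real
  // ring this cache would need persistence and eviction (see the notes
  // on retry mechanisms above).
  private val seen = TrieMap.empty[RequestId, Resp]

  // Duplicate requests (e.g. from retries) return the cached response.
  def handle(id: RequestId, request: Req): Resp =
    seen.getOrElseUpdate(id, process(request))
}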
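
And here’s a small sketch of the ID-ownership rule, assuming a toy 4-bit space with IDs as plain integers (real rings use 128-160-bit IDs): distance wraps around the ring, and an ID belongs to the node closest to it.

object RingDemo extends App {
  val bits = 4
  val space = 1 << bits // 16 IDs: 0000 .. 1111

  // Circular distance: 0000 and 1111 are neighbours, one step apart.
  def distance(a: Int, b: Int): Int = {
    val d = math.abs(a - b)
    math.min(d, space - d)
  }

  // An ID is owned by the node whose ID is closest to it.
  def owner(id: Int, nodes: Seq[Int]): Int =
    nodes.minBy(n => distance(id, n))

  val nodes = Seq(1, 6, 11)
  println(owner(15, nodes)) // 1 – ownership wraps around the ring
  println(owner(8, nodes))  // 6
}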

Messaging

There are three different kinds of messages that can be transmitted in any p2p network. The first is a directed message to a single node, whereas the other two fit the pub-sub paradigm:
– Point-to-point messages are directed from one node to another. The typical mechanism is to send the message to an ID; the node closest to that ID automatically handles the message and services the request.
– Messages can be published to a topic, and all nodes that have subscribed to that topic will receive the message.
– Messages can be anycast to a topic, so that at least one node subscribed to the topic will receive the message.

Data storage and retrieval

Data is stored in a Distributed Hash Table (DHT):
– Everything is a key-value pair.
– Data is replicated to avoid any single point of failure.

Replication

Data is usually replicated to a certain number of neighbouring nodes, where the number depends on the stability of the network. Replication can be done in a myriad of ways, depending on the characteristics of the data. We store data by serializing all updates through the node whose ID is closest to the ID of the data entity itself – we call this the “daddy node”. Once the daddy node has updated its local copy of the data, it notifies its neighbours of the change in state so that the replicas can be updated correspondingly. If the daddy node leaves the ring, one of its neighbours becomes the closest node to the ID of the data, and thereby the new daddy.

Optimistic concurrency

Handling concurrent updates of data isn’t easy in a DHT, so a number of p2p frameworks avoid the problem altogether by only storing immutable data. We use optimistic concurrency to work around the issue. When you want to update a data entity, you retrieve the latest version, make your changes to it locally, and then try to write it back to the DHT. If some other node sneaks in an update during this process, the daddy node notifies you that the entity has been modified since you last read it, and you attempt the whole process again. If you still fail after a large number of retries, the update fails with an error – probably a symptom of a bug in the code or connectivity issues within the ring. A sketch of the loop follows.
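
Here’s a minimal sketch of that read-modify-write loop, assuming a hypothetical versioned DHT client – the Dht trait, its read and compareAndWrite methods, and the version numbers are illustrative, not our real API:

// Illustrative versioned-DHT interface, not a real client API.
trait Dht[V] {
  def read(id: String): (V, Long) // current value plus its version
  // Writes only if expectedVersion is still current; true on success.
  def compareAndWrite(id: String, value: V, expectedVersion: Long): Boolean
}

object OptimisticUpdate {
  // Retry the read-modify-write cycle until the write lands, giving up
  // after too many conflicting updates (probably a bug or a flaky ring).
  def update[V](dht: Dht[V], id: String, modify: V => V, maxRetries: Int = 10): V = {
    var attempts = 0
    while (attempts < maxRetries) {
      val (current, version) = dht.read(id)
      val updated = modify(current)
      if (dht.compareAndWrite(id, updated, version)) return updated
      attempts += 1 // someone snuck in an update; re-read and retry
    }
    sys.error(s"update of $id failed after $maxRetries retries")
  }
}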

Transaction-less

The DHT and the p2p network offer no notion of a transaction, and there is no guarantee that a service won’t fail in the middle of a series of operations. In that case, you are reliant on retry mechanisms (with idempotence for the operations that have already executed) or on recovery mechanisms that can undo the completed work.

Much better, but still losing…

February 27, 2011

Playing white against a higher-rated player who is on quite a hot streak, I played well for a long time and the game was quite drawish. Then I missed a chance to get a slight advantage, misplayed the ending further, and allowed my opponent to win quite easily in the end. Replay here.

1. e4 c6 2. d4 d5 3. Nc3 dxe4 4. Nxe4 Nf6 5. Nxf6+ gxf6 6. Nf3
(6. c3)

6… Bg4 7. Be2 e6 8. h3
(8. O-O)

8… Bh5 9. Be3 Bd6 10. Qd2 Nd7 11. O-O-O Qc7 12. g4 Bg6 13. Bd3 O-O-O 14. Qe2
(14. c4)

14…Kb8 15. Kb1 Nb6 16. Bxg6 hxg6 17. Nd2
(17. c4)

17… f5 18. Nc4 f4 19. Bc1
(19.Nxd6 Qxd6 20. Bd2) (19. Nxb6 Qxb6 20. Bd2)

19… g5 20. Nxb6 Qxb6 21. Qe4! Qb5 22. h4 Qd5
(22… Rxh4 23. Rxh4 gxh4 24. Bxf4 Qd5 25. Qxd5 cxd5 26. Bxd6+ Rxd6 27. Rh1 e5 28. dxe5 Rh6)

23. Qxd5 cxd5 24. h5?!
(24. hxg5 Rhg8 25. Rh5 Rg6 26. Rdh1) The idea here was to get a passed pawn of my own to counter my opponent’s passer on f4. I briefly considered 24. hxg5 but was worried that my doubled g-pawns were too weak and could be rounded up – perhaps remembering the fiasco from my previous game, where I blundered on my 24th(!) move with 24… bxa4. This time around it was in fact the better move: black could still round the pawn up, but f4 would then be weaker and I could pick up black’s kingside pawns in the process. Whereas hxg5 would have given me a slight advantage, the move played leaves the game evenly balanced.

24… f5 25. f3
(25. Rdg1 fxg4 26. Rxg4 Be7 27. f3)

25… fxg4 26. fxg4 e5 27. dxe5 Bxe5 28. b3
(28. Rhe1 Bf6 29. Re6 Rhf8 30. c4 d4 31. Kc2)

28… Kc7 29. Ba3 Rd7 30. Bb2
(30. Rhe1)

30… Bxb2 31. Kxb2 Rf8 32. Rd2 f3 33. Rf2 Rf4 34. h6
(34. Rh3 Rxg4 35. Rfxf3 Rh7 36. Rf5 Kd6 37. Rf6+ Ke5 38. Rg6)

34… Rh7 35. Rh5 Kd6 36. Rxg5?
(36. Kc3) I had perhaps played a couple of inaccuracies up to this point, but the game was still even. My next couple of moves, however, were enough to give black a winning advantage.

36… Rxh6 37. Rf5?
(37. Kc3)

37… Rf6?!
37… Rxf5! 38. gxf5 Rh3 39. f6 Ke6 40. Kc3 Kxf6 41. Kd4 Kf5 is an easier win.

38. Rxf6+ Rxf6 39. g5 Rf4 40. Kc3 Ke5 41. Kd2 Kf5 42. g6 Ke4 43. Ke1
(43. Rf1 Rg4 44. Re1+ Kf4 (44…Kf5 45. Ke3) 45. Re6)

43… Rg4 44. Rh2 Rxg6 45. Kf2
(45. Rh7)

45… Rg2+ 46. Rxg2 fxg2 47. Kxg2 Kd4 48. Kf2 Kc3 49. Ke3 b5 50. b4 a6 51. Kf4 Kxb4 52. Ke5 Kc4 53. a3 a5 54. Kd6 b4 55. a4 d4 56. Kc6 b3 57. cxb3+ Kxb3 58. Kb5 d3 59. Kxa5 d2 60. Kb5 d1=Q 61. a5 Qd8 62. a6 Qb8+ 0-1

A very average showing

February 23, 2011

After the near-debacle of my previous game, I played a lot better this time around, though I still ended up losing! If not for one tactical oversight, I should have been able to hold on for a draw. Replay here (I was black).

1. e4 c5 2. d4 cxd4 3. c3 d5 4. Qxd4 e6
(4… dxe4 5. Qxe4 (5. Qxd8+ Kxd8))

5. exd5 Qxd5 6. Qxd5 exd5 7. Be3 Nf6 8. Nd2 Nc6 9. h3 Be7 10. Ngf3 O-O 11. Be2 Bf5 12. O-O Rfe8 13. Nb3 Bf8 14. Rad1 a5 15. Bb5 Rec8
(15… a4 16. Nbd4 Bd7)

16. Nbd4 Bg6 17. Nh4 Ne5 18. Nxg6 hxg6 19. Nf3 Nc4
(19… Nxf3+ 20. gxf3 a4 21. Bd4 Ra5 22. Bd3)

20. Bxc4 dxc4 21. a4 Bc5 22. Bxc5 Rxc5 23. Rd4 b5 24. Nd2
(24. axb5 Rxb5 25. Rxc4 (25. Rb1 Rc8 26. Nd2 Nd5) 25… Rxb2)

24… bxa4??
24… b4 25. Nxc4 bxc3 26. bxc3 Nd5 27. Rc1 Rb8 28. Kf1 Rb3 should have been good enough for a draw.

25. Nxc4 Rac8 26. Nb6 Rb8 27. Nxa4 Re5 28. Rfd1 Re2 29. Kf1 Rbe8 30. Rd8 R2e4 31. Rxe8+ Rxe8 32. f3 Re5 33. Rd8+ Kh7
(33… Ne8 34. Ra8 Kf8 35. Nb6 Ke7 36. Nc4) (33… Re8 34. Rxe8+ Nxe8 35. Nb6 Nd6)

34. c4 g5 35. c5 Nd5
(35… Rd5 36. Ra8)

36. c6 Ne7 37. c7 Rb5 38. Re8 1-0

My first game of the year

January 30, 2011

A near-disaster! Playing over the board for the first time in six weeks, I played in utterly despicable fashion, and it was a wonder that I didn’t lose horribly! Here’s the game (I was black).

1. e4 c5 2. Nf3 d6 3. d4 cxd4 4. c3?! Nf6?
(4… dxc3 5. Nxc3 Nc6)

5. Qc2 Qc7 6. Nxd4 a6 7. Bd3 g6 8. Bg5 Bg7 9. Nd2 O-O 10. O-O Nbd7 11. f4 b5
(11… e5 12. fxe5 dxe5)

12. Rae1 Bb7
(12… e5 13. fxe5 dxe5)

13. e5 dxe5 14. fxe5 Nxe5 15. Bf4 Nfd7 16. Ne4 Rac8 17. Re3 Qb6 18. Kh1 Nc6?
I could and should have won a piece with a simple fork (18… Nxd3 19. Qxd3 e5!). Instead, I never even spotted the opportunity!

19. Nxc6 Bxc6 20. Rh3 Nf6 21. Be3 Qb7 22. Ng5 h5 23. Bd4 Rcd8 24. Bxg6 fxg6 25. Ne6 Qd7
(25… Be4)

26. Re3 Bd5
(26… Ng4 27. Rxf8+ Rxf8 28. Nxf8 Kxf8)

27. Nxd8 Qxd8 28. Qxg6 Bf7 29. Qg5 Qd5
(29… Bxa2)

30. Rf5 Qxa2 31. h3 Kh8??
I had gradually let my advantage slip, and with this move I go from slightly better to dead lost! In the next couple of moves, I have to give up my queen to avoid being mated, and it was only my opponent’s slight time pressure that prevented me from resigning here with a clear conscience. 31… Be6 32. Rf4 Rf7 and the game goes on.

32. Rxe7 Qb1+ 33. Kh2 Qxf5 34. Qxf5 Bg8 35. Bxf6 Rxf6 36. Qg5?
Now it is my opponent’s turn to start playing sub-par and let me still hope for a miracle. (36. Qxh5+)

36… Bf7 37. Qc5 Kh7 38. Re2 Bg6 39. Rf2 Re6 40. Qd5 Re1 41. Qb7 Kh6 42. Qxa6 Be5+ 43. g3 h4! 44. Rf6?
(44. Kg2 hxg3 45. Rf5) (44. Rf5 Bxg3+ 45. Kg2)

44… hxg3+ 45. Kg2 Re2+! 46. Kf3??
46. Kf1 g2+ 47. Kg1 Bxf6 48. Qxf6 Rxb2 49. h4 (49. c4 Rb1+ 50. Kxg2 bxc4) 49… Kh7 50. Qe7+ Kh6 51. Qg5+ Kh7 52. h5 Rb1+ 53. Kxg2 Bc2 54. Qd2 Be4+ 55. Kf2 and white is still pushing for a win, though I have some hopes of maintaining a fortress. Instead, the text move is an outright blunder, and white resigned after my next move.

46… Rf2+ 0-1