AI Evolution in the World of Bridge

How five different bridge robots stack up against human competition

In our ongoing effort to refine and enhance our AI bridge engines, we conduct nightly tournament simulations at Bridge Base Online (BBO). In these simulations, our bridge robots replay games that were originally played by human competitors. This process gives us valuable insight into how our AI performs across a wide range of environments, from high-stakes, star-studded tournaments to more relaxed, open-entry games.
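For the curious, the scoring behind these simulations is standard matchpoints: on each replayed board, the robot’s raw score is compared with every human score recorded on that same board. Here is a minimal sketch in Python, using hypothetical data structures rather than our production pipeline:

```python
from typing import Dict, List

def board_matchpoint_pct(robot_score: int, field_scores: List[int]) -> float:
    """Score one replayed board: the robot earns one matchpoint for each
    human score it beats and half a matchpoint for each tie, expressed
    as a percentage of the comparisons made."""
    points = sum(
        1.0 if robot_score > s else 0.5 if robot_score == s else 0.0
        for s in field_scores
    )
    return 100.0 * points / len(field_scores)

def session_pct(robot: Dict[int, int], field: Dict[int, List[int]]) -> float:
    """Session result: the average of the per-board percentages."""
    return sum(
        board_matchpoint_pct(robot[b], field[b]) for b in robot
    ) / len(robot)
```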

We are pleased to share with you some of the latest results from these simulations, offering a glimpse into the intriguing interplay between robot and human strategies in bridge.

Meet the digital bridge brains starring in these simulations:

  • GIB Advanced and Basic: These are BBO’s signature robots, familiar to most of our players. They are the mainstay of BBO, with players interacting with these two robots in almost every game across the platform.
  • Argine: A guest robot known for its ability to handle a variety of systems such as Acol, SEF, and Basic SAYC, featured in our special daylongs. Argine is also adept at the 2/1 system, allowing for a comprehensive comparison with other robots. A Simplified Argine variant also takes part in these simulations.
  • ACBL Ben: This experimental AI bridge engine represents a sample of our ongoing AI research and development. Trained on a massive dataset of 100 million hands from ACBL pair games on BBO, ACBL Ben offers a fresh perspective in the realm of digital bridge. Everyone is welcome to experience ACBL Ben during our Innovation Week (January 8–14, now extended through January 21) right here on BBO. Read more about the event here.

Each robot brings its unique strengths and strategies to the table, making these simulations a fascinating showcase of AI capabilities in the complex world of bridge.

Tournament Robot Performance

3-Day Stars & Platinum Robot Individual
48 boards total, 16 each day.
No bidding, all boards pre-bid by GIB.
For star players and those with at least 5 ACBL Platinum masterpoints.
Difficulty: ⭐⭐⭐⭐⭐

Results:
1. Argine: 55.29% (Top 15%)
2. Advanced GIB: 54.61% (Top 20%)
3. Basic GIB: 54.16% (Top 20%)
4. Simplified Argine: 53.35% (Top 25%)
5. ACBL-Ben: 48.62% (Top 50%)

Robots replayed 960 boards.
There were 618 players in the tournament.
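The “Top X%” annotations in these tables can be read as a simple percentile rank against the human field. A hypothetical sketch:

```python
def top_percent(robot_pct: float, human_pcts: list[float]) -> float:
    """Where the robot lands in the human field: the share of players
    scoring at or above the robot. Smaller is better."""
    at_or_above = sum(1 for p in human_pcts if p >= robot_pct)
    return 100.0 * at_or_above / len(human_pcts)

# e.g. Argine's 55.29% session against the 618-player field above
# comes out around the top 15%.
```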

Zenith Daylong Reward
16 boards, not-best-hand
Awards BBO Points and BB$ prizes.
Difficulty: ⭐⭐⭐⭐

Results:
1. Advanced GIB: 54.08% (Top 30%)
2. Argine: 53.46% (Top 35%)
3. Basic GIB: 50.95% (Top 50%)
4. ACBL-Ben: 50.62% (Top 50%)
5. Simplified Argine: 50.46% (Top 55%)

Robots replayed 1,920 boards.
There were 1,691 players in the tournament.

ACBL Lifemaster Daylong
12 boards, best-hand
Awards BBO Points and ACBL Masterpoints.
Only for ACBL Life Masters.
Difficulty: ⭐⭐⭐⭐

Results:
1. Advanced GIB: 63.24% (Top 10%)
2. Basic GIB: 60.46% (Top 20%)
3. ACBL-Ben: 56.50% (Top 30%)
4. Argine: 56.22% (Top 30%)
5. Simplified Argine: 54.46% (Top 40%)

Robots replayed 120 boards.
There were 146 players in the tournament.

ACBL Daylong
12 boards, best-hand
Awards BBO Points and ACBL Masterpoints.
Difficulty: ⭐⭐⭐

Results:
1. Argine: 61.17% (Top 15%)
2. Advanced GIB: 60.93% (Top 15%)
3. Simplified Argine: 59.06% (Top 20%)
4. ACBL-Ben: 58.39% (Top 20%)
5. Basic GIB: 55.52% (Top 30%)

Robots replayed 240 boards.
There were 1,229 players in the tournament.

BBO Premium Daylong Just Declare
8 boards, no bidding, all hands pre-bid by GIB.
Awards BBO Points.
Difficulty: ⭐⭐⭐

Results:
1. Argine: 60.33% (Top 20%)
2. Advanced GIB: 59.77% (Top 20%)
3. Basic GIB: 58.91% (Top 25%)
4. Simplified Argine: 57.05% (Top 25%)
5. ACBL-Ben: 54.32% (Top 40%)

Robots replayed 320 boards.
There were 781 players in the tournament.

BBO Premium Daylong (MP)
8 boards, best-hand.
Awards BBO Points.
Difficulty: ⭐⭐

Results:
1. Argine: 57.36% (Top 30%)
2. Advanced GIB: 57.00% (Top 30%)
3. Simplified Argine: 54.99% (Top 35%)
4. ACBL-Ben: 53.22% (Top 40%)
5. Basic GIB: 52.98% (Top 45%)

Robots replayed 320 boards.
There were 834 players in the tournament.

Free BBO Daylong (MP)
8 boards, best-hand.
Free, open to all.
Difficulty: ⭐

Results:
1. Advanced GIB: 61.05% (Top 20%)
2. Argine: 59.42% (Top 25%)
3. ACBL-Ben: 57.29% (Top 30%)
4. Basic GIB: 56.91% (Top 30%)
5. Simplified Argine: 54.86% (Top 35%)

Robots replayed 1,600 boards.
There were 19,135 players in the tournament.

We’ve observed notable differences in performance between our experimental AI model, ACBL-Ben, trained on 100 million human hands from ACBL games on BBO, and Advanced GIB, BBO’s signature robot. Below are some conclusions focused on three critical aspects of bridge play: declarer play, defense, and bidding.

Declarer Play (based on Stars & Platinum Tournament data):

ACBL-Ben’s declarer play, especially in high contracts, needs improvement compared to Advanced GIB. Interestingly, Ben compares better in NT contracts, not because of its own strength but because GIB is weaker in those situations. For instance, Ben averages 48.41% in suit contracts and 49.21% in NT, while GIB scores 55.93% and 50.87%, respectively. In 3NT specifically, however, the two perform comparably.
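As a rough illustration, a suit-versus-NT split like the one above can be computed from per-board records of the final contract and the declarer’s matchpoint result; the data layout here is hypothetical:

```python
from statistics import mean
from typing import List, Tuple

def suit_vs_nt(boards: List[Tuple[str, float]]) -> Tuple[float, float]:
    """Average declarer percentage in suit vs notrump contracts.
    Each record is (contract, pct), e.g. ("4H", 48.4) or ("3NT", 49.2)."""
    suit = [pct for contract, pct in boards if "NT" not in contract]
    nt = [pct for contract, pct in boards if "NT" in contract]
    return mean(suit), mean(nt)
```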

Defense (analyzed using Zenith Daylong Reward data):

When it comes to defense, ACBL-Ben shows a performance remarkably close to that of Advanced GIB. In cases where both robots defended against the same contract, Ben scored 49.3% compared to GIB’s 50.8%. Their defensive play, measured in terms of tricks taken and contracts fulfilled, is also closely matched.
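A like-for-like comparison of this kind counts only boards where both robots defended the same final contract. A sketch, again with a hypothetical data layout:

```python
from statistics import mean
from typing import Dict, Tuple

Record = Tuple[str, float]  # (final contract, matchpoint pct), e.g. ("4S", 49.3)

def paired_defense(ben: Dict[int, Record],
                   gib: Dict[int, Record]) -> Tuple[float, float]:
    """Average each robot's percentage over boards where both
    defended the same contract, for an apples-to-apples comparison."""
    common = [b for b in ben if b in gib and ben[b][0] == gib[b][0]]
    return (mean(ben[b][1] for b in common),
            mean(gib[b][1] for b in common))
```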

Bidding (analyzed using Zenith Daylong Reward data):

ACBL-Ben demonstrates a more aggressive bidding style, buying more contracts (53% versus GIB’s 50%) and opting for game-level bids more frequently. This aggression pays off: Ben generally scores higher when it buys the contract. Ben’s boldness in partscore battles also pushes opponents into overreaching and going down more often, which may be partly attributable to Ben’s propensity to double for penalties.
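The bidding statistics above boil down to simple counts over the final contracts. Here is a minimal sketch, assuming a hypothetical Auction record (none of these names come from our production code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Auction:
    robot_declared: bool  # did the robot's side buy the contract?
    level: int            # final contract level, 1-7
    strain: str           # "C", "D", "H", "S" or "NT"

def is_game(a: Auction) -> bool:
    """Game-level thresholds: 3NT, 4 of a major, 5 of a minor."""
    if a.strain == "NT":
        return a.level >= 3
    if a.strain in ("H", "S"):
        return a.level >= 4
    return a.level >= 5

def buy_rate(auctions: List[Auction]) -> float:
    """Share of boards where the robot's side bought the contract."""
    return 100.0 * sum(a.robot_declared for a in auctions) / len(auctions)
```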

Summary:

  • ACBL-Ben bids more aggressively than Advanced GIB but falls short in declarer play, especially in high-level contracts.
  • The two robots are on par in defensive play.

By continuously comparing the different versions of Ben and GIB, we gain important insights that help us improve our AI models. Our goal is to make bridge robots play more like humans, blending human intuition with machine accuracy. Keep an eye out for more updates as we explore and refine this exciting combination of artificial and human intelligence in the game of bridge.
