
We recently asked Lorand Dali, our software engineer, some questions about GIBBO. He walked us through what’s changed, how it works, and what players should expect at the table.
GIBBO is one of the biggest upgrades GIB has ever had. Many players aren’t fully aware of what changed and what stayed the same. How would you explain it in a simple way? Where should players expect the biggest differences in bidding, card play, and defense?
The biggest improvement is in the basic (free) robot. The main reason is that we enabled Monte Carlo simulation during bidding for the basic robot.
This means that when it’s the robot’s turn to bid, it may consider several possible bids. For each candidate, it makes the bid, fast-forwards to the end of the auction, and evaluates which bid works best across a sample of possible layouts.
Previously, this was too computationally expensive, so we could only offer it in the advanced GIB. Now the basic GIBBO can do this too.
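The idea above can be sketched in a few lines. This is a hypothetical illustration, not GIBBO's actual code: the function names (`sample_layouts`, `complete_auction`, `score`) are stand-ins for whatever machinery generates deals consistent with the auction, finishes the bidding, and scores the resulting contract.

```python
def choose_bid(candidate_bids, sample_layouts, complete_auction, score):
    """Pick the candidate bid with the best average score over sampled deals.

    candidate_bids   -- bids considered plausible in this position
    sample_layouts   -- fn(n) -> n deals consistent with the auction so far
    complete_auction -- fn(bid, deal) -> final contract after making this bid
    score            -- fn(contract, deal) -> score for our side (MP or IMP)
    """
    deals = sample_layouts(20)  # a small sample keeps the basic robot fast
    best_bid, best_avg = None, float("-inf")
    for bid in candidate_bids:
        avg = sum(score(complete_auction(bid, d), d) for d in deals) / len(deals)
        if avg > best_avg:
            best_bid, best_avg = bid, avg
    return best_bid
```

The expensive part is that each candidate bid multiplies the work: every sampled deal needs the rest of the auction played out and evaluated, which is why this was previously reserved for the advanced robot.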
Another big change is the introduction of “rollouts.” This is a new way to run simulations using neural networks.
We can also now ask questions like: “Given this position, how likely is a player to choose a certain card?” If we observe a specific play, we can estimate the probability of different possible hands.
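Turning an observed play into hand probabilities is a Bayes-rule update: the network supplies P(card | hand), and multiplying that into a prior over candidate hands gives P(hand | card). A minimal sketch, with illustrative names only:

```python
def update_hand_beliefs(prior, play_prob, observed_card):
    """Bayesian update of hand probabilities after seeing one card.

    prior         -- {hand: P(hand)} over candidate hidden hands
    play_prob     -- fn(card, hand) -> P(card | hand), e.g. from the network
    observed_card -- the card the player actually chose
    """
    posterior = {h: p * play_prob(observed_card, h) for h, p in prior.items()}
    total = sum(posterior.values())
    if total == 0:
        return prior  # observation impossible under the model; keep the prior
    return {h: p / total for h, p in posterior.items()}
```

The key difference from the old approach is that the likelihood comes from a model of how players actually choose cards, rather than from assuming every play is double-dummy optimal.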
GIBBO uses a neural network to evaluate bidding outcomes instead of traditional double dummy analysis. For a player who has never touched AI, what does that actually mean at the table?
A simple way to think about a neural network is that it’s a piece of math that can decide what card to play next. In a sense, it acts like a small robot that plays bridge. The interesting part is that the main robot (GIBBO) can use this “small robot” to plan ahead by rolling out the rest of the hand.
A rollout means you simulate playing a card, then let the neural network play the rest of the hand to see what happens. This is an alternative to using a double dummy solver.
There are advantages: the neural network models realistic, imperfect-information play, whereas a double dummy solver assumes every player sees all four hands and plays perfectly.
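A rollout as described above can be sketched like this. The names (`position.play`, `policy_play`, `result`) are assumptions for illustration, not GIBBO's real interfaces:

```python
def rollout_value(position, card, policy_play, result, n=50):
    """Average outcome of playing `card` and letting a policy finish the hand.

    position    -- current state of the deal (assumed to expose .play(card))
    policy_play -- fn(position) -> final position, playing out the remainder
    result      -- fn(final_position) -> tricks or score for our side
    """
    total = 0.0
    for _ in range(n):
        after = position.play(card)          # try the candidate card
        total += result(policy_play(after))  # let the "small robot" finish
    return total / n

def best_card(position, legal_cards, policy_play, result):
    """Choose the legal card with the highest average rollout value."""
    return max(legal_cards,
               key=lambda c: rollout_value(position, c, policy_play, result))
```

With a stochastic policy, repeating the playout `n` times averages over both the policy's choices and the uncertainty about the hidden hands, which is what replaces the single deterministic answer a double dummy solver would give.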
If GIBBO were a human bridge player at your local club, how would you describe their personality at the table?
If GIBBO were human, I think he would be a strong but very single-minded and stubborn player. Occasionally, he would do something out of the blue, and give an explanation that nobody else can relate to.
Can you give a few examples of situations where GIBBO performs noticeably better than the old GIB?
The old basic GIB would always pass when a situation wasn’t covered by an explicit rule. For example, after a 2NT opening, it might pass with 11 points and a long minor. GIBBO will try to find a reasonable bid even when the situation isn’t explicitly defined.
Old GIB also tended to lead passively, especially against notrump. GIBBO is much more aggressive, sometimes even too aggressive (especially against 6NT, but we’ll improve that). GIB also assumed perfect play from opponents. That often led to situations where it didn’t matter what card it played, so it would choose something that looked strange.
No robot is perfect. Where is GIBBO still catching up?
Bidding is still the weakest area, especially in long or complex auctions. If you stretch in competitive auctions, GIBBO will take your bids literally and assume you have the full strength you promised. Because it treats every action seriously, it’s easier to fool than a human. Humans have intuition that something might be wrong and adjust. GIBBO doesn’t.
Defense also still has room for improvement, especially signaling and cooperation.
Declarer play is the strongest area, but in a different way than humans. For example, GIBBO can outperform strong players in some declarer challenges, but still struggles with certain structured problems.
Players sometimes post hands where GIBBO’s decisions look surprising. How should players interpret those moments?
When the robot does something unexpected, I sometimes facepalm. Other times I just smile and move on to the next board. I save the hand and analyze it later. I’m happy when players post examples, as it helps identify patterns in what players find frustrating.
How does player feedback reach you and influence development?
I review reported hands and use them for debugging. Forum discussions are also useful to understand trends and recurring frustrations. Players can also message me directly on BBO (Lorserker).
What’s the best way for players to help improve GIBBO?
The best approach is to share clear examples, ideally using the Handviewer link and specifying whether it was basic or advanced, and whether it was MP or IMP.
If you could take one feature from another bridge AI, what would it be?
Some robots allow configurable systems and conventions, which I like. For defense, I like Bridge Baron’s signaling. From BEN, I would take the sampling, bid simulation, and additional neural network approaches.
What are you most excited to work on next?
I’ve focused mostly on card play so far, since it’s the hardest part. Now it’s time to give more attention to bidding and defensive signaling. I’m excited to work on those next.
Finish this sentence: “Players would be surprised to know that GIBBO…”
“…plays 10,000 cards in its head before choosing the one it actually plays.”
Any final words for players who had a rough session with GIBBO?
First, I apologize. I play with the robots every day, I read the forums, and I understand when you are frustrated. Rest assured we are working hard to optimise GIBBO.
And one tip: try to adapt to the robot. The robot will not adapt to you. To shoot the arrow, you must become the arrow.
That’s GIBBO, through the eyes of the person building it. Some things are already better. Some are still evolving. And behind it all, there’s a lot more going on than it might seem at the table.
If you’ve had a memorable moment with GIBBO, good or bad, we’d love to hear about it. Share it in the comments.