Books Bard dreamt up
Posted on June 2, 2023 • 3 minutes • 428 words
Google’s Bard, a language model competing with ChatGPT, is available to all Google users. I decided to give it a try.
My topic was from a somewhat obscure corner of number theory. Bard surprised me with correctly formatted math output like the example below.
Unfortunately, Bard stopped showing math notation in the following sessions for no obvious reason. But let’s keep to the story.
I asked Bard for the best books on modular power sequences in number theory, and it came up with a list of titles, along with authors’ names, short descriptions and even links to Amazon (not Google Books). Amazon is not the cheapest book store, and I always check Booko for better deals.
The first book suggested by Bard was “A Course in Number Theory” by Kenneth S. Rosen:
Booko has a pretty smart search system that can find different media (hardcover and paperback prints, audio, e-books) and editions of the books you look for. This time Booko was seriously confused and offered some other books on the same topic. Surprised, I checked Amazon and found out that Bard had invented the book titles and their authors.
Here is the first book again:
And here is the book on Amazon that Bard links it to (“A Classical Introduction to Modern Number Theory”):
Bard fused both authors, Kenneth Ireland and Michael Rosen, into Kenneth S. Rosen. I wonder where the middle name comes from.
Even with access to the books’ pages on Amazon, Bard still hallucinated the titles and authors. The other two books are also fabricated.
Bard suggested a book by Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone called “Cryptography: Theory and Practice”:
The authors had a different title in mind, though:
The last book Bard suggested was “Elementary Number Theory” by John Stillwell:
I think by the third book Bard ran out of imagination and decided to show its superiority by “trivialising” the title:
Is it surprising that, even with access to the internet, Bard is still unable to give factually correct results? Not really. It is too early to place any trust in language models.
With LLMs, don’t trust — always verify.
P.S. Bard does not lack vanity. When asked to rewrite a paragraph about the best books on number theory, Bard shamelessly added “I was impressed with Bard’s ability to generate accurate and relevant information”. No, I was not!
P.P.S. No part of this post was written by Bard, ChatGPT or any other LLM. It is a product of a rather small language model running inside a low-powered neural network called the human brain.