Generative AI is a hot topic, and both Google and Microsoft are investing heavily in it. They're integrating their chatbots into productivity software like Word, Excel, Gmail, and Docs, and even pitching them as search engine companions, if not outright replacements.
Inverse recently put Bing and Bard, the chatbots developed by Microsoft and Google, respectively, through a head-to-head test to see which is better. Bing is powered by OpenAI's GPT-4, while Bard runs on Google's own LaMDA.
The first prompt asked for a comparison chart of running shoes, covering price, color, and other features. Bard generated a somewhat helpful chart of features to consider when buying running shoes but didn't cite its sources. Bing merely summarized a web search, though it did cite its sources.
After switching Bing to its "Creative" mode, the chatbot produced a more imaginative response, including a chart of shoes with prices, colors, and branded features.
The next prompt was to create an itinerary for a week-long trip to Portugal, with restaurant recommendations for each day. Bard put together a basic itinerary but didn't initially name specific restaurants. Bing's response was much more detailed, citing a source and laying out a lengthy day-by-day plan.
The last prompt asked for advice on how to start the first chapter of a historical fiction novel set in Victorian England. Bard offered some basic writing tips with a nod to the Victorian setting but didn't draft any of the chapter itself. Bing offered similarly basic tips and cited a source.
Overall, both chatbots handled a wide variety of requests and questions reasonably well, but the depth of their answers varied widely. In the end, Bing seems to have the edge in informativeness, while Bard leans more toward creativity and curiosity.