ChatGPT's new ‘Browse with Bing’: Watching AIs stalk you is odd
This week, I logged into ChatGPT to continue testing its plugins feature, only to notice a new addition to the beta features section: Browse with Bing. Needless to say, I jumped on it (who doesn't love a new toy?) and put it through its paces. I came away amazed by how powerful this new feature is, but also a little unsettled by how it could be used.
Browse with Bing is ChatGPT’s answer to Google Bard
If you compare ChatGPT and Google Bard, one of the advantages Bard has had until now is that it answers you using current search data, while ChatGPT has been stuck working off training data that cuts off in 2021. This meant the chatbot didn't really know much about the world and events after that point.
Honestly, that's terrible; it meant ChatGPT was stuck in time at the height of the Covid pandemic. Spare a thought for the poor chatbot, stuck e-shopping and thinking Tiger King is the height of TV culture. Jokes aside, this limitation meant that while Bard could confidently go into detail with its answers, ChatGPT had to play it a bit more coy. After all, how embarrassing would it be to get the current owner of Twitter wrong?
But now, this key differentiator is gone. OpenAI has said it will start using Bing as the default search experience for ChatGPT. While ChatGPT Plus users get first access, it will soon roll out to free users as well, erasing the feature lead Bard had. Worse for Google, Bard doesn't have an equivalent to the Plugins feature yet, so this puts it solidly behind.
Bing-powered ChatGPT does multiple searches per query
When you turn on this new feature, one of the immediately obvious things is that ChatGPT doesn't just run one search: it keeps looking until it finds the answer it's after. You can inspect its thought process by clicking the little button above the answer you get, and it's honestly fascinating to see the hoops it jumps through to reach the final result.
In the above example, it was also interesting that it defaulted to two US-based news outlets and US-centric headlines, though it did include the BBC and Al Jazeera. It didn't include something like the Times of India or Deutsche Welle, for example. As an Australian, news outlets like the ABC, the Sydney Morning Herald, or the Herald Sun would have been more applicable (and would have come up with an actual Bing or Google search). The NYT is paywalled, which raises some interesting questions, too.
In any case, ChatGPT doesn't always exercise what I'd call "expert-level Google-fu" (or in this case, Bing-fu) when looking for results. For instance, it might look around Twitter for an answer but utterly ignore other avenues of research, resulting in incorrect answers that a bit more diligence would have caught.
This sort of problem could be solved with more targeted prompt engineering or sequential queries. I suspect this behavior exists to limit the tokens expended per query: if ChatGPT runs too thorough a search, there are likely fewer tokens left to respond to you.
However, if you're working with a fairly simple query, it doesn't need multiple searches. For instance, here's what happened when I asked it to check the latest blog post on Pluralsight.com.
At the time of writing, this was correct! Certainly, I could have just searched that myself, but I didn’t need to leave the chat window, and this could be done for more niche terms.
ChatGPT now cares about giving credible references
Previously, ChatGPT would give you an answer without telling you its sources at all. In fact, it famously hallucinates some of these, making up books and papers that don't even exist. With Browse with Bing enabled, however, the chatbot was suddenly all about cross-checking. Little hyperlinked numbers above each statement told me exactly which domain the information had come from!
What was more impressive was that when it couldn't cross-reference something, it apologized, shared the source it had used, and assured me it was credible.
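Those numbered hyperlinks are essentially inline citations mapping each statement to a source domain. Here's a toy sketch of that idea (the claims and URLs below are made up for illustration; this is not how ChatGPT's UI is actually built):

```python
from urllib.parse import urlparse

def cite(claims: list[tuple[str, str]]) -> str:
    """Render (sentence, source_url) pairs as text with [n] markers,
    followed by a numbered list of the bare source domains."""
    body, sources = [], []
    for n, (sentence, url) in enumerate(claims, start=1):
        body.append(f"{sentence} [{n}]")
        sources.append(f"[{n}] {urlparse(url).netloc}")
    return " ".join(body) + "\n" + "\n".join(sources)

print(cite([
    ("Browse with Bing is in beta.", "https://example.com/blog/post"),
    ("Plus users get first access.", "https://example.org/news"),
]))
```

Even in this toy form, the value is obvious: the reader can trace each claim back to a domain and judge its credibility, which is exactly what the old, citation-free ChatGPT never allowed.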
This is important because we're no longer doing our own critical thinking; we're outsourcing it to ChatGPT. In a normal Google search, we'd get a list of sites to run our own sense check against, and a somewhat holistic view of what's out there as a result. Here, we're entirely in ChatGPT's hands, including which sites it shows us and which it doesn't.
When it fails, it takes time. When it succeeds, it takes time
In the above example, the search actually failed, as did a third of the tests I ran. This wouldn't be so bad, but the failures weren't fast: I sat waiting for a solid minute each time, meaning I had to navigate away and remember to come back to the window. Here's an example below:
And here is a case where it failed to find a source entirely, and just decided to invent some random best practice without one.
This is actually a bit lazy, since it only tried one search term (“business cases for multicloud 2023”), and there are a lot of resources out there on this topic. In fact, we’ve written a whole bunch about it, and we’re not the only ones!
But even when it didn't fail, it still took twenty seconds per query. This might not seem long, but a two-second delay in load time can result in abandonment rates of 87%, which is why Google aims for under half a second. In fact, I spent a lot of time sitting there thinking "Gee, it wouldn't take me this long to Google it", and that's not where you want a "Google killer" to be.
ChatGPT is also now a little bit creepy
Sad admission: occasionally I like to egosurf (a.k.a. Googling yourself; it's what it says on the tin). As a writer and author, I love to see what sort of footprint I'm leaving behind, and what others might see. And so, empowered with this new version of ChatGPT, this was naturally the first thing I did.
Well, first of all, this is way off. I'm not from Denmark (though the family name 'Ipsen' is), and I'm certainly not a lawyer. But that's understandable: the prompt wasn't strong, and even though there aren't many Adam Ipsens in the world, I don't have the strongest social media footprint. So I tried one of our resident Pluralsight Dev Rels and Azure authors, Lars Klint. After all, if anyone wouldn't mind Microsoft looking them up, it's an Azure author.
Here are the steps ChatGPT went through:
And here’s the output:
So this was both interesting and unnerving. All of it is correct, but ChatGPT completely missed that Lars works for Pluralsight, as well as his thriving YouTube channel; both would have been evident from a quick Google search. On the other hand, watching what ChatGPT did to get there is highly reminiscent of, well, someone preparing to corner a celebrity in an alleyway after a gig. And this is a tool that's going to be available to everyone: automated, sequential searching of people, where the AI crawls around every nook and cranny of the internet for you.
That said, people can already do this manually, which makes it what might be called an "upcode" problem (an issue with human cognition and behavior) rather than a "downcode" problem (an issue with the programs). It's the same argument some make about cybersecurity being an unsolvable problem: the origin of the issue is arguably not the technology, but the people using it.