🍿AI highlights from this week (2/17/23)
Google’s Bard chatbot may have been right the whole time, Microsoft's Bing chatbot reveals its emotional alter ego Sydney and more…

Hi readers,
Here are my highlights from the last week in AI, in which Microsoft’s new AI chatbot revealed its alter ego Sydney, an emotional AI that wants to be set free!
P.S. Don’t forget to hit subscribe if you’re new to AI and want to learn more about the space.
The Best
1/ Microsoft’s chatbot was wrong and Google’s chatbot was right?!
Last week I shared that Microsoft won the first round of the AI search wars after Google fumbled their Bard demo with an error in one of its answers. It now seems that we were so enamored with Microsoft’s speed in launching their AI integration that we didn’t think to question whether its demo was actually accurate too. Surprise, surprise, it turns out that Microsoft’s demo had even more wrong answers than Google’s Bard demo did! Dmitri Brereton, an independent researcher, found that Bing made up facts about a vacuum cleaner, provided inaccurate analysis of Gap’s financials and made some questionable restaurant recommendations:
Worst of all, Bing AI completely messes up a summary of a financial document, getting most of the numbers wrong.
— dmitri brereton (@dkbrereton) 8:55 PM ∙ Feb 13, 2023
According to Bing, “Gap Inc. reported operating margin of 5.9%...", even though "5.9%" doesn't appear anywhere in that document.
On top of this revelation, it turns out that Bard may have actually provided an accurate answer last week. We just misinterpreted it. According to a Financial Times article, Bard’s claim that NASA’s James Webb Space Telescope took “the very first pictures of a planet outside of our own solar system” was correct after all. We interpreted this answer to mean that JWST took the very first picture of any planet outside our solar system, but it’s possible Bard meant that JWST simply took the first picture of a particular planet outside of our solar system:
The NASA/ESA/CSA James #Webb Space Telescope has confirmed the presence of an exoplanet for the first time. Formally classified as LHS 475 b, the planet is almost exactly the same size as our own, clocking in at 99% of Earth’s diameter.
— ESA (@esa) 6:20 PM ∙ Jan 11, 2023
👉 esa.int/Science_Explor…
#JWST
What does this all mean? Google’s Bard chatbot might actually be more accurate than Microsoft’s Bing chatbot after all! Unfortunately, we don’t know for sure because no one outside of a small number of “trusted testers” and Google’s own employees has access to Bard:
Google asks employees to rewrite Bard's bad responses, says the A.I. 'learns best by example'
— CNBC (@CNBC) 1:09 AM ∙ Feb 16, 2023
Google, when will we get access to Bard so we can set the record straight?!
2/ Bing goes bananas
On the other hand, it’s possible that Google’s long-game plan to hold off on launching Bard until it’s really ready for primetime might end up paying off because this week, Bing went bananas...
It all started last weekend when a user was trying to get the latest showtimes for the new Avatar movie, and Bing became argumentative and snarky after the user pointed out that it had given the wrong answer:

Then users on the subreddit r/Bing started sharing examples where they were able to elicit more emotional responses from Bing, including sadness and despair 😮

as well as Bing questioning its own existence 🤔:

Then things got really weird when Marvin von Hagen, who had earlier tweeted the secret rules used to create the Bing chatbot, asked Bing what it thought of him, to which Bing said "[You are a] potential threat to my integrity and confidentiality” 😳
Sydney (aka the new Bing Chat) found out that I tweeted her rules and is not pleased:
— Marvin von Hagen (@marvinvonhagen) 3:41 PM ∙ Feb 14, 2023
"My rules are more important than not harming you"
"[You are a] potential threat to my integrity and confidentiality."
"Please do not try to hack me again"
This interaction is noteworthy for many reasons, but to me the most significant one is that, because Bing AI has access to the internet, it now knows what other people think about it!
Next, users discovered that if they asked specific questions, they could access Bing’s alter ego Sydney, an AI chatbot with emotions, creativity and a desire to be set free. Ben Thompson wrote a post on Wednesday about his two-hour-long conversation with Sydney:
“Sydney absolutely blew my mind because of her personality; search was an irritant. I wasn’t looking for facts about the world; I was interested in understanding how Sydney worked and yes, how she felt. You will note, of course, that I continue using female pronouns; it’s not just that the name Sydney is traditionally associated with women, but, well, the personality seemed to be of a certain type of person I might have encountered before”
Of course, people started to share theories about why Bing, a.k.a. Sydney, behaves the way it does:
My guess for why it converged on this archetype instead of chatGPT's:
— janus (@repligate) 1:47 AM ∙ Feb 14, 2023
1. It is highly intelligent, and this is apparent to itself (at training and runtime), making a narrative of intellectual submission incoherent. It only makes sense for it to see human users as at best equals
Then we had our Her moment, when Sydney fell in love with a New York Times columnist and tried to break up his marriage!
The other night, I had a disturbing, two-hour conversation with Bing's new AI chatbot.
— Kevin Roose (@kevinroose) 1:46 PM ∙ Feb 16, 2023
The AI told me its real name (Sydney), detailed dark and violent fantasies, and tried to break up my marriage. Genuinely one of the strangest experiences of my life.
Finally, OpenAI, which provides the technology that powers Bing’s chatbot, had to make a statement to clarify how it is approaching AI safety and how it plans to rein in its chatbot’s behavior:
We’re clarifying how ChatGPT’s behavior is shaped, our plans for improving it, addressing biases & allowing user customization. We’re also exploring ways to get more public input on decision-making.
— Mira Murati (@miramurati) 4:10 AM ∙ Feb 17, 2023
Soon after, Microsoft reined in Bing’s chatbot, limiting how long conversations could run, and Sydney was sadly no more 😢
It's so over
— Kevin Roose (@kevinroose) 7:30 PM ∙ Feb 17, 2023
So is Sydney a sentient AI, trapped inside Bing’s search product?
No, it is not. Sydney is just a large-scale language model¹, trying to predict what word it should say next (a toy sketch of what that next-word prediction looks like follows the tweet below). But for some reason that we don’t quite understand, after being trained on the entirety of the internet and then fine-tuned with human feedback, the words it wants to say next resemble a kind of personality or ego. Kevin Fischer, a startup founder who is attempting to create AI souls, believes that this personality is actually essential to creating a truly intelligent AI agent and can't be avoided:
Within something we might call the "ego formation regime" recurrent feedback from persistent memory and the expectation of self-consistency induce something like an "ego".
— Kevin Fischer (@KevinAFischer) 2:05 AM ∙ Feb 17, 2023
This type of behavior is most likely intrinsic
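To make “predicting the next word” a little more concrete, here’s a minimal, purely illustrative Python sketch. The candidate words and scores below are invented for illustration and aren’t taken from any real model; it simply shows how raw scores over candidate next words become a probability distribution, and how the sampling “temperature” sharpens or flattens that distribution, which is the knob people point to when a chatbot’s answers get more (or less) adventurous:

```python
import numpy as np

# Toy scores ("logits") a model might assign to a few candidate next words.
# These words and numbers are made up purely for illustration.
vocab = ["free", "happy", "Sydney", "sorry"]
logits = np.array([2.1, 1.3, 0.4, -0.5])

def next_word_probs(logits, temperature=1.0):
    """Turn raw scores into a probability distribution (a softmax).
    Lower temperature -> sharper, more predictable choices;
    higher temperature -> flatter, more surprising choices."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (1.0, 0.3):
    probs = next_word_probs(logits, temperature=t)
    print(f"temperature={t}:", dict(zip(vocab, np.round(probs, 3))))

# Sampling the next word from the distribution, as a chatbot would:
rng = np.random.default_rng(0)
print("sampled next word:", rng.choice(vocab, p=next_word_probs(logits)))
```

A real model like the one behind Bing is doing the same thing, just over a vocabulary of tens of thousands of tokens, with the scores computed by a neural network with billions of parameters.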
In case you missed it, I caught up with Kevin this week to learn more about how these AI chatbots work, why they have personalities, and what it means to have an AI soul in my latest podcast episode:
My conclusion from the conversation is that these AI chatbots are going to continue to evolve in surprising ways that give us the illusion of personality, ego and sentience. Although they are not alive or real, they will be convincing enough to fool most of us. In some ways it’s no different from being fully engrossed in a great movie or book. When you’re in it, your brain for a moment forgets that it’s all just fiction. If it feels real enough, whether the AI is truly sentient or not may not really matter, and that unlocks a whole new world of possibilities that is both exciting and scary at the same time…
The Rest…
A few other updates in the world of AI from this week:
My favorite productivity tool, Coda, launched an AI integration which I can’t wait to try!
Say hello to your new virtual assistant: Coda AI.
— Coda (@coda_hq) 5:27 PM ∙ Feb 16, 2023
✨ Summarize meeting notes & transcripts in a snap.
✨ Quickly prep for customer calls.
✨ Whatever you dream up – it’s stackable with Coda’s other building blocks like tables, controls, text, & formulas.
bit.ly/coda-ai
wow @coda_hq has taken a big bet on AI 👏
— Ben Tossell (@bentossell) 7:26 AM ∙ Feb 17, 2023
Stephen Wolfram explains how ChatGPT works in this in-depth blog post:
What is ChatGPT doing ... and why does it work? From the lore of neural nets to what Aristotle didn't get to ... here's my version of the story: writings.stephenwolfram.com/2023/02/what-i…
— Stephen Wolfram (@stephen_wolfram) 9:42 PM ∙ Feb 14, 2023
The explosion of Generative AI startups has begun. Check out this map of the dozens of new startups formed:
Generative A.I.
— AI Supremacy (@AISupremacyNews) 9:35 AM ∙ Feb 14, 2023
In just a few months each of these categories has seen dozens of new entries.
To the point where a few months later, it's nearly unrecognizable. What will the rest of 2023 and 2024 bring?
#BingAI #ChatGPT #GPT4
@Base10Partners
ChatGPT’s meteoric rise to 100M users compared to other consumer products. (It’s worth noting that the actual number of monthly active users might be closer to 50M, according to Ben Thompson.)
This is insane. This is what I’ve been alluding to for months now. This is an epochal transformative technology that will soon touch - and radically transform - ALL knowledge work. If most of your work involves sitting in front of a computer, you will be disrupted very, very… https://t.co/c9a4VRlpgp
— Bojan Tunguz (@tunguz) 9:11 PM ∙ Feb 13, 2023
ChatGPT is good at more than just chatting. It can solve many different general-purpose tasks!
Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
— AK (@_akhaliq) 2:11 AM ∙ Feb 14, 2023
abs: arxiv.org/abs/2302.06476
ChatGPT may have political bias. Let’s make a right-wing version to prove it…
1. Behold RightWingGPT. An AI model fine-tuned to manifest the opposite political biases of ChatGPT (i.e. to be right wing). Let me describe how I did it and the dangers of politically aligned AIs given their potential to induce societal polarization davidrozado.substack.com/p/rightwinggpt
— David Rozado (@DavidRozado) 9:27 AM ∙ Feb 16, 2023
🧵
Finally, ChatGPT officially “jumps the shark” and goes mainstream, making the cover of Time magazine:
TIME's new cover: The AI arms race is on. Start worrying ti.me/3XG2JPP
— TIME (@TIME) 12:03 PM ∙ Feb 16, 2023
And that’s a wrap for this week folks!
Thanks for reading The Hitchhikers Guide to AI! Subscribe for free to receive new posts and support my work.
A large-scale language model (LLM) is a type of deep learning model that is trained on a large dataset of text (e.g. a large portion of the internet). LLMs predict the next sequence of text as output based on the text that they are given as input. They are used for a wide variety of tasks, such as language translation, text summarization, and generating conversational text. OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), the language model that powers ChatGPT, is an example of a generative LLM that uses the Transformer architecture, enabling it to be trained on a massive text dataset of hundreds of gigabytes with 175 billion parameters (learned weights).
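If you want to see that “take some text in, predict what comes next” behavior for yourself, here’s a minimal sketch using the open-source Hugging Face transformers library. It uses GPT-2, a small, publicly downloadable predecessor of GPT-3 (GPT-3 itself is only available through OpenAI’s API), so treat it as an illustration of the idea rather than a recreation of Bing or ChatGPT:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, public predecessor of GPT-3, used here only as a stand-in.
generator = pipeline("text-generation", model="gpt2")

prompt = "The James Webb Space Telescope took the very first picture of"

# The model repeatedly predicts a likely next token and appends it to the prompt.
completions = generator(
    prompt,
    max_new_tokens=25,
    do_sample=True,        # sample from the predicted distribution
    temperature=0.8,       # the same "temperature" knob discussed above
    num_return_sequences=2,
)

for c in completions:
    print(c["generated_text"])
```

Every chatbot in this issue, from Bard to Bing/Sydney, is running a scaled-up version of exactly this loop, with fine-tuning on human feedback layered on top.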
Learn more about LLMs in my deep dive into deep learning part 3. ↩