    ‘60 Minutes’ Made a Shockingly Wrong Claim About a Google AI Chatbot

    Since OpenAI unleashed ChatGPT on the world, we’ve seen takes you people wouldn’t believe. Some folks have claimed that chatbots have a woke agenda. U.S. Senator Chris Murphy tweeted that ChatGPT “taught” itself advanced chemistry. Even seasoned tech journalists have written stories about how the chatbot fell in love with them. It seems as though the world is reacting to AI the same way cavemen probably reacted when they saw fire for the first time: with utter confusion and incoherent babbling.

    One of the latest examples comes from 60 Minutes, which threw its hat into the ring with a new episode focused on innovations in AI that aired on CBS Sunday. The episode featured interviews with the likes of Google CEO Sundar Pichai, and it included questionable claims about one of the company's large language models (LLMs).

    The clip in question is about emergent behavior, the term for an unexpected capability or side effect of an AI system that its developers didn't necessarily intend. We've already seen emergent behavior spring up in other recent AI projects. For example, in a study posted online last week, researchers used ChatGPT to create generative digital characters with goals and backgrounds. They observed the system performing multiple emergent behaviors, such as characters sharing new information with one another and even forming relationships, something the authors hadn't initially planned for.

    Emergent behavior is definitely a worthwhile topic for a news show to discuss. Where the 60 Minutes clip takes a turn, though, is when we’re introduced to claims that Google’s chatbot was actually able to teach itself a language it previously didn’t know after it was prompted in that language. “For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know,” CBS News correspondent Scott Pelley said in the clip.

    Turns out it was complete BS. Not only could the bot not learn a foreign language “it was never trained to know,” but it didn’t teach itself a new skill. The entire clip spurred AI researchers and experts to excoriate the news program’s misleading framing on Twitter.

    “I sure hope some journalist does a review of the whole @60Minutes segment on Google Bard as a case study in how *not* to cover AI,” Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, wrote in a tweet.

    “Stop Magical Thinking in Tech! It is not possible for an #AI to respond in Bengali, unless the training data was contaminated with Bengali or is trained on a language that overlaps with Bengali, such as Assamese, Oriya, or Hindi,” M. Alex O. Vasilescu, a researcher at MIT, added in another post.

    It’s worth mentioning that the 60 Minutes segment didn’t say exactly which AI it was referring to. However, a spokesperson from CBS told The Daily Beast that the clip was a discussion not of Bard but of a separate AI program called PaLM, whose underlying technology was later incorporated into Bard.

    The reason the segment was so frustrating to these experts is that it ignores and distorts the reality of what a generative AI can actually do. A model can’t “teach” itself a language it never had access to in the first place. That would be like trying to teach yourself Mandarin when you’ve only ever heard someone speak it once.

    After all, language is incredibly complex, with subtle nuances and rules that require an enormous amount of context to understand and use. There’s no way for even the most advanced LLM to grapple with and learn all of that through a few prompts.

    PaLM was already trained on Bengali, the predominant language of Bangladesh. Margaret Mitchell (no relation), a researcher at the AI startup Hugging Face and formerly of Google, explained this in a tweet thread laying out why 60 Minutes was wrong.

    Mitchell pointed out that, in a 2022 demo, Google showed that PaLM could communicate and respond to prompts in Bengali. The paper behind PaLM revealed in a datasheet that the model was indeed trained on the language, with roughly 194 million tokens in the Bengali alphabet.

    So it didn’t magically learn anything via a single prompt. It already knew the language.

    It’s unclear why Pichai, the CEO of Google, sat down for the interview and let these claims go unchallenged. (Google did not respond to requests for comment.) Since the episode aired, he’s stayed silent despite experts pointing out the misleading and false claims made in the segment. On Twitter, Margaret Mitchell suggested the reason could be a combination of Google leadership not understanding how their own products work and a willingness to let shoddy messaging spread in order to capitalize on the current hype around generative AI.

    “I suspect [Google executives] literally don’t understand how it works,” Mitchell tweeted. “What I wrote above is likely news to them. And they’re incentivised not to understand (close your eyes to that Datasheet!!).”

    The second half of the video is also problematic, as Pichai and Pelley discuss a short story Bard wrote that “seemed so disarmingly human” it left both men looking somewhat shaken.

    The fact is, these products aren’t magic. They’re not capable of being “human” because they’re not humans. They’re text predictors, like the ones on your phone, trained to come up with the likeliest words and phrases to follow a given string of text. Suggesting otherwise could give them a level of authority that could be incredibly dangerous.
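    To make the “text predictor” point concrete, here is a minimal sketch, assuming the small open-source GPT-2 model and the Hugging Face transformers library purely for illustration (it is not Bard or PaLM, but it works on the same next-token-prediction principle). All the model does is assign probabilities to possible next tokens given the text that came before:

# Minimal sketch of next-token prediction, using the open-source GPT-2 model.
# (Illustrative assumption: GPT-2 via Hugging Face transformers, not Bard/PaLM.)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Bangladesh is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's entire job: a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    # Print the five most likely continuations and their probabilities.
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

    Every response a chatbot produces, however fluent it sounds, is built by repeatedly sampling from a distribution like this one, token by token.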

    After all, people can use these generative AIs to do things like spread misinformation. We’ve already seen this play out with deepfakes of people’s likenesses and even their voices.

    Even a chatbot on its own can cause harm if it winds up producing biased results, something we’ve already seen with the likes of ChatGPT and Bard. Given these chatbots’ propensity to hallucinate and make up results, they could even end up spreading misinformation to unsuspecting users.

    Research bears this out, too. A recent study published in Scientific Reports found that human responses to moral questions can be easily swayed by arguments made by ChatGPT, and that users grossly underestimated how much they were being influenced by the bot.

    The misleading claims on 60 Minutes are really just a symptom of a larger need for digital literacy at a time when we need it most. Many AI experts say that now, more than ever, people need to understand exactly what AI can and cannot do. These basic facts about bots also need to be effectively communicated to the broader public.

    This means that the people with the biggest platforms and the loudest voices (i.e., the media, politicians, and Big Tech executives) bear the most responsibility for ensuring a safer, more educated future with regard to AI. If we don’t, we might just wind up like those aforementioned cavemen, playing with the magic of fire and getting burned in the process.
