A year after AI 'code red,' Google is red-faced amid Gemini backlash. Was it inevitable? | The AI Beat


All weekend, it seemed like my social media feed was little more than screenshots and memes and links to headlines that either poked fun or took painful stabs at Google’s so-called ‘woke’ Gemini AI model.

Days after Google said it had “missed the mark” by outputting ahistorical and inaccurate Gemini images, X (formerly Twitter) had a field day with screenshots of Gemini output claiming “it is not possible to definitively say who negatively impacted society more, Elon tweeting memes or Hitler.” In particular, VC Marc Andreessen spent the weekend gleefully re-posting inaccurate and offensive outputs that he claimed were “deliberately programmed with the list of people and ideas its creators hate.”

This whiplash-inducing shift from the positive response Google received after Gemini’s release in December — with its “Google-will-finally-take-on-GPT-4” vibes — is especially notable because just a little over a year ago, the New York Times reported that Google had declared a “code red” as ChatGPT’s release in November 2022 set off a generative AI boom, potentially leaving the search engine giant in the dust.

Even though its researchers had helped build the technology underpinning ChatGPT, Google had long been wary of damaging its brand, the New York Times article said — while new companies like OpenAI “may be more willing to take their chances with complaints in exchange for growth.” But with ChatGPT booming, according to a memo and audio recording, Google CEO Sundar Pichai had “been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses.”


Google’s need for speed pushed against its need to please

Perhaps those opposing forces — the need for speed pushing against the need to please users — made the Gemini backlash inevitable. After all, Google had hesitated from the start to release its most sophisticated LLMs precisely because of the potential for what is happening right now: massive backlash against the tech giant for inappropriate LLM output.

Of course, this isn’t Google’s first rodeo when it comes to getting LLM-slammed — remember LaMDA? Back in June 2022, Google engineer Blake Lemoine told the Washington Post that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient.

Lemoine, who worked for Google’s Responsible AI organization until he was placed on paid leave, and who “became ordained as a mystic Christian priest, and served in the Army before studying the occult,” had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began “teaching” LaMDA transcendental meditation, asked LaMDA its preferred pronouns, and leaked LaMDA transcripts.

At the time, Google and its research lab DeepMind were treading carefully in the LLM space. DeepMind had planned to release its Sparrow chatbot in private beta, and CEO Demis Hassabis warned that Sparrow wasn’t “immune to making mistakes, like hallucinating facts and giving answers that are off-topic sometimes.”

Smaller companies like OpenAI don’t have the same baggage

There’s no doubt that OpenAI and other startups like Anthropic simply don’t have the same baggage in the AI space that Google does. In last year’s New York Times piece, a memo said that “Google sees this as a struggle to deploy its advanced AI without harming users or society,” and in one meeting’s audio recording, a manager acknowledged that “smaller companies had fewer concerns about releasing these tools, but said Google must wade into the fray or the industry could move on without it.”

Now, of course, Google is all in when it comes to generative AI. But that doesn’t make its challenges any easier — especially since, unlike Google, OpenAI has no stockholders to please and no billions of global users of legacy products to satisfy.

All LLM companies have to deal with hallucinations — after all, ChatGPT went completely off the rails just last week with gibberish answers, and OpenAI had to respond with a note that “the issue has been identified and is being remediated now.”

Maybe those nonsensical outputs were not as sensitive and politically questionable as Gemini’s. But it does seem like people will always have higher expectations of Google, as the love-to-hate incumbent, to make its LLM outputs please everyone. Which, of course, is impossible — not only are hallucinations inevitable (at least for now), but no LLM-powered chatbot could ever output the perfect balance of social, cultural and political values that all humans agree with, because there is no such thing. Which is why Google may be red-faced, but it remains stuck between a massive rock and a colossal hard place.
