
Google’s AI Overview Goes Rogue

PLUS: OpenAI partners with News Corp., Meta’s mixed-modal model

Today’s top AI Highlights:

  1. OpenAI partners with News Corp. to bring reliable information to people

  2. Google is in controversy (again) as its AI Overview gives misleading responses

  3. Meta’s mixed-modal AI model that blends text and images seamlessly

  4. Nvidia CEO says they are planning to launch new AI chips every year

  5. Hide Google’s AI Overview and ads with this Chrome extension

& so much more!

Read time: 3 mins

Latest Developments 🌍

OpenAI is partnering with News Corp., the company behind big names like the Wall Street Journal, New York Post, Barron’s, and MarketWatch, to surface News Corp.’s content in answers to OpenAI users’ questions. The aim is to give users more reliable information and news from trusted sources while using OpenAI’s products.

Plus, News Corp. will “share journalistic expertise to help ensure the highest journalism standards are present across OpenAI’s offering.”

The internet is stuffed with garbage content, and it’s easy to come across misleading and false information, so getting reliable, verified responses has become extremely important. Take Google’s AI Overview, for example, which summarizes the top search results and presents them in response to user queries. Since there is no system to check the reliability of those answers, Google has again landed in controversy as AI Overview serves up inaccurate and absurd responses.

[Screenshot of AI Overview responses, shared by Dare Obasanjo on Twitter, May 23, 2024]

Simply saying that search results are filled with misinformation and Google’s LLM is just summarizing them, not hallucinating, doesn’t absolve Google of its responsibilities. AI should be leveraged to cut the noise and give accurate information.

Current multimodal AI models struggle to truly blend information from different sources like text and images. They often treat them separately, which limits their ability to understand and create complex content that seamlessly combines both. Meta’s research team has developed Chameleon, a new AI model designed to overcome these limitations. Chameleon can understand and generate both text and images together regardless of how they are arranged.

Key Highlights:

  1. Unified Understanding: Chameleon can analyze and generate content that freely mixes images and text. It can answer questions about images, generate captions, write stories illustrated with images, and even create entirely new visual concepts based on textual descriptions.

  2. Performance: Chameleon outperforms SOTA models like GPT-4V and Gemini in several tasks, including image captioning, visual QA, and text-based reasoning. Interestingly, it achieved these results while being a single model, as opposed to other models which were augmented with DALL-E 3-generated images.

  3. A new foundation model: Most existing models are bottlenecked by their “late fusion” approach – they process text and images separately and then combine the information later. This is like trying to understand a story by reading the text in one room and looking at the pictures in another. Chameleon’s early fusion architecture breaks down the walls between these modalities from the very beginning. It allows a single transformer model to learn representations that inherently blend information from both sources.
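To make the early-fusion idea concrete, here is a minimal, hypothetical sketch (not Chameleon’s actual code or tokenizers): image patches are quantized into discrete token ids from a separate codebook, offset past the text vocabulary, and interleaved with text tokens into one sequence that a single transformer would consume. The tokenizer, quantizer, and vocabulary sizes below are all stand-ins for illustration.

```python
# Hypothetical sketch of early fusion: images are quantized into discrete
# tokens and interleaved with text tokens in ONE sequence, so a single
# transformer sees both modalities from the start.

TEXT_VOCAB_SIZE = 32_000   # assumed text vocabulary size
IMAGE_VOCAB_SIZE = 8_192   # assumed image codebook size (VQ-style)

def encode_text(text):
    """Stand-in text tokenizer: one token id per word."""
    return [hash(w) % TEXT_VOCAB_SIZE for w in text.split()]

def encode_image(image_patches):
    """Stand-in image quantizer: map each patch to a codebook id,
    offset past the text vocabulary so ids never collide."""
    return [TEXT_VOCAB_SIZE + (hash(p) % IMAGE_VOCAB_SIZE)
            for p in image_patches]

def build_sequence(segments):
    """Flatten interleaved (kind, payload) segments into one token list."""
    tokens = []
    for kind, payload in segments:
        if kind == "text":
            tokens.extend(encode_text(payload))
        elif kind == "image":
            tokens.extend(encode_image(payload))
    return tokens

seq = build_sequence([
    ("text", "a photo of"),
    ("image", ["patch0", "patch1", "patch2"]),
    ("text", "taken at sunset"),
])
# A single transformer would be trained on `seq` directly: text and image
# tokens share one embedding table and one attention context.
```

A late-fusion model would instead run separate text and image encoders and merge their outputs afterward; here there is nothing to merge, because both modalities live in the same token stream from the first layer.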

Nvidia posted blockbuster Q1 FY2025 earnings this Wednesday, exceeding all expectations and estimates. The company reported record revenue of over $26 billion, fueled by insatiable demand for its AI-powering GPUs. The earnings call gave us a glimpse into Nvidia’s ambitious roadmap, with a clear emphasis on expanding its full-stack solutions and catering to the burgeoning demands of AI factories and a future brimming with AI applications.

Here are the key takeaways:

  1. Automotive Sector Poised for Growth: The automotive sector is on track to become Nvidia’s largest enterprise vertical within the Data Center market this year, as autonomous vehicle companies invest heavily in AI infrastructure to train and build next-gen vehicles. In another interview, Jensen Huang said, “every single car, someday we will have to have autonomous capability.”

  2. No Brakes on Demand: Even with the introduction of the Blackwell architecture, demand for Hopper GPUs continues to surge. This highlights the insatiable appetite for AI computing power, and demand is currently outstripping Nvidia’s supply.

  3. Generative AI is Fueling an Inference Explosion: The rise of generative AI, with its complex inference requirements to create text, images, and more, is driving incredible growth in demand for Nvidia’s products. Coupled with the emergence of 15,000-20,000 AI startups all needing processing power, this indicates we’re only at the beginning of a massive wave of AI adoption.

  4. AI Factories: Large-scale AI deployments, what Nvidia calls “AI factories,” are being built by major players like Meta and Tesla. These are massive clusters of GPUs dedicated to training and deploying cutting-edge AI models.

  5. Sovereign AI is on the Rise: Countries around the world are investing heavily in building their own AI infrastructure, and Nvidia sees this as a significant growth opportunity.

  6. Blackwell is Selling Soon: The Blackwell platform, boasting a massive performance boost over Hopper, is already in production. Shipments begin in Q2, with major customers like Amazon, Google, Meta, and Microsoft expected to have systems up and running by Q4.

  7. Second Quarter Outlook: Nvidia projects Q2 revenue of $28 billion, indicating continued strong demand across all market platforms.

  8. The Race for AI Supremacy is Heating Up: Cloud providers are vying for a piece of the AI pie, with companies like Google and Meta developing their own AI chips. But Nvidia remains confident in its full-stack approach.

  9. A New Chip, Every Year: Nvidia is on an aggressive release schedule, planning to launch a brand new AI chip architecture every single year. This ‘one-year rhythm’ means customers can expect a constant stream of performance improvements and new capabilities.

😍 Enjoying so far, share it with your friends!

Tools of the Trade ⚒️

  1. Unify: Dynamically routes each prompt to the best LLM based on your desired balance of quality, speed, and cost. It helps you efficiently manage different LLMs, ensuring optimal performance and cost-effectiveness for your applications.
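The routing idea can be sketched in a few lines. This is a hypothetical toy, not Unify’s actual API: each candidate model gets illustrative quality/speed/cost scores, and the router picks the model with the best weighted sum for the caller’s preferences.

```python
# Hypothetical sketch of preference-based LLM routing, in the spirit of
# a router like Unify (not its actual API or scoring).

MODELS = {
    # name: (quality, speed, cost) scores in [0, 1]; higher is better
    # on every axis. Numbers are illustrative only.
    "large-model":  (0.95, 0.40, 0.30),
    "medium-model": (0.80, 0.70, 0.60),
    "small-model":  (0.60, 0.95, 0.95),
}

def route(quality=1.0, speed=1.0, cost=1.0):
    """Pick the model with the best weighted score for these preferences."""
    def score(stats):
        q, s, c = stats
        return quality * q + speed * s + cost * c
    return max(MODELS, key=lambda name: score(MODELS[name]))

# Quality-dominant workloads route to the big model,
# cost-dominant ones to the small model.
print(route(quality=5, speed=1, cost=1))  # -> large-model
print(route(quality=1, speed=1, cost=5))  # -> small-model
```

In a real router the scores would come from live benchmarks and per-token pricing rather than a hardcoded table, but the selection logic is the same weighted trade-off.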

  2. Bye Bye, Google AI: Turn off Google AI Overviews, ads, and discussions with this Chrome extension. Since you can’t turn these off from any settings in Chrome, this extension hides them for you. It simply uses CSS to hide those areas of the page (display: none).

  3. MusicGPT: Run the latest music generation AI models locally in a performant way, on any platform and without installing heavy dependencies like Python or machine learning frameworks. Right now it only supports MusicGen by Meta.

  4. Awesome LLM Apps: Build awesome LLM apps using RAG for interacting with data sources like GitHub, Gmail, PDFs, and YouTube videos through simple text prompts. These apps let you retrieve information, engage in chat, and extract insights directly from content on these platforms.

Hot Takes 🔥

  1. I would urge parents to limit the amount of social media that children can see because they're being programmed by a dopamine-maximizing AI. ~Elon Musk

  2. You could argue that the point of programming is to produce bugs. Bugs show you where your model of a problem doesn't match the problem, and in a highly motivating form. ~Paul Graham

Meme of the Day 🤡

LLMs being released in 2024 🔥

That’s all for today! See you tomorrow with more such AI-filled content.

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!

PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
