
LLMs as Operating Systems for Extended Context

PLUS: Google's Small yet Strong Vision Language Model, Customize AI Responses During Inference

Today’s top AI Highlights:

  1. Google’s PaLI-3: Smaller, Faster, and Stronger VLM

  2. NVIDIA’s SteerLM: Customize AI Responses During Inference

  3. UC Berkeley’s MemGPT: OS-inspired Memory Management

  4. YouTube's New AI Advertising Package for Event Marketing

  5. Your AI Assistant Frontend as a Service

& so much more!

Read time: 3 mins

Latest Developments 🌍

Mighty VLM in a Compact Package 🏋️‍♂️

Researchers at Google introduce PaLI-3, a smaller, faster, and stronger vision language model that performs exceptionally well compared to much larger models. The work underscores that smaller-scale models are more practical to train and deploy, more eco-friendly, and allow faster research cycles.

Key Highlights:

  • PaLI-3 integrates a contrastively pretrained ViT image encoder with a transformer-based encoder-decoder (see the sketch after this list). This combination enables exceptional performance in visually-situated text understanding and object localization tasks.

  • The training strategy involves contrastive pretraining on extensive image-text data, leading to robust multimodal training. The model's adaptability is further enhanced by training at higher resolutions.

  • Despite its 5B parameter size, PaLI-3 excels in visually-situated text understanding tasks, generalizes to video question-answering benchmarks, and sets a new standard in multilingual cross-modal retrieval across 36 languages.
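For intuition on how the pieces fit together, here is a minimal sketch of a PaLI-3-style architecture in PyTorch: a stand-in for a contrastively pretrained ViT encodes the image into visual tokens, which are concatenated with the text prompt and fed to an encoder-decoder transformer. All module names, dimensions, and the patchify layer are illustrative assumptions, not Google's implementation.

```python
# Hypothetical sketch of a PaLI-3-style architecture: a contrastively pretrained
# ViT image encoder feeding visual tokens into a text encoder-decoder.
# Module names and dimensions are illustrative, not Google's implementation.
import torch
import torch.nn as nn

class PaLI3Sketch(nn.Module):
    def __init__(self, vocab_size=32_000, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        # Stand-in for a contrastively pretrained ViT: maps image patches to a
        # sequence of visual embeddings.
        self.vit = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),  # patchify
            nn.Flatten(2),                                      # (B, d_model, n_patches)
        )
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Encoder-decoder transformer over [visual tokens; prompt tokens].
        self.backbone = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, prompt_ids, target_ids):
        visual = self.vit(image).transpose(1, 2)          # (B, n_patches, d_model)
        prompt = self.text_embed(prompt_ids)              # (B, T_in, d_model)
        encoder_in = torch.cat([visual, prompt], dim=1)   # multimodal encoder input
        decoder_in = self.text_embed(target_ids)          # autoregressive targets
        hidden = self.backbone(encoder_in, decoder_in)
        return self.lm_head(hidden)                       # next-token logits

model = PaLI3Sketch()
logits = model(
    torch.randn(1, 3, 224, 224),          # image
    torch.randint(0, 32_000, (1, 16)),    # prompt tokens
    torch.randint(0, 32_000, (1, 8)),     # target tokens
)
print(logits.shape)  # torch.Size([1, 8, 32000])
```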

Tailoring AI Responses During Inference 🎚️

NVIDIA introduces SteerLM, a new technique that fine-tunes LLMs to condition on explicit attributes you can adjust at inference time, promising a streamlined and efficient way to align AI outputs with your specific needs. It is available as open-source software.

Key Highlights:

  • SteerLM enables you to define specific attributes for AI models and modify them in real time, tailoring responses for various applications including legal, marketing, and gaming.

  • It simplifies the customization process by utilizing a basic set of prompts and responses, eliminating the need for extensive dataset labeling and multiple model retraining. This four-step approach significantly reduces the time and resources required.

  • SteerLM achieves SOTA results on the Vicuna benchmark, outperforming existing RLHF models like LLaMA 30B RLHF. Its approach allows for accurate customization, ensuring AI outputs closely align with user-defined attributes.

Diagram of the four steps of SteerLM. Step 1: The base language model is trained to assess the quality of responses by predicting attribute values. Step 2: The attribute prediction model is used to annotate response quality across diverse datasets. Step 3: Given a prompt and desired attribute values, a new base model is fine-tuned to generate responses that align with the specified attributes. Step 4: Multiple responses are sampled from the fine-tuned model from Step 3 while specifying maximum quality; the sampled responses are evaluated by the trained attribute prediction model, leading to another round of fine-tuning.
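To make attribute conditioning concrete, here is a minimal sketch of how a SteerLM-style prompt might expose user-adjustable attributes at inference time. The prompt template, attribute names, and the generate() stub are illustrative assumptions, not NVIDIA's actual format or API.

```python
# Hypothetical sketch of SteerLM-style inference-time steering: the fine-tuned
# model has learned to condition on explicit attribute values embedded in the
# prompt, so changing those values changes the response style without retraining.
# The template, attribute names, and generate() stub are assumptions for
# illustration, not NVIDIA's actual format.

def build_steered_prompt(user_prompt: str, attributes: dict[str, int]) -> str:
    """Embed user-chosen attribute values (e.g. 0-9 scales) into the prompt."""
    attr_str = ",".join(f"{name}:{value}" for name, value in attributes.items())
    return f"<attributes>{attr_str}</attributes>\nUser: {user_prompt}\nAssistant:"

def generate(prompt: str) -> str:
    """Stand-in for a call to the attribute-conditioned, fine-tuned model."""
    return f"[model response conditioned on] {prompt.splitlines()[0]}"

# The same question, steered two different ways at inference time.
question = "Explain what a context window is."
formal = build_steered_prompt(question, {"quality": 9, "humor": 0, "verbosity": 3})
playful = build_steered_prompt(question, {"quality": 9, "humor": 8, "verbosity": 6})

print(generate(formal))
print(generate(playful))
```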

LLMs with OS-inspired Memory Management 🧠

Researchers at UC Berkeley introduce MemGPT, an OS-inspired technique that addresses the limited context window in LLMs. It gives fixed-context models adaptive memory management and an effectively extended context for long conversations and document analysis.

Key Highlights:

  • Drawing parallels with a traditional OS, MemGPT employs a virtual memory paging system that creates the illusion of limitless context for fixed-context LLMs, paging in relevant historical data that has dropped out of the current context (see the sketch after this list).

  • With a multi-level memory architecture, MemGPT distinguishes between main context (like RAM) and external context (similar to disk storage). This allows efficient handling of data exceeding the standard context window.

  • MemGPT equips LLMs with autonomous function calls, reducing the need for external intervention. It seamlessly manages the flow of control between memory, processing, and user interactions.
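To make the RAM/disk analogy concrete, here is a minimal sketch of a MemGPT-style memory manager, assuming hypothetical class and method names rather than the actual MemGPT codebase: when the bounded main context overflows, older messages are evicted to an external store, and a retrieval step pages relevant ones back in on demand.

```python
# Hypothetical sketch of MemGPT-style memory management: a bounded "main context"
# (analogous to RAM) backed by an unbounded "external context" (analogous to disk).
# Class names, the token-budget heuristic, and the keyword retrieval are
# illustrative assumptions, not the MemGPT implementation.
from collections import deque

class VirtualContext:
    def __init__(self, max_main_tokens: int = 200):
        self.max_main_tokens = max_main_tokens
        self.main_context = deque()    # what the LLM actually sees
        self.external_context = []     # evicted history, searchable

    def _tokens(self, text: str) -> int:
        return len(text.split())       # crude token estimate

    def _main_size(self) -> int:
        return sum(self._tokens(m) for m in self.main_context)

    def add_message(self, message: str) -> None:
        """Append a message, evicting oldest messages to external storage if needed."""
        self.main_context.append(message)
        while self._main_size() > self.max_main_tokens and len(self.main_context) > 1:
            self.external_context.append(self.main_context.popleft())

    def page_in(self, query: str, k: int = 3) -> list:
        """Retrieve relevant evicted messages back into view (keyword match here;
        a real system would use embeddings or another retrieval method)."""
        hits = [m for m in self.external_context if query.lower() in m.lower()]
        return hits[:k]

    def prompt(self, query: str) -> str:
        """Assemble the prompt the LLM sees: recalled history + recent context."""
        recalled = self.page_in(query)
        return "\n".join(["[recalled]"] + recalled + ["[recent]"] + list(self.main_context))

ctx = VirtualContext(max_main_tokens=20)
for turn in ["My name is Ada.", "I work on compilers.", "Today let's discuss parsers.",
             "Actually, focus on error recovery.", "Use examples in C."]:
    ctx.add_message(turn)

print(ctx.prompt("name"))  # the evicted "My name is Ada." is paged back in when relevant
```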

YouTube's AI Spotlight Moments 🕯️

YouTube has introduced Spotlight Moments, a new advertising package that uses AI to identify popular YouTube videos related to specific cultural events such as Halloween, major awards shows, or sporting events. Advertisers can target these moments and serve ads across a branded YouTube channel.

In a broader AI push, Google is working towards a new era in advertising. Google’s DemandGen uses generative AI for campaign creation and asset generation for YouTube and Google Search.

Tools of the Trade ⚒️

  • AgentLabs: An open-source and full-featured UI as a service for building chat-based AI Assistants in a snap. It provides built-in real-time, async I/O, conversation persistence, and more.

  • Cosine: Your AI co-developer that provides semantic code search, contextual understanding of codebases, and step-by-step guidance for implementing new features.

  • Equixly: A virtual hacker for securing APIs through AI-driven scanning, attack simulations based on OWASP Top 10, and comprehensive API mapping.

  • Nexus Trade: AI-powered trading platform that lets you create, test, optimize, and deploy trading strategies to the cloud with ease, along with a GPT-powered chatbot.

  • Intellize.ai: An AI-optimized observability tool for developers to easily search logs, build dashboards, and set alerts with simple text prompts.

😍 Enjoying so far, TWEET NOW to share with your friends!

Hot Takes 🔥

  • Open LLMs need to get organized and co-ordinated about sharing human feedback. It's the weakest link with Open LLMs right now. They don't have 100m+ people giving feedback like in the case of OpenAI/Anthropic/Bard. They can always progress with a Terms-of-Service arbitrage, but no at-scale customer would touch models fine-tuned with sketchy Human-alignment data "synthesized from" GPT-4. ~ Soumith Chintala

  • GPT-4 wrapper or not, users don’t care! All that matters: - will people pay? - is there a defensible long-term business to be built around it? ~ Matt Shumer

  • OpenAI is no longer a research lab. It's a company selling AI. ~ Pedro Domingos

Meme of the Day 🤡

[Meme image]

That’s all for today!

See you tomorrow with more such AI-filled content. Don’t forget to subscribe and give your feedback below 👇

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!!

PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
