Skeleton-of-Thought for LLMs 💭

PLUS: LLM Hallucination Leaderboard, Google open-sources Project Guideline

Today’s top AI Highlights:

  1. Skeleton-of-Thought: LLMs Can Do Parallel Decoding

  2. Google Open-Sources Navigation Tech for the Visually Impaired

  3. Google PaLM Hallucinates the Most

  4. Transform Data into Multiple RAG Pipelines

& so much more!

Read time: 3 mins

Latest Developments 🌍

Laying the Skeleton First Before Answering 🏠

While LLMs have become invaluable in various applications, their user experience is often hampered by significant response latency. Skeleton-of-Thought (SoT) is an approach designed to cut end-to-end generation latency, inspired by how humans think and write: not always sequentially, but often by starting from a basic structure, or 'skeleton'.

Key Highlights:

  1. SoT first guides the LLM to produce a concise answer 'skeleton', then fleshes out each point in parallel. This mirrors the human strategy of organizing thoughts before detailing them, and it works with both open-source and closed-source LLMs (see the prompting sketch after these highlights).

  2. An adaptive version, SoT-R, determines the suitability of SoT based on the question type. It switches between SoT and standard decoding to maintain quality across various question categories, enhancing practical applicability.

  3. SoT and SoT-R have shown promising results in speeding up generation and improving answer quality. Tested on 9 open-source models and 3 API-based models on diverse datasets, these methods showcase efficiency gains and improvements in response diversity and relevance.
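To make the two-stage idea concrete, here is a minimal sketch of SoT-style prompting. It is not the paper's code: `llm` stands in for whatever chat-completion call you already use, and the prompt wording is an assumption, not the authors' exact prompts.

```python
# Minimal Skeleton-of-Thought sketch (illustrative only, not the paper's code).
# `llm` is any function that takes a prompt string and returns the model's reply.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def skeleton_of_thought(llm: Callable[[str], str], question: str, max_points: int = 5) -> str:
    # Stage 1: ask for a short skeleton -- numbered point titles only, no details.
    skeleton = llm(
        f"Outline the answer to the question below as at most {max_points} "
        f"numbered points of 3-5 words each, with no elaboration.\n\nQuestion: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand each point independently, so the calls can run in parallel.
    def expand(point: str) -> str:
        return llm(
            f"Question: {question}\nAnswer skeleton:\n{skeleton}\n\n"
            f"Write 1-2 sentences expanding only this point: {point}"
        )

    with ThreadPoolExecutor(max_workers=max(1, len(points))) as pool:
        expansions = list(pool.map(expand, points))

    # Stage 3: stitch the expanded points back together in skeleton order.
    return "\n".join(expansions)
```

The speedup comes from Stage 2: instead of one long sequential decode, the point expansions are issued as independent (and therefore parallelizable) requests.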

Google Gets a Face-PaLM 🤦

Vectara has released its LLM Hallucination Evaluation Model, which measures how often an LLM introduces hallucinations when summarizing a document, along with a leaderboard of popular models. As anyone would expect, GPT-4 hallucinated the least, while Google PaLM topped the board with the highest hallucination rate!

The leaderboard was built by first training a model to detect hallucinations in LLM outputs using various open-source datasets, and then having each LLM summarize 1,000 short documents via its public API, with instructions to adhere strictly to the facts presented in the documents.

Of these, 831 documents were summarized by every model, with the rest being rejected by at least one model due to content restrictions. The leaderboard thus reflects the overall accuracy (absence of hallucinations) and hallucination rate for each model.
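For intuition on how the two reported numbers relate, here is a small sketch of the arithmetic over the 831 shared documents. The consistency judgments themselves come from Vectara's evaluation model, which this snippet does not reproduce; it only shows that the hallucination rate is the complement of accuracy.

```python
# Illustrative arithmetic only -- the per-summary judgments are assumed to come
# from a hallucination detector such as Vectara's evaluation model.

def leaderboard_row(is_consistent: list[bool]) -> dict:
    """is_consistent[i] is True if summary i introduced no hallucination."""
    total = len(is_consistent)               # 831 shared documents per model
    accuracy = sum(is_consistent) / total    # share of hallucination-free summaries
    return {
        "accuracy_pct": round(accuracy * 100, 1),
        "hallucination_rate_pct": round((1 - accuracy) * 100, 1),
    }
```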

Google's Project Guideline for the Visually Impaired 👓

Google Research has recently open-sourced Project Guideline, a platform designed to aid visually impaired individuals in navigating outdoor paths independently. This initiative harnesses the power of machine learning and advanced computer vision to offer a more accessible world for people with visual impairments.

Key Highlights:

  1. The system guides users via audio signals based on their velocity and direction (a toy sketch of such an offset-to-audio cue follows these highlights). It also includes an obstacle detection feature using monocular depth estimation from single images, and a low-latency audio system for real-time navigational cues.

  2. Specifically developed for Google Pixel phones with Google Tensor chips, Project Guideline ensures efficient on-device ML processing. This results in a significant reduction in latency, facilitating immediate navigation instructions essential for real-time use.

  3. The open-source release includes the core platform, an Android app, ML models, and a 3D simulation framework. The inclusion of a simulator aids in rapid testing and prototyping in virtual environments, streamlining the development process.
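For a feel of the audio-guidance idea in the first highlight, here is a toy sketch, not Project Guideline's actual code: it maps a hypothetical lateral offset from the detected guideline to a stereo pan and cue intensity. The offset value, range, and output fields are all assumptions for illustration.

```python
# Toy sketch only -- not Project Guideline's implementation.
# Turns a hypothetical lateral offset from the detected guideline into an audio cue.

def steering_cue(offset_m: float, max_offset_m: float = 1.5) -> dict:
    """offset_m < 0 means the user has drifted left of the line, > 0 means right."""
    # Normalize and clamp the offset to [-1, 1].
    x = max(-1.0, min(1.0, offset_m / max_offset_m))
    return {
        "pan": -x,              # play the cue on the side the user should steer toward
        "intensity": abs(x),    # stronger cue the further the user is off the line
    }
```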

Tools of the Trade ⚒️

  • Trace: A design and developer tool that leverages the power of LLMs to generate UI code, preview it on the web and accelerate the app-building process.

  • Fastlane: All-in-one platform that provides powerful AI assistants and tools for different tasks like writing, generating images, academic research, learning languages, developing applications, browsing the web, interacting with files, and more.

  • RAGs v2: New updates to RAGs (a Streamlit app to create a RAG pipeline from a data source using natural language) include multiple RAG pipeline creation, customization, and better development quality through added linting and CI.

  • Code to Flow: Transforms code into interactive flowcharts, simplifying complex logic by visualizing nested loops and conditionals, streamlining the understanding and debugging of code.

😍 Enjoying it so far? TWEET NOW to share with your friends!

Hot Takes 🔥

  1. If it can scare Ilya, imagine how the normies are gonna react when they finally find out. ~ Bojan Tunguz

  2. Yann LeCun thinks the risk of AI taking over is miniscule. This means he puts a big weight on his own opinion and a miniscule weight on the opinions of many other equally qualified experts. ~ Geoffrey Hinton

Meme of the Day 🤡

r/ProgrammerHumor - weAreNotSameBruh

That’s all for today!

See you tomorrow with more such AI-filled content. Don’t forget to subscribe and give your feedback below 👇

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!!

PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
