
AI Pin Startup Humane up for Sale 🏷️

PLUS: PyTorch library for robotic training, Anthropic peeks inside the mind of LLMs

Today's top AI Highlights:

  1. Anthropic researchers peek inside black box LLMs to learn how AI 'thinks'

  2. Hugging Face releases a new PyTorch library for real-world robotic training

  3. Humane is seeking a buyer after AI Pin's disappointing launch

  4. An AI notepad for meetings that turns unstructured, patchy notes into clean, shareable notes

& so much more!

Read time: 3 mins

Latest Developments 🌍

It's a major challenge to understand how LLMs work internally. We treat them like black boxes: input goes in, output comes out, and we often don't know how or why. This lack of transparency raises concerns about their safety and reliability. But a team of researchers at Anthropic has made significant progress in peering inside these black boxes. They've developed a method to map the concepts that LLMs use to "think" and generate text. This discovery could have huge implications for making AI safer and more reliable.

Key Highlights:

  1. Millions of Concepts Mapped: The team mapped millions of concepts represented inside Claude 3 Sonnet using a technique called 'dictionary learning.' This technique finds patterns of neuron activations that correspond to specific concepts.

  2. Features Go Beyond the Surface: The mapped concepts are more complex than simple words. They represent things like cities, people, scientific fields, and even abstract concepts like gender bias or keeping secrets. The researchers were even able to identify a feature related to 'inner conflict.'

  3. Features Can Be Manipulated: These features can also be manipulated, causing changes in the model's output. For example, artificially amplifying the 'Golden Gate Bridge' feature made the LLM obsessed with the bridge, mentioning it in response to almost any question (see the sketch after this list).

  4. Features Linked to Potential Misuse: The research identified features connected to potentially harmful capabilities like creating code backdoors or developing biological weapons. This is a critical finding as it helps pinpoint areas where LLMs could be misused.

  5. Potential for Safer AI: This research opens the door to new ways to improve AI safety. The findings could be used to identify and mitigate dangerous behaviors, steer LLMs towards desirable outcomes, and even remove harmful content from their output.
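To make the idea concrete, here is a minimal, illustrative sketch of dictionary learning with a sparse autoencoder over (fake) model activations, plus a crude version of the "amplify one feature" trick. The dimensions, data, and training setup are invented for illustration and are not Anthropic's actual code or scale.

```python
# Minimal, illustrative sketch of dictionary learning with a sparse autoencoder
# over model activations, plus the "amplify one feature" steering trick.
# All dimensions and data here are invented stand-ins, not Anthropic's setup.
import torch
import torch.nn as nn

d_model, n_features = 64, 256           # hypothetical activation width and dictionary size
acts = torch.randn(4096, d_model)       # stand-in for activations collected from an LLM

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))   # sparse, non-negative "feature" activations
        return self.decoder(feats), feats

sae = SparseAutoencoder(d_model, n_features)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

for step in range(200):
    recon, feats = sae(acts)
    # Reconstruction loss plus an L1 penalty pushes the dictionary toward sparse features
    loss = (recon - acts).pow(2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Steering": amplify one learned feature and decode it back into activation space,
# loosely analogous to boosting the 'Golden Gate Bridge' feature.
with torch.no_grad():
    _, feats = sae(acts[:1])
    feats[0, 42] *= 10.0                 # artificially amplify one (arbitrary) feature
    steered = sae.decoder(feats)         # edited activation to feed back into the model
```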

Hugging Face has released LeRobot, a new PyTorch-based library to make real-world robotics more accessible to researchers and developers. LeRobot provides a suite of pre-trained models, datasets with human demonstrations, and simulation environments, all designed to help researchers train robots efficiently. It is designed to be user-friendly, with clear documentation and examples. Remi Cadene, who built the library, says, "LeRobot is to robotics what the Transformers library is to NLP."

Key highlights:

  1. LeRobot offers a variety of pre-trained models and datasets, including human demonstrations, which can be used to train models for various robotics tasks. Users can start building their own robotics applications without the need to collect data.

  2. LeRobot includes support for Weights & Biases for experiment tracking, so users can monitor training progress and compare hyperparameters (a generic version of this workflow is sketched after this list).

  3. LeRobotā€™s codebase has been validated by replicating state-of-the-art results in simulations, ensuring that the library is robust and reliable.
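For a flavor of the workflow LeRobot is built around, here is a generic behavior-cloning loop over synthetic human demonstrations with Weights & Biases tracking. The dataset, policy, and dimensions are toy stand-ins, not LeRobot's own classes or API.

```python
# Generic behavior-cloning loop: train a policy on human demonstrations and
# track the run with Weights & Biases. Everything here is a toy stand-in,
# not LeRobot's own classes.
import torch
import torch.nn as nn
import wandb

obs_dim, act_dim = 14, 6                      # hypothetical robot state / action sizes
demos = torch.utils.data.TensorDataset(
    torch.randn(2048, obs_dim),               # observations from human demonstrations
    torch.randn(2048, act_dim),               # the expert actions taken at those states
)
loader = torch.utils.data.DataLoader(demos, batch_size=64, shuffle=True)

policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Offline mode so the sketch runs without a W&B account; drop it to log to the cloud.
wandb.init(project="robot-imitation-sketch", mode="offline", config={"lr": 1e-3})

for epoch in range(5):
    for obs, expert_action in loader:
        loss = nn.functional.mse_loss(policy(obs), expert_action)  # imitate the expert
        opt.zero_grad()
        loss.backward()
        opt.step()
        wandb.log({"train/loss": loss.item()})
```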

Humane, the company behind the AI Pin that was supposed to be the next big thing, is reportedly looking for a buyer. The company, founded by ex-Apple employees Imran Chaudhri and Bethany Bongiorno, has faced criticism from all sides after the AI Pin's underwhelming launch.

Last year, investors valued the company at $850 million. It raised $230 million from a roster of high-profile backers, including OpenAI CEO Sam Altman, even before it sold its first product. Companies like Meta and Rabbit have been trying to carve out a space for AI wearables, but it doesn't seem to be happening anytime soon.

This is a classic case of a startup raising millions by riding the "AI" wave and becoming overvalued, only for the hype to burst later. However, the founders may still be able to turn things around by steadily improving the technology and making the AI Pin a viable product.

A demonstration of the Laser Ink Display projection of Humane's wearable AI Pin.

😍 Enjoying so far? Share it with your friends!

Tools of the Trade ⚒️

  1. Granola: An AI notepad for people who attend back-to-back meetings. It transcribes your meeting audio and turns raw, unstructured notes into polished, readable notes you can share with everyone. With GPT-4 built in, it can also help you work through your post-meeting action items.

  2. Multi Agent Flow in Flowise AI: Have multiple AI agents work together on complex tasks by delegating specific roles and responsibilities. Powered by LangGraph, each agent gets dedicated prompts, tools, and models, and reflective loops for auto-correction improve performance on long-running tasks (a conceptual sketch of this pattern follows this list).

  3. Codium Cover Agent: Automates the generation of unit tests and enhances code coverage for software projects. It leverages generative AI to create and validate tests and streamline development workflows. It runs from a terminal, with integrations for popular CI platforms planned.

  4. FlyCI: Automatically fixes your failing CI builds, helping you save time by reducing debugging effort. You can get started by changing just one line in your GitHub workflow, and focus on building instead of troubleshooting.

  5. Awesome LLM Apps: Build awesome LLM apps that use RAG to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos through simple text prompts. These apps let you retrieve information, chat, and extract insights directly from content on these platforms (see the minimal RAG sketch below).
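Here is a conceptual, framework-free sketch of the multi-agent pattern described in the Flowise item above: a worker agent drafts, a reviewer agent critiques, and a reflective loop retries until approval. The `call_llm` stub and the agent prompts are hypothetical placeholders, not Flowise or LangGraph code.

```python
# Framework-free sketch of the worker/reviewer pattern with a reflective loop.
# `call_llm` is a hypothetical stub standing in for whichever model each agent
# would be routed to; swap in a real API client to use it.
def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    return f"[{system_prompt[:24]}...] draft for: {user_message[:40]}"

def worker(task: str, feedback: str = "") -> str:
    system = "You are a research assistant. Complete the task thoroughly."
    return call_llm(system, f"Task: {task}\nReviewer feedback: {feedback}")

def reviewer(task: str, draft: str) -> str:
    system = "You are a strict reviewer. Reply APPROVED or list concrete fixes."
    return call_llm(system, f"Task: {task}\nDraft: {draft}")

def run(task: str, max_rounds: int = 3) -> str:
    draft, feedback = "", ""
    for _ in range(max_rounds):          # reflective loop for auto-correction
        draft = worker(task, feedback)
        feedback = reviewer(task, draft)
        if "APPROVED" in feedback:       # reviewer signs off, stop iterating
            break
    return draft

print(run("Summarize this week's AI hardware news"))
```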
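And here is a minimal sketch of the RAG pattern these apps rely on: embed document chunks, retrieve the ones most similar to the question, and stuff them into the prompt. The bag-of-words "embeddings" and sample documents are toy stand-ins so the snippet runs without any model; a real app would use a proper embedding model and send the prompt to an LLM.

```python
# Minimal RAG sketch: embed document chunks, retrieve the most similar ones for
# a question, and build the prompt. Bag-of-words "embeddings" are a toy stand-in.
import math
from collections import Counter

docs = [
    "The pull request adds retry logic to the GitHub client.",
    "This YouTube video explains how vector databases index embeddings.",
    "The PDF report describes quarterly revenue growth.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())      # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "What does the video say about vector databases?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # in a real app, this prompt goes to the LLM
```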

Hot Takes 🔥

  1. If you are a student interested in building the next generation of AI systems, don't work on LLMs ~Yann LeCun

  2. Hypothetically if I was the CEO of a major AI company (OpenAI) and I wanted the industry to be regulated to ensure there is no competition, what would I do? Maybe clone the voice of a mainstream actor and then release it causing an uproar and demands for regulation from Hollywood and the general public. The company is not public so there are no repercussions in the equity markets + I can just do a monetary settlement with the actor (Scarlett). Some 4-D chess here? ~Karma

Meme of the Day 🤔

HAL 9000 meme (OpenAI ChatGPT GPT-4o meme)

That's all for today! See you tomorrow with more such AI-filled content.

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what's trending!

PS: I curate this AI newsletter every day for FREE; your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
