AGI Warning Ahead ⚠️

PLUS: Intel's announcement from Computex, Coda Brain for Enterprise AI

Today’s top AI Highlights:

  1. Ex-OpenAI employees write an open letter with Geoffrey Hinton to protect AI whistleblowers

  2. Coda launches AI platform that answers all your company’s questions

  3. Intel strengthens AI portfolio with new AI processors and accelerators

  4. Build and deploy serverless RAG pipelines easily with this open-source AI tool

& so much more!

Read time: 3 mins

Latest Developments 🌍

The issue of safe and responsible AI development is more pressing than ever, and there has been heated debate over whether frontier AI companies can be left to self-regulate. At the center of this debate sits OpenAI, which has drawn the most criticism for putting AI safety on the back burner.

Amidst this, a group of current and former OpenAI employees has written an open letter calling for better whistleblower protections within the AI industry. Endorsed by prominent AI researchers Geoffrey Hinton and Yoshua Bengio, the letter argues that current protections are insufficient because they cover only illegal activities, not broader AI safety concerns. These employees fear retaliation for speaking out about the potential for AI to entrench inequalities, spread misinformation, and even threaten human existence.

The open letter highlights several specific issues: AI companies discouraging criticism, the lack of anonymous reporting channels, and the prevalence of non-disparagement clauses that can silence employees. In the absence of strong government oversight, the signatories argue, AI companies have little incentive to welcome open criticism or transparency.

The productivity company Coda is launching a new AI platform, Coda Brain, to make it easier for teams to access and use data within their organizations. Coda Brain, powered by Snowflake’s Cortex, understands your company data, can respond with text and tables, and is permission-aware. Instead of getting caught in a loop of searching for context, interrupting the flow of others to find it, or submitting internal requests just to see the same data presented in slightly different ways, you can simply ask Coda Brain.

Key Highlights:

  1. Coda Brain understands your company data. It can access data from a variety of sources, including internal systems, databases, and external apps. It uses Coda’s existing system of “Packs” to connect to your data, and it even lets you build your own Packs if you need a more niche integration.

  2. Coda Brain is secure. It only gives you access to the information you’re allowed to see. It’s designed to work with your company’s existing security protocols. You can use Coda Brain to ask questions about sensitive information, like employee data or financial records, without worrying about unauthorized access.

  3. Text and Tables. Powered by Snowflake’s Cortex Analyst text-to-SQL engine, Coda Brain returns not only unstructured responses (plain text) but also structured ones like charts and tables, and these responses can be refined further on the fly (see the illustrative sketch after this list).

  4. No hallucinations. To avoid the risk of LLM hallucinations, which can be especially problematic in an enterprise setting, Coda Brain cites its sources and includes reference links in its answers.
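If you’re curious what this text-to-SQL, answer-with-citations pattern looks like in code, here is a minimal sketch. It is purely illustrative and not Coda’s or Snowflake’s actual API: the `text_to_sql` stand-in for the model call and the toy `deals` table are assumptions made just for this example.

```python
# Illustrative text-to-SQL answer flow with a verifiable "citation".
# NOT Coda Brain's or Cortex Analyst's real API -- the model call is mocked
# and the schema is invented for this sketch.
import sqlite3

SCHEMA = "CREATE TABLE deals (region TEXT, quarter TEXT, amount REAL);"

def text_to_sql(question: str, schema: str) -> str:
    # Stand-in for the LLM call: a real engine would receive the question
    # plus the schema and return generated SQL.
    return "SELECT region, SUM(amount) AS total FROM deals GROUP BY region;"

def answer(question: str) -> dict:
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.executemany(
        "INSERT INTO deals VALUES (?, ?, ?)",
        [("EMEA", "Q1", 120.0), ("AMER", "Q1", 200.0), ("EMEA", "Q2", 90.0)],
    )
    sql = text_to_sql(question, SCHEMA)
    rows = conn.execute(sql).fetchall()
    # Return the structured result (a table) together with the SQL that
    # produced it, which serves as a citation the reader can verify.
    return {"table": rows, "citation": sql}

if __name__ == "__main__":
    print(answer("What are total deal amounts by region?"))
```

In a real permission-aware setup, the query would only run against data the asking user is already allowed to see.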

Intel made a splash at Computex 2024, announcing a stack of new AI technologies to boost performance and drive down costs across the entire AI ecosystem. From the data center to the edge and the PC, Intel is giving developers and businesses more tools to build and deploy powerful AI solutions while competing head-on with AMD. Here are the major releases:

  1. Xeon 6 Processors for AI Data Centers

    1. The first processor in the new Xeon 6 family is a powerful E-core (Efficient-core) chip code-named Sierra Forest, designed for high-density, scale-out workloads.

    2. Optimized for demanding workloads like cloud-native applications, content delivery networks, network microservices, and consumer digital services.

    3. Enables a 3-to-1 rack consolidation and achieves up to 4.2x the performance and 2.6x the performance per watt compared to 2nd Gen Xeon processors.

  2. Gaudi 3 AI Accelerator

    1. Announced last month, a Gaudi 3 AI accelerator kit contains eight Gaudi 3 accelerators and is designed for training and inference of LLMs.

    2. Priced at $125,000, approximately two-thirds the cost of comparable competing platforms.

    3. In a cluster of 8,192 accelerators, Gaudi 3 can train GenAI models up to 40% faster than Nvidia H100 GPU clusters. It also delivers a 15% faster training throughput on the Llama 2-70B model compared to Nvidia H100 for a 64-accelerator cluster.

  3. Lunar Lake Processors for AI Laptops

    1. Delivers significant improvements in AI processing crucial for making AI-powered PCs truly mainstream.

    2. Equipped with a 4th-generation Intel neural processing unit (NPU) delivering up to 48 TOPS of AI performance, along with the new Xe2-powered GPU, which delivers up to 67 TOPS of AI performance.

    3. Expected to ship in Q3 2024

😍 Enjoying so far, share it with your friends!

Tools of the Trade ⚒️

  1. SciPhi: An open-source platform to quickly build, deploy, and scale AI-powered RAG pipelines. You can customize configurations, manage infrastructure effortlessly, and optimize performance using built-in tools for monitoring and scaling (a minimal retrieval sketch follows at the end of this list).

  2. Final Round AI: Ace your interviews with this Interview Copilot. It helps you succeed in job interviews through resume revision, cover letter generation, real-time interview guidance, and post-interview follow-up support. It works seamlessly with popular meeting platforms like Zoom, Teams, and Google Meet.

  3. MarsCode: An AI-powered IDE that helps you write, debug, and optimize code quickly with features like code completion, generation, and explanation. It also supports one-click deployment, automatic testing, and seamless integration with various platforms for a streamlined development experience.

  4. Awesome LLM Apps: Build awesome LLM apps that use RAG to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos through simple text prompts. These apps let you retrieve information, engage in chat, and extract insights directly from content on these platforms.
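Since two of the tools above revolve around RAG, here is a minimal, generic sketch of the retrieval step at the heart of such a pipeline. It is not SciPhi’s actual API: the toy bag-of-words retriever and the sample documents are assumptions made for illustration, and a real pipeline would use embeddings, a vector store, and an LLM to generate the final answer from the retrieved chunks.

```python
# Toy retrieval step of a RAG pipeline: rank documents by similarity to the
# query, then hand the top chunks to an LLM as context (LLM call omitted).
from collections import Counter
import math

DOCS = [
    "Gaudi 3 kits contain eight accelerators and are priced at $125,000.",
    "Lunar Lake laptops are expected to ship in Q3 2024.",
    "Coda Brain answers questions over company data with citations.",
]

def vectorize(text: str) -> Counter:
    # Bag-of-words term counts; a real system would use dense embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

if __name__ == "__main__":
    # The retrieved chunk would normally be prepended to the LLM prompt.
    print(retrieve("When do Lunar Lake laptops ship?"))
```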

Hot Takes 🔥

  1. Just another day, waking up to find out that the undergrad hired by your grad student generated a $20,000 OpenAI API bill in the last three days, without your authorization. ~William Wang

  2. ChatGPT is down. What if it never comes back and we have to use our own brains to write stuff again? ~tokyo_todd

Meme of the Day 🤡

That’s all for today! See you tomorrow with more such AI-filled content.

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!

PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
