Episodes

  • RT6. London AI Hub 2025-01-13
    2025/01/26

    Another great session: thoughts on AI in 2024 and how well we did with predictions, then a look at the new innovations of 2025 and what's in store. See https://tongfamily.com/2025/01/26/pod-london-ai-hub-january-13-2025/ for more details.

    2 hours 41 minutes
  • ST1. August Update on AI 2024-08-08
    2024/10/08

    There's been a hiatus, partly because we've been shipping products, but mostly because I lost all my skills at making new videos. I was stuck for a long time on Drop Zones and on doing a better intro and outro, but that's a digression for Final Cut Pro nerds.

    In this episode, we cover the latest AI trends as of August 8, 2024. The big news has been the shipment of so many Large Language Models (LLMs) and what that means for AI: way more choice and confusion.

    Plus the emergence of a much better set of tools that are also much smaller, called Small Language Models (sometimes SLMs), and of Agents that chop the problem into many small pieces.
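
    Purely as illustration (ours, not from the episode), here is a toy Python sketch of that agent idea, with made-up plan and run_step helpers standing in for real LLM calls:

    ```python
    # A toy sketch (purely illustrative) of the agent idea mentioned above:
    # chop a big request into small steps and run each with a specialized worker.
    def plan(task: str) -> list[str]:
        # A real agent would ask an LLM to plan; here we hard-code the split.
        return [f"research: {task}", f"draft: {task}", f"review: {task}"]

    def run_step(step: str) -> str:
        # Each small piece would go to an LLM or SLM tuned for that one job.
        return f"done({step})"

    results = [run_step(s) for s in plan("write the August AI update")]
    print(results)
    ```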

    I also wanted to introduce Steven to the mix. We are going to have a rotating set of co-hosts and solo episodes as well, so we can get the content out on time and not take 6 weeks for post-production. Thanks to the new intro and outro, that should be easier. I'm playing with these a lot, but check out https://tongfamily.com and https://tne.ai, where I hang out a bunch!

    53 minutes
  • RT3. AI Hardware Introduction
    2024/03/09

    This is another sort of nerdy side note. If anyone is still watching, this section is just to give intuition on the basics of the hardware. There are lots of assumptions about GPUs and CPUs that I wanted to make sure people understood. But the basics are that CPUs are tuned for lots of branches and different workflows, while GPUs are tuned for doing lots of the same thing, like matrix math. And because they are so fast, most of the job of the computer folks is "feeding the beast", that is, caching the most frequently used information so the processors don't have to wait.

    There are some errors, I think, in the levels of CPU and GPU performance, particularly in the cache performance, as it is not very clear how this works, and the results of course vary depending on the models of processors, so these are all approximations. Put better sources in the comments. I have all the sources listed in a spreadsheet that is part of this, and we are happy to send it to anyone who wants the source data. I'll fix these errors in later editions (as I'm obsessive that way).
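
    As a rough, hand-wavy illustration of that intuition (ours, not from the episode), compare one big uniform matrix multiply with the same arithmetic done one element at a time:

    ```python
    # A rough illustration (ours, not from the episode) of why "lots of the
    # same thing" is fast: a single vectorized matrix multiply vs. the same
    # work done one element at a time in interpreted Python.
    import time
    import numpy as np

    n = 128
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    # One big uniform workload: what GPUs (and SIMD CPU units) are tuned for.
    start = time.perf_counter()
    c = a @ b
    print(f"vectorized matmul:  {time.perf_counter() - start:.4f} s")

    # The same math as millions of tiny sequential steps: the time goes to
    # overhead and waiting on memory, not to arithmetic.
    start = time.perf_counter()
    c2 = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            c2[i, j] = s
    print(f"element-by-element: {time.perf_counter() - start:.4f} s")
    ```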

    Also, I'm quite proud of the HDR mix: using the latest OBS settings, producing in HDR in Final Cut Pro, and adjusting the video scope levels helps. The audio is a little hot and I'm sorry about that; I'll turn it down next time, I spent too much time in the red. My Scarlett needs to be at about 1 o'clock and it works.

    See https://youtu.be/FupclouzYTI for a video version, and more details at https://tongfamily.com/2024/03/08/pod-rt3-ai-hardware-introduction/.


    Chapters:

    • 00:00 AI Hardware Introduction
    • 00:42 Computer Engineering in Two Slides
    • 05:40 It's 165 Years to a Single Disk Access?!!
    • 14:12 Intel Architecture CPU
    • 17:03 What's all this about NVidia
    • 25:24 And now for something completely different, Apple
    • 29:45 Introduction Summary

    2024-03-08. Shot as UHD HEVC HDR PQ 10-bit using OBS Studio and Final Cut Pro. © 2024 Iron Snow Technologies, LLC. All Rights Reserved.

    30 minutes
  • DT6. AI Intro and Intuitions
    2024/03/08

    OK, we are not experts nor PhDs, so most of this is probably not technically correct, but the math and the concepts are so complicated that we thought it would be good to just get some intuition on what is happening. So this is a quick talk that summarizes readings from many different sources about the history of AI from the 1950s all the way to January 2024 or so. It is really hard to keep up, but much harder without some intuition about what is happening.

    We cover expert systems, the emergence of neural networks, Convolutional Neural Networks, and Recurrent Neural Networks. The "Attention is All You Need" paper led to Transformers, and then we give some intuition on how such a simple idea, that is, training on things, can lead to emergent and unexpected behaviors, and finally, some intuition on how generative images work.
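
    For the curious, here is a minimal sketch (ours, not from the talk) of the scaled dot-product attention at the core of the Transformer:

    ```python
    # A minimal NumPy sketch of scaled dot-product attention, the core
    # operation from the "Attention is All You Need" paper.
    import numpy as np

    def attention(Q, K, V):
        """Each output row is a weighted mix of the value rows V,
        weighted by how well that query matches each key."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of queries to keys
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V  # blend values by attention weight

    # Toy example: 4 tokens, 8-dimensional embeddings, self-attention.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    print(attention(x, x, x).shape)  # (4, 8): one mixed vector per token
    ```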

    You can go to YouTube to see the slides we are using, and find more information at Tongfamily.com

    Chapters:

    • 23:25 Attention is all you need Transformers
    • 27:36 But it's too Simple! Emergence is surprising
    • 33:04 What emerges inside a Transformer?
    • 43:01 One Model to Rule Them All
    • 47:54 Works for Image generation too


    53 minutes
  • DT5. 8 AI Diamonds for you. Insights from 2023 and 2024 Predictions
    2024/03/06
    Ok, you heard it from us first: this is a summary of the many things that happened in the AI moment of 2023. It's a huge list, but we go through 8 diamonds that are in the blizzard of Artificial Intelligence and machine learning work that has happened, plus some predictions for the future! Of course, all of these are random opinions for entertainment only. It's not like either of us knows the future or has any deep information, but surfing the Internet makes it easy to learn what others are thinking. See https://tongfamily.com/2024/03/05/pod-deon-and-tong-review-ai-and-predict-2024/ for details.

    And some errata and apologies. Sorry, it was too much to go back in and re-edit this, so what you hear is what you get. But:

    1. Of course we know that Andrej Karpathy's name isn't literally "St. Andrew", but having just spent a holiday in Scotland (what a place), it seemed like a small inside joke for us. Apologies, Mr. Karpathy, please keep the insights coming.
    2. There is some terminology and "slang" I didn't get quite right. For example, one abbreviation we had was vLLM for a Visual LLM (mainly because it doesn't fit on the slide). But to be clear (and apologies to the vLLM team), vLLM is an LLM inference and serving engine that uses PagedAttention (hence the v is for virtual, not visual) and focuses on which keys and values to store in the attention buffer. We updated the slides to say viLLM to distinguish, but not here. See https://blog.vllm.ai/2023/06/20/vllm.html, and the minimal usage sketch after the chapter list below.

    Chapters:

    • 0:00 Introduction
    • 01:13 In a Nutshell: 8 Diamonds
    • 05:59 Diamond 1: LLM as an OS
    • 14:52 D2: It's raining LLMs!
    • 34:21 D3: LLM Refinements
    • 1:08:13 D4: Training Magic Show
    • 1:24:59 D5: Programming LLMs
    • 1:31:28 D6: Hardware Advances
    • 1:35:09 D7: Safety, safety, safety
    • 1:36:39 D8: Customers and Industry
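
    As promised, a minimal usage sketch of the real vLLM library (ours, purely for illustration; the model name is just an example):

    ```python
    # A minimal sketch of the vLLM library mentioned in the errata above.
    # The model name is only an example; vLLM's PagedAttention manages the
    # KV cache in pages under the hood, which is where the "v" comes from.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)
    outputs = llm.generate(["So what does the v in vLLM stand for?"], params)
    print(outputs[0].outputs[0].text)
    ```
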
    1 hour 51 minutes
  • RT5. Testing, testing, 1, 2, 3...
    2024/01/02

    OK, this is a reboot of the systems, and we're getting ready for next year. The new pipeline is ready, and most importantly, the audio finally sounds better. Stay tuned for more soon!

    3 minutes
  • PT6. AI Geeks and Happy Birthday France
    2023/07/15
    Happy Birthday France! And a quick catch-up on podcasting: the audio level seems better and there is less "breathing", but there is some clipping of the sound, so it's not all the way tuned. I also hear clicking as it comes in and out; this could be related to the chain from the Shure microphone to the Sony camera to the Elgato S60+. https://tongfamily.com/2023/07/14/pt6-ai-geeks-and-happy-birthday-france/
    46 minutes
  • DT4. AI Adventurers! 2023-06-24
    2023/06/25

    The AI Adventurers return! We are back after two months off, retooling our brains and various ventures to be AI-Native. It's not easy, and we talk about why. What does it mean to look at your software people and decide if they are type 1, 2, or 3? And how hard it is to make it all work.

    Show notes at Tongfamily.com

    40 minutes