Episodes

  • #250 Pedro Domingos on the Real Path to AGI
    2025/04/24

    This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.

    To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

    Can AI Ever Reach AGI? Pedro Domingos Explains the Missing Link

    In this episode of Eye on AI, renowned computer scientist and author of The Master Algorithm, Pedro Domingos, breaks down what’s still missing in our race toward Artificial General Intelligence (AGI) — and why the path forward requires a radical unification of AI's five foundational paradigms: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers.

    Topics covered:

    • Why deep learning alone won’t achieve AGI

    • How reasoning by analogy could unlock true machine creativity

    • The role of evolutionary algorithms in building intelligent systems

    • Why transformers like GPT-4 are impressive—but incomplete

    • The danger of hype from tech leaders vs. the real science behind AGI

    • What the Master Algorithm truly means — and why we haven’t found it yet

    Pedro argues that creativity is easy, reliability is hard, and that reasoning by analogy — not just scaling LLMs — may be the key to Einstein-level breakthroughs in AI.

    Whether you're an AI researcher, machine learning engineer, or just curious about the future of artificial intelligence, this is one of the most important conversations on how to actually reach AGI.

    📚 About Pedro Domingos: Pedro is a professor at the University of Washington and author of the bestselling book The Master Algorithm, which explores how the unification of AI's "five tribes" could produce the ultimate learning algorithm.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) The Five Tribes of AI Explained

    (02:23) The Origins of The Master Algorithm

    (08:22) Designing with Bit Strings: Radios, Robots & More

    (10:46) Fitness Functions vs Reward Functions in AI

    (15:51) What Is Reasoning by Analogy in AI?

    (18:38) Kernel Machines and Support Vector Machines Explained

    (22:23) Case-Based Reasoning and Real-World Use Cases

    (27:38) Are AI Tribes Still Siloed or Finally Collaborating?

    (32:42) Why AI Needs a Deeply Unified Master Algorithm

    (36:40) Creativity vs Reliability in AI

    (39:14) Can AI Achieve Scientific Breakthroughs?

    (41:26) Why Reasoning by Analogy Is AI’s Missing Link

    (45:10) Evolutionaries: The Most Distant Tribe in AI

    (48:41) Will Quantum Computing Help AI Reach AGI?

    (53:15) Are We Close to the Master Algorithm?

    (57:44) Tech Leaders, Hype & the Reality of AGI

    (01:04:06) The AGI Spectrum: Where We Are & What’s Missing

    (01:06:18) Pedro’s Research Focus

    1 hr 8 min
  • #249 Brice Challamel: How Moderna is Using AI to Disrupt Modern Healthcare
    2025/04/20

    This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less.

    On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today’s innovative AI tech companies who upgraded to OCI…and saved.

    Offer only for new US customers with a minimum financial commitment. See if you qualify for half off at http://oracle.com/eyeonai

    In this episode of Eye on AI, Craig Smith sits down with Brice Challamel, Head of AI Products and Innovation at Moderna, to explore how one of the world’s leading biotech companies is embedding artificial intelligence across every layer of its business—from drug discovery to regulatory approval.

    Brice breaks down how Moderna treats AI not just as a tool, but as a utility—much like electricity or the internet—designed to empower every employee and drive innovation at scale. With over 1,800 GPTs in production and thousands of AI solutions running on internal platforms like Compute and MChat, Moderna is redefining what it means to be an AI-native company.

    Key topics covered in this episode:

    • How Moderna operationalizes AI at scale

    • GenAI as the new interface for machine learning

    • AI’s role in speeding up drug approvals and clinical trials

    • The future of personalized cancer treatment (INT)

    • Moderna’s platform mindset: AI + mRNA = next-gen medicine

    • Collaborating with the FDA using AI-powered systems

    Don’t forget to like, comment, and subscribe for more interviews at the intersection of AI and innovation.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Preview

    (02:49) Brice Challamel’s Background and Role at Moderna

    (05:51) Why AI Is Treated as a Utility at Moderna

    (09:01) Moderna's AI Infrastructure

    (11:53) GenAI vs Traditional ML

    (14:59) Combining mRNA and AI as Dual Platforms

    (18:15) AI’s Impact on Regulatory & Clinical Acceleration

    (23:46) The Five Core Applications of AI at Moderna

    (26:33) How Teams Identify AI Use Cases Across the Business

    (29:01) Collaborating with the FDA Using AI Tools

    (33:55) How Moderna Is Personalizing Cancer Treatments

    (36:59) The Role of GenAI in Medical Care

    (40:10) Producing Personalized mRNA Medicines

    (42:33) Why Moderna Doesn’t Sell AI Tools

    (45:30) The Future: AI and Democratized Biotech

    50 min
  • #248 Pedro Domingos: How Connectionism Is Reshaping the Future of Machine Learning
    2025/04/17

    This episode is sponsored by Indeed.

    Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster.

    Get a $75 Sponsored Job Credit to boost your job’s visibility! Claim your offer now: https://www.indeed.com/EYEONAI

    In this episode, renowned AI researcher Pedro Domingos, author of The Master Algorithm, takes us deep into the world of Connectionism—the AI tribe behind neural networks and the deep learning revolution.

    From the birth of neural networks in the 1940s to the explosive rise of transformers and ChatGPT, Pedro unpacks the history, breakthroughs, and limitations of connectionist AI. Along the way, he explores how supervised learning continues to quietly power today’s most impressive AI systems—and why reinforcement learning and unsupervised learning are still lagging behind.

    We also dive into:

    • The tribal war between Connectionists and Symbolists

    • The surprising origins of Backpropagation

    • How transformers redefined machine translation

    • Why GANs and generative models exploded (and then faded)

    • The myth of modern reinforcement learning (DeepSeek, RLHF, etc.)

    • The danger of AI research narrowing too soon around one dominant approach

    Whether you're an AI enthusiast, a machine learning practitioner, or just curious about where intelligence is headed, this episode offers a rare deep dive into the ideological foundations of AI—and what’s coming next.

    Don’t forget to subscribe for more episodes on AI, data, and the future of tech.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) What Are Generative Models?

    (03:02) AI Progress and the Local Optimum Trap

    (06:30) The Five Tribes of AI and Why They Matter

    (09:07) The Rise of Connectionism

    (11:14) Rosenblatt’s Perceptron and the First AI Hype Cycle

    (13:35) Backpropagation: The Algorithm That Changed Everything

    (19:39) How Backpropagation Actually Works

    (21:22) AlexNet and the Deep Learning Boom

    (23:22) Why the Vision Community Resisted Neural Nets

    (25:39) The Expansion of Deep Learning

    (28:48) NetTalk and the Baby Steps of Neural Speech

    (31:24) How Transformers (and Attention) Transformed AI

    (34:36) Why Attention Solved the Bottleneck in Translation

    (35:24) The Untold Story of Transformer Invention

    (38:35) LSTMs vs. Attention: Solving the Vanishing Gradient Problem

    (42:29) GANs: The Evolutionary Arms Race in AI

    (48:53) Reinforcement Learning Explained

    (52:46) Why RL Is Mostly Just Supervised Learning in Disguise

    (54:35) Where AI Research Should Go Next

    1 hr
  • #247 Barr Moses: Why Reliable Data is Key to Building Good AI Systems
    2025/04/13

    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

    NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

    In this episode of Eye on AI, Craig Smith sits down with Barr Moses, Co-Founder & CEO of Monte Carlo, the pioneer of data and AI observability. Together, they explore the hidden force behind every great AI system: reliable, trustworthy data.

    With AI adoption soaring across industries, companies now face a critical question: Can we trust the data feeding our models? Barr unpacks why data quality is more important than ever, how observability helps detect and resolve data issues, and why clean data—not access to GPT or Claude—is the real competitive moat in AI today.

    What You’ll Learn in This Episode:

    • Why access to AI models is no longer a competitive advantage

    • How Monte Carlo helps teams monitor complex data estates in real-time

    • The dangers of “data hallucinations” and how to prevent them

    • Real-world examples of data failures and their impact on AI outputs

    • The difference between data observability and explainability

    • Why legacy methods of data review no longer work in an AI-first world

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Intro

    (01:08) How Monte Carlo Fixed Broken Data

    (03:08) What Is Data & AI Observability?

    (05:00) Structured vs Unstructured Data Monitoring

    (08:48) How Monte Carlo Integrates Across Data Stacks

    (13:35) Why Clean Data Is the New Competitive Advantage

    (16:57) How Monte Carlo Uses AI Internally

    (19:20) 4 Failure Points: Data, Systems, Code, Models

    (23:08) Can Observability Detect Bias in Data?

    (26:15) Why Data Quality Needs a Modern Definition

    (29:22) Explosion of Data Tools & Monte Carlo’s 50+ Integrations

    (33:18) Data Observability vs Explainability

    (36:18) Human Evaluation vs Automated Monitoring

    (39:23) What Monte Carlo Looks Like for Users

    (46:03) How Fast Can You Deploy Monte Carlo?

    (51:56) Why Manual Data Checks No Longer Work

    (53:26) The Future of AI Depends on Trustworthy Data

    56 min
  • #246 Will Grannis: How Google Cloud is Powering the Future of Agentic AI
    2025/04/09

    This episode is sponsored by Thuma.

    Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.

    To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

    What happens when AI agents start negotiating, automating workflows, and rewriting how the enterprise world operates?

    In this episode of the Eye on AI podcast, Will Grannis, CTO of Google Cloud, reveals how Google is leading the charge into the next frontier of artificial intelligence: agentic AI. From multi-agent systems that can file your expenses to futuristic R2-D2-style assistants advising on real-time race strategy, this episode dives deep into how AI is no longer just about models—it's about autonomous action.


    In this episode, we explore:

    • How AgentSpace is transforming how enterprises build AI agents

    • The evolution from rule-based workflows to intelligent orchestration

    • Real-world use cases: expense automation, content creation, code generation

    • Trust, sovereignty, and securing agentic systems at scale

    • The future of multi-agent ecosystems and AI-driven scientific discovery

    • How large enterprises can match startup agility using their data advantage

    Whether you're a founder, engineer, or enterprise leader—this episode will shift how you think about deploying AI in the real world.

    Subscribe for more deep dives with tech leaders and AI visionaries.

    Drop a comment with your thoughts on where agentic AI is headed!

    (00:00) Preview and Intro

    (02:34) Will Grannis’ Role at Google Cloud

    (05:14) Origins of Agentic Workflows at Google

    (09:10) How Generative AI Changed the Agent Game

    (12:29) Agents, Tool Access & Trust Infrastructure

    (14:01) What Is AgentSpace?

    (16:30) Creative & Marketing Agents in Action

    (23:29) Core Components of Building Agents

    (25:29) Introducing the Agent Garden

    (28:06) The “Cloud of Connected Agents” Concept

    (33:53) Solving Agent Quality & Self-Evaluation

    (37:19) The Future of Autonomous Finance Agents

    (40:55) How Enterprises Choose Cloud Partners for Agents

    (43:50) Google Cloud’s Principles in Practice

    (46:27) Gemini’s Context Power in Cybersecurity

    (49:50) Robotics and R2-D2-Inspired AI Projects

    (52:39) How to Try AgentSpace Yourself

    58 min
  • #245 Rajat Taneja: Visa's President of Technology Reveals Their $3.3 Billion AI Strategy
    2025/04/02

    This episode is sponsored by Thuma.

    Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.

    To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai



    Visa’s President of Technology, Rajat Taneja, pulls back the curtain on the $3.3 billion AI transformation powering one of the world’s most trusted financial networks.

    In this episode, Taneja shares how Visa—a company processing over $16 trillion annually across 300 billion real-time transactions—is leveraging AI not just to stop fraud, but to redefine the future of commerce.

    From deep neural networks trained on decades of transaction data to generative AI tools powering next-gen agentic systems, Visa has quietly been an AI-first company since the 1990s. Now, with 500+ petabytes of data and 2,900 open APIs, it’s preparing for a future where agents, biometrics, and behavioral signals shape every interaction.

    Taneja also reveals how Visa’s models can mimic bank decisions in milliseconds, stop enumeration attacks, and even detect fraud based on how you type. This is AI at global scale—with zero room for error.

    What You’ll Learn in This Episode:

    • How Visa’s $3.3B data platform powers 24/7 AI-driven decisioning

    • The fraud models behind stopping $40 billion in criminal transactions

    • What “agentic commerce” means—and why Visa is betting big on it

    • How Visa uses behavioral biometrics to detect account takeovers

    • Why Visa rebuilt its infrastructure for the AI era—10 years ahead of the curve

    • The role of generative AI, biometric identity, and APIs in the next wave of payments

    The future of commerce isn’t just cashless—it’s intelligent, autonomous, and trust-driven.

    If you’re curious about how AI is redefining payments, security, and digital identity at massive scale, this episode is essential viewing.

    Subscribe for more deep dives into the future of AI, commerce, and innovation.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Introduction

    (02:57) Meet Rajat Taneja, Visa’s President of Technology

    (04:02) Scaling AI for 300 Billion Transactions Annually

    (05:27) The Models Behind Visa’s Fraud Detection

    (08:02) Visa’s In-House AI Models vs Open-Source Tools

    (10:54) Inside Visa’s $3.3B AI Data Platform

    (12:29) Visa’s Role in E-Commerce Innovation

    (16:24) Biometrics, Identity & Tokenization at Visa

    (21:14) Visa’s Vision for AI-Driven Commerce

    25 min
  • #244 Yoav Shoham on Jamba Models, Maestro and The Future of Enterprise AI
    2025/03/27

    This episode is sponsored by the DFINITY Foundation.

    DFINITY Foundation's mission is to develop and contribute technology that enables the Internet Computer (ICP) blockchain and its ecosystem, aiming to shift cloud computing into a fully decentralized state.

    Find out more at https://internetcomputer.org/

    In this episode of Eye on AI, Yoav Shoham, co-founder of AI21 Labs, shares his insights on the evolution of AI, touching on key advancements such as Jamba and Maestro. From the early days of his career to the latest developments in AI systems, Yoav offers a comprehensive look into the future of artificial intelligence.

    Yoav opens up about his journey in AI, beginning with his academic roots in game theory and logic, followed by his entrepreneurial ventures that led to the creation of AI21 Labs. He explains the founding of AI21 Labs and the company's mission to combine traditional AI approaches with modern deep learning methods, leading to innovations like Jamba—a highly efficient hybrid AI model that’s disrupting the traditional transformer architecture.

    He also introduces Maestro, AI21’s orchestrator that works with multiple large language models (LLMs) and AI tools to create more reliable, predictable, and efficient systems for enterprises. Yoav discusses how Maestro is tackling real-world challenges in enterprise AI, moving beyond flashy demos to practical, scalable solutions.

    Throughout the conversation, Yoav emphasizes the limitations of current large language models (LLMs), even those with reasoning capabilities, and explains how AI systems, rather than just pure language models, are becoming the future of AI. He also delves into the philosophical side of AI, discussing whether models truly "understand" and what that means for the future of artificial intelligence.

    Whether you’re deeply invested in AI research or curious about its applications in business, this episode is filled with valuable insights into the current and future landscape of artificial intelligence.

    Stay Updated:

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction: The Future of AI Systems

    (02:33) Yoav’s Journey: From Academia to AI21 Labs

    (05:57) The Evolution of AI: Symbolic AI and Deep Learning

    (07:38) Jurassic One: AI21 Labs’ First Language Model

    (10:39) Jamba: Revolutionizing AI Model Architecture

    (16:11) Benchmarking AI Models: Challenges and Criticisms

    (22:18) Reinforcement Learning in AI Models

    (24:33) The Future of AI: Is Jamba the End of Larger Models?

    (27:31) Applications of Jamba: Real-World Use Cases in Enterprise

    (29:56) The Transition to Mass AI Deployment in Enterprises

    (33:47) Maestro: The Orchestrator of AI Tools and Language Models

    (36:03) GPT-4.5 and Reasoning Models: Are They the Future of AI?

    (38:09) Yoav’s Pet Project: The Philosophical Side of AI Understanding

    (41:27) The Philosophy of AI Understanding

    (45:32) Explanations and Competence in AI

    (48:59) Where to Access Jamba and Maestro

    52 min
  • #243 Greg Osuri: Why the Future of AI Depends on Decentralized Cloud Platforms
    2025/03/18

    This episode is sponsored by Indeed.

    Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster.

    Get a $75 Sponsored Job Credit to boost your job’s visibility! Claim your offer now: https://www.indeed.com/EYEONAI

    Greg Osuri’s Vision for Decentralized Cloud Computing | The Future of AI & Web3 Infrastructure

    The cloud is broken—can decentralization fix it? In this episode, Greg Osuri, founder of Akash Network, shares his groundbreaking approach to decentralized cloud computing and how it's disrupting hyperscalers like AWS, Google Cloud, and Microsoft Azure.

    Discover how Akash Network’s peer-to-peer marketplace is slashing cloud costs, unlocking unused compute power, and paving the way for AI-driven infrastructure without Big Tech’s control.

    What You'll Learn in This Episode:
    - Why AI training is hitting an energy bottleneck and how decentralization solves it
    - How Akash Network creates a global marketplace for underutilized compute power
    - The role of blockchain in securing cloud resources and enforcing smart contracts
    - The privacy risks of hyperscalers—and why sovereign AI in the home is the future
    - How Akash Network is evolving from a resource marketplace to a full-fledged services economy
    - The future of AI, energy-efficient cloud solutions, and decentralized infrastructure

    The battle for the future of cloud computing is on—and decentralization is winning. If you're interested in AI, blockchain, Web3, or the economics of cloud infrastructure, this episode is a must-watch!

    Stay Updated:
    Craig Smith Twitter: https://twitter.com/craigss
    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    (00:00) Introduction & The Biggest Challenges in AI Training
    (02:36) Greg Osuri’s Background
    (04:50) The Problem with AWS, Google Cloud & Traditional Cloud Providers
    (06:40) How To Use Blockchain for a Decentralized Cloud
    (10:17) Akash Network’s Marketplace Matches Compute Buyers & Sellers
    (14:42) Security & Privacy: Protecting Users from Data Risks
    (18:25) The Energy Crisis: Why Hyperscalers Are Unsustainable
    (21:51) The Future of AI: Decentralized Cloud & Home AI Computing
    (26:42) How AI Workloads Are Routed & Optimized
    (30:24) Big Companies Using Akash Network: NVIDIA, Prime Intellect & More
    (45:49) Building a Decentralized AI Services Marketplace
    (55:09) Why the Future of AI Needs a Decentralized Cloud

    59 min