• $1 billion to build AI models with emotional intelligence
    2025/10/31

    What if AI Could Understand Your Emotions?


    Have you ever wondered what it would be like if AI could truly understand and respond to your emotions? In today's episode of The Deep Dive, we're exploring a groundbreaking AI funding story that shifts the focus from sheer computational power to emotional intelligence. We're delving into the mission of a new startup, Humans&, which aims to create AI that is not just smart, but emotionally intelligent. This episode challenges the conventional notion of AI as cold and machine-like and instead envisions a future where AI can empathize and collaborate effectively with humans.


    Eric Zelikman: The Visionary Behind Emotionally Intelligent AI


    Leading this innovative venture is Eric Zelikman, a former Stanford PhD student and a key figure in the AI community. Zelikman has a background in logic and reasoning, having contributed significantly to advancements in AI language models. His work introduced the concept of a "scratch pad" for internal reasoning, which helped improve AI's logical flow. Now, he's making a bold leap into the realm of emotions, raising a staggering $1 billion at a $4 billion valuation to bring his vision to life. Investors are not just betting on the idea of emotionally intelligent AI; they are backing Zelikman's expertise and track record in the field.
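
    To make the "scratch pad" idea concrete, here is a minimal sketch of scratchpad-style prompting in Python. It is an illustration only, not Zelikman's actual method: the prompt wording and the `call_model` hook are placeholders for whichever model client you use.

    ```python
    # Sketch of scratchpad prompting: the model reasons inside <scratchpad> tags,
    # and only the text after "Answer:" is shown to the user.
    SCRATCHPAD_PROMPT = """Answer the question below.
    First think step by step inside <scratchpad>...</scratchpad> tags.
    Then give the final answer on a single line starting with "Answer:".

    Question: {question}
    """

    def extract_answer(completion: str) -> str:
        """Discard the hidden reasoning and keep only the final answer line."""
        for line in completion.splitlines():
            if line.startswith("Answer:"):
                return line.removeprefix("Answer:").strip()
        return completion.strip()  # fallback if the model ignored the format

    def answer(question: str, call_model) -> str:
        # call_model is any function that maps a prompt string to a completion string.
        completion = call_model(SCRATCHPAD_PROMPT.format(question=question))
        return extract_answer(completion)
    ```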


    The Future of AI: Empathy and Human Collaboration


    This episode unpacks Zelikman's critique of current AI models, which he argues are too detached and fail to maintain emotional continuity in conversations. Humans& aims to address this by developing AI that remembers emotional context and can empathize with users. The implications of such technology are profound, extending beyond improved chatbots to potentially revolutionizing fields like cancer research, education, and diplomacy. By fostering better human collaboration through empathy, AI could become a powerful ally in tackling complex global challenges. The episode leaves us pondering whether the $4 billion valuation is justified and what the future holds if AI can genuinely understand human emotions and motivations.
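
    As a rough illustration of what "remembering emotional context" could mean in practice, here is a small sketch in Python. Every name and field in it is invented for the example; nothing below describes how Humans& actually works.

    ```python
    # Toy store of per-user emotional observations that can be summarized back
    # into the system prompt of the next conversation. Purely illustrative.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class EmotionalNote:
        timestamp: datetime
        feeling: str      # e.g. "frustrated", "excited"
        topic: str        # what the feeling was about

    @dataclass
    class EmotionalMemory:
        notes: list[EmotionalNote] = field(default_factory=list)

        def record(self, feeling: str, topic: str) -> None:
            self.notes.append(EmotionalNote(datetime.now(timezone.utc), feeling, topic))

        def summary(self, max_notes: int = 3) -> str:
            """Condense the most recent notes into a snippet for the next chat."""
            recent = self.notes[-max_notes:]
            if not recent:
                return "No prior emotional context."
            parts = [f"{n.feeling} about {n.topic}" for n in recent]
            return "Previously the user seemed " + "; ".join(parts) + "."

    memory = EmotionalMemory()
    memory.record("frustrated", "a stalled grant application")
    print(memory.summary())
    ```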

    Hosted on Acast. See acast.com/privacy for more information.

    5 min
  • Supermemory: A Teenager's AI Memory Startup
    2025/10/03

    Have you ever wondered what it would take for AI to truly remember you?


    In this episode of "The Deep Dive," we explore a groundbreaking development in artificial intelligence: solving the long-term memory problem. Imagine interacting with an AI that doesn't just forget your previous conversations but remembers and builds on them, much like a human would. This is the vision behind a new startup called Supermemory, which has recently secured $2.6 million in seed funding from industry leaders like Google AI chief Jeff Dean and Cloudflare CTO Dane Knecht. The involvement of such prominent figures indicates the significance of tackling memory persistence in AI, a challenge that has long been a stumbling block for large language models.


    Meet Dhravya Shah, the 19-year-old prodigy behind Supermemory.


    Originally from Mumbai, Dhravya Shah's journey into the tech world began with a simple bot that formatted tweets into screenshots, which he sold to a company called Hypefury. This entrepreneurial spirit led him to the United States for university, and eventually to a pivotal role at Cloudflare. It was there that Shah's idea for Supermemory took shape, with encouragement from advisors like Dane Knecht. His innovative approach to AI memory caught the attention of tech leaders, prompting them to support his vision of creating a universal memory API that could revolutionize how AI interacts with data.


    Revolutionizing AI with a universal memory API.


    Supermemory's technology is designed to manage and utilize vast amounts of unstructured data—ranging from files and documents to chat logs and multimedia content—by extracting key insights and building a comprehensive knowledge graph. This approach not only enhances the AI's ability to recall information but also improves performance by structuring data into a relational map, allowing for more efficient and context-aware retrieval. As a result, applications can achieve deeper personalization and understanding. With backing from investors and early customers like a fintech company and an AI video editor, Supermemory is poised to lead a transformation in AI, turning forgetful tools into personalized partners that genuinely remember and understand user interactions over time. This shift could redefine the future of AI, making it an integral part of our daily lives.
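
    To make the knowledge-graph idea more tangible, here is a minimal conceptual sketch in Python: documents are reduced to (subject, relation, object) facts that can be recalled by entity. The class and method names are invented for the example and are not Supermemory's actual API.

    ```python
    # Conceptual memory layer: extracted facts form a tiny knowledge graph,
    # and recall returns human-readable statements for prompt context.
    from collections import defaultdict

    class MemoryGraph:
        def __init__(self):
            # subject -> list of (relation, object, source_id)
            self.edges = defaultdict(list)

        def add_fact(self, subject: str, relation: str, obj: str, source_id: str) -> None:
            """Store one extracted fact with a pointer back to its source document."""
            self.edges[subject].append((relation, obj, source_id))

        def recall(self, entity: str) -> list[str]:
            """Return everything known about an entity, ready to paste into a prompt."""
            return [f"{entity} {rel} {obj} (source: {src})"
                    for rel, obj, src in self.edges[entity]]

    graph = MemoryGraph()
    graph.add_fact("Dhravya Shah", "founded", "Supermemory", source_id="doc-001")
    graph.add_fact("Supermemory", "raised", "$2.6M in seed funding", source_id="doc-002")
    print(graph.recall("Supermemory"))
    ```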

    Hosted on Acast. See acast.com/privacy for more information.

    5 min
  • AMD and OpenAI Forge Massive AI Chip Partnership
    2025/10/01

    What does it take to power the future of AI?


    How much compute power is truly necessary to unlock AI's full potential? This episode delves into a groundbreaking partnership that could redefine the AI landscape. We're exploring the colossal deal between AMD and OpenAI, involving billions of dollars and, intriguingly, gigawatts of power. As we peel back the layers of this agreement, we uncover its staggering scale: a commitment to deploy 6 gigawatts' worth of AMD GPUs, drawing enough power to supply millions of homes. This episode challenges listeners to ponder the immense infrastructure required to fuel the ambitions of AI giants.
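
    To put 6 gigawatts in perspective, here is a quick back-of-the-envelope calculation in Python. The average household draw is our own assumption (roughly a US home using about 10,500 kWh per year), not a figure from the deal itself.

    ```python
    # Rough sanity check of "enough to power millions of homes".
    deal_power_w = 6e9                          # 6 GW of AI compute capacity
    avg_home_kwh_per_year = 10_500              # assumed average household consumption
    avg_home_draw_w = avg_home_kwh_per_year * 1000 / 8760  # ~1.2 kW continuous draw

    homes_equivalent = deal_power_w / avg_home_draw_w
    print(f"~{homes_equivalent / 1e6:.1f} million homes")  # roughly 5 million
    ```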


    Meet the key players behind the scenes


    Our guest today is an industry insider with deep insights into the tech world, particularly in the realm of artificial intelligence and computing power. They bring a wealth of knowledge about AMD's latest technological advancements and OpenAI's strategic maneuvers in securing resources from multiple tech giants. With a background in both hardware and software development, our guest provides a nuanced perspective on how these partnerships are shaping the future of AI and the potential risks and rewards involved.


    A high-stakes gamble in the AI arms race


    The episode provides a high-level overview of the AMD-OpenAI partnership, focusing on the technical and financial intricacies. AMD is betting big with its next-gen Instinct MI450 chips, scheduled for deployment in 2026, and OpenAI is poised to become a major AMD shareholder, contingent on AMD meeting ambitious performance targets. This deal is part of OpenAI's broader strategy, which includes commitments from Nvidia and Broadcom, and the massive Stargate initiative for data center expansion. The discussion highlights the intense competition and collaboration required to meet the astronomical compute demands of AI's future, as OpenAI diversifies its suppliers to avoid dependency.

    Hosted on Acast. See acast.com/privacy for more information.

    5 min
  • OpenAI DevDay 2025: vibe coding, AgentKit, and more
    2025/09/29

    What would you create if you had the power to build your own AI agent with just a few clicks?


    In this episode of the Deep Dive podcast, we explore OpenAI's groundbreaking announcements from DevDay 2025. The central theme of the event was "vibe coding," a concept that promises to revolutionize how we interact with technology by transforming ChatGPT from a simple chatbot into a comprehensive front-end development tool. With over 800 million weekly users, this evolution aims to make cutting-edge technology more accessible and visually immediate. Our mission today is to delve into the three main components driving this innovative idea: the new Apps SDK, AgentKit, and a significant change in content policy regarding mature experiences.


    Today, the conversation revolves around the latest initiatives from OpenAI, a leading AI research organization known for its advanced artificial intelligence models. The discussion highlights OpenAI's strategic moves to partner with major companies like Booking.com, Canva, Coursera, and Spotify, among others, showcasing the broad impact and potential of its new tools. These partnerships are set to enhance the user experience by embedding third-party apps directly into ChatGPT, allowing seamless interaction without leaving the chat interface.


    The episode provides a high-level overview of OpenAI's new offerings, focusing on their potential to democratize AI creation through vibe coding. The Apps SDK enables deep integration of third-party apps into ChatGPT, while AgentKit lets users build customized AI agents with ease, using a visual interface akin to Canva. However, the introduction of mature content raises important safety and governance questions. OpenAI plans to implement robust age verification tools before launching any 18+ features, highlighting the delicate balance between innovation and responsibility. As listeners ponder the possibilities of these advancements, they are left with a thought-provoking question: what AI agent would you create to simplify your life?
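
    Conceptually, "building an AI agent" comes down to wiring a model to a set of tools in a loop: the model chooses a tool, the tool runs, and the result is fed back until a final answer emerges. The sketch below illustrates that loop in plain Python; it is a generic outline, not the actual AgentKit or Apps SDK interfaces, and `ask_model` is a placeholder.

    ```python
    # Generic agent loop, independent of any particular SDK.
    # ask_model is a placeholder: it takes the conversation so far and returns
    # either {"final_answer": "..."} or {"tool": name, "input": "..."}.
    def run_agent(ask_model, tools: dict, question: str, max_steps: int = 5) -> str:
        context = [f"Question: {question}"]
        for _ in range(max_steps):
            decision = ask_model("\n".join(context))
            if "final_answer" in decision:
                return decision["final_answer"]
            result = tools[decision["tool"]](decision["input"])
            context.append(f"Tool {decision['tool']} returned: {result}")
        return "No answer within the step budget."
    ```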

    Hosted on Acast. See acast.com/privacy for more information.

    5 min
  • AI, data and decarbonized infrastructure
    2025/09/26

    Are we inadvertently fueling climate change with our digital habits?


    In this episode of "The Deep Dive," we explore a pressing paradox: the significant and growing energy and carbon footprint of AI and the data centers it requires. While digital technology is often heralded as a tool for decarbonizing the economy, its rapid expansion is proving unsustainable. In 2020, the digital sector alone accounted for nearly 4% of global greenhouse gas emissions, a figure projected to grow alarmingly if current trends continue. The conversation centers on the potential surge in data center energy consumption, which could climb from about 530 terawatt-hours (TWh) in 2023 to almost 1,500 TWh by 2030, with the corresponding emissions potentially reaching roughly double France's national total.
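
    The comparison with France can be sanity-checked with a rough calculation. The grid carbon intensity and France's national total used below are our own round-number assumptions (about 400 gCO2 per kWh and about 300 MtCO2e per year), not figures quoted in the episode.

    ```python
    # Back-of-the-envelope: emissions from ~1,500 TWh of data center electricity.
    projected_consumption_twh = 1_500      # projected data center demand in 2030
    grid_intensity_g_per_kwh = 400         # assumed average carbon intensity of supply

    emissions_mt_co2 = projected_consumption_twh * 1e9 * grid_intensity_g_per_kwh / 1e12
    france_total_mt_co2 = 300              # assumed order of magnitude for France
    print(f"~{emissions_mt_co2:.0f} MtCO2, "
          f"about {emissions_mt_co2 / france_total_mt_co2:.1f}x France's annual emissions")
    ```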


    The episode draws on insights from various sources to shed light on the topic. The discussion revolves around the implications of AI, particularly generative AI, for energy consumption. The host emphasizes the staggering adoption rates of technologies like ChatGPT, which reached 100 million users in just two months, illustrating the rapid pace at which these technologies are being integrated into daily life. This swift adoption raises questions about the sustainability of AI's energy demands, especially during the inference phase, which is projected to account for a significant share of AI's electricity consumption by 2030.


    The clash between AI growth and climate goals


    The episode delves into the conflict between the booming growth of generative AI and global climate objectives. As AI technologies expand, the demand for energy, often met by fossil fuels, is increasing faster than clean energy solutions can be deployed. This has led to the development of numerous gas-fired power plants specifically to power data centers. The local impact is particularly acute in places like Ireland, where data centers consume over 20% of the country's electricity, prompting a moratorium on new connections. Similarly, in France, data center energy demands threaten to overshadow the power needed for the country's energy transition goals. The episode concludes with a sobering reminder of the long-term consequences of delaying stricter efficiency standards and carbon budgets for the sector, potentially locking in significant emissions for decades.

    Hosted on Acast. See acast.com/privacy for more information.

    5 min
  • Nvidia’s $100B AGI bet on OpenAI
    2025/09/25

    What does a $100 billion investment in AI infrastructure mean for the future of superintelligence?


    In this episode, the hosts delve into a groundbreaking strategic partnership between Nvidia and OpenAI, marked by a potential investment from Nvidia of up to $100 billion. This monumental deal is not just about financial transactions; it's fundamentally about laying the physical groundwork necessary for OpenAI's pursuit of artificial general intelligence (AGI). The episode aims to cut through the headlines and big numbers to explore the strategic implications, the sheer physical scale, and the potential impact on the AGI timeline and the hardware market.


    The discussion revolves around the insights and implications of this partnership, featuring commentary from industry leaders like Jensen Huang, CEO of Nvidia. The hosts dissect the strategic elements of the deal, such as the deployment of 10 gigawatts of AI data centers and the use of Nvidia's Vera Rubin platform, designed for training massive models at the trillion-parameter scale. This detailed analysis provides listeners with a deep understanding of the technical and strategic complexities involved.
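
    For a sense of what "trillion-parameter scale" implies for hardware, here is a back-of-the-envelope memory estimate. The two bytes per parameter is a generic assumption (16-bit weights), not a detail of the Vera Rubin platform.

    ```python
    # Rough memory footprint of a one-trillion-parameter model.
    params = 1e12
    bytes_per_param = 2            # assuming 16-bit (bf16/fp16) weights
    weights_tb = params * bytes_per_param / 1e12
    print(f"~{weights_tb:.0f} TB for the weights alone")
    # Training adds gradients and optimizer state, multiplying this several times,
    # which is why such runs are spread across many thousands of accelerators.
    ```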


    The episode covers the broader implications of this partnership, emphasizing how it reshapes the power dynamics in the AI industry. By designating Nvidia as a preferred strategic partner, OpenAI is creating a symbiotic relationship that enhances Nvidia's dominance while ensuring OpenAI's access to cutting-edge technology. The discussion also touches on the geopolitical and economic ramifications of such unprecedented energy demands, raising questions about resource allocation and the future of the global economy as it adapts to the needs of AGI development.

    Hosted on Acast. See acast.com/privacy for more information.

    5 min
  • €1.7B raised for Mistral AI: the European champion welcomes ASML
    2025/09/23
    How does Europe's tech landscape shift with Mistral AI's groundbreaking funding?


    In this episode of the Deep Dive, we explore a significant development in the European tech scene: the remarkable funding round secured by the French startup Mistral AI. With an astonishing €1.7 billion raised, Mistral AI is now valued at €11.7 billion, making it France's first AI decacorn. But what does this mean for European digital sovereignty, and how might it reshape the continent's position in the global AI race? These are the questions we delve into as we unpack the implications of this record-breaking investment.


    The focus today is Mistral AI, a company founded in June 2023 by three researchers formerly of DeepMind and Meta: Arthur Mensch, Guillaume Lample, and Timothée Lacroix. Despite its recent inception, Mistral AI has rapidly gained recognition for its technical prowess. Its products, such as the chatbot Le Chat, are already considered serious competitors to OpenAI's offerings. A key factor in their appeal is the company's commitment to open weights, which provides transparency and avoids vendor lock-in, offering strategic autonomy that resonates with European businesses and governments.


    The episode examines the strategic partnership between Mistral AI and ASML, the Dutch tech giant that contributed €1.3 billion to the funding round, becoming a major shareholder. This collaboration is seen as a move towards building European digital sovereignty by reducing reliance on non-European providers. However, despite Mistral AI's impressive valuation, it still faces significant challenges compared to US giants like OpenAI and Anthropic. The episode concludes by pondering whether these strategic alliances can truly forge a sovereign path in the global AI landscape or whether the financial and infrastructural disparities are simply too vast to bridge.


    0:00:00 - Introduction and context

    0:00:25 - Record €1.7 billion funding round

    0:00:52 - First French AI decacorn

    0:01:63 - Deal structure and ASML’s role

    0:01:97 - Seat on ASML’s board of directors

    0:02:62 - Initial passion and Europe’s digital sovereignty strategy

    0:03:177 - Mistral’s technical strengths and open-source models

    0:04:214 - Mistral versus global competition

    0:04:297 - Maintaining strategic autonomy

    0:05:333 - Conclusion and Europe’s strategic stakes

    Hosted on Acast. See acast.com/privacy for more information.

    6 min
  • Anthropic's $13 Billion AI Expansion Fundraise
    2025/09/04

    Have you ever wondered why some AI companies suddenly leap ahead in the tech race, leaving others in the dust? This episode invites listeners to explore the meteoric rise of Anthropic, a company that recently secured a staggering $13 billion in funding, boosting its valuation to an astounding $183 billion. As the AI landscape shifts at an unprecedented pace, this episode delves into why Anthropic is capturing the attention of investors worldwide and why this moment is pivotal for the tech industry.


    We explore the story of Anthropic, a company founded by former OpenAI researchers, positioning itself as a counter-model in the AI space with a strong focus on AI safety and interpretability. Known for its strategic emphasis on enterprise clients, Anthropic is not just another AI player. The company has rapidly expanded its customer base, boasting over 300,000 business customers and a significant increase in large accounts. Their flagship product, Claude Code, an AI assistant for developers, has been a major contributor to their success, generating substantial revenue and showcasing impressive growth.


    This episode provides a high-level overview of Anthropic's recent achievements and strategic focus. With a commitment to reliability and trust, Anthropic is differentiating itself in a crowded market by emphasizing AI safety and interpretability. The episode also explores the complex realities of global scaling and investment, highlighting the challenges of balancing growth with core values. As Anthropic navigates this intricate landscape, it raises important questions about the future direction of the AI industry and the trust we place in these transformative technologies.


    00:00:00 - Episode introduction and presentation of Anthropic

    00:00:15 - Deep dive into Anthropic's massive funding

    00:00:35 - Why is this funding attracting so much attention?

    00:00:51 - Breakdown of Anthropic's valuation figures

    00:01:09 - Tripling of valuation in just a few months

    00:01:29 - Diversity of investors and their confidence

    00:02:02 - What truly sets Anthropic apart in the market

    00:02:41 - Growth in recurring revenue and major clients

    00:03:08 - Anthropic's role in the race with OpenAI

    00:03:52 - Ethical challenges and funding decisions

    00:04:40 - Complexity and nuances of rapid AI growth

    00:05:00 - Conclusions and implications for the future of the AI industry


    Hosted on Acast. See acast.com/privacy for more information.

    6 min