Episodes

  • 71: Quantum Computing Explained: Qubits, AI, and the Race to Build the Future with Jonas Kölzer
    2026/04/21
    Summary:
    The episode on quantum computing is structured as an accessible explainer for non-specialists, using rich analogies (a coin toss for superposition, the history of flight for the hardware race) while covering genuinely deep technical ground.

    Dr. Jonas Kölzer is a quantum physicist, entrepreneur, and educator whose career bridges deep research and public understanding of emerging technologies. After early enthusiasm for physics communication, he studied physics at RWTH Aachen University, where a lecture by Professor Hendrik Bluhm on spin qubits drew him into quantum computing research; he later specialized in topological insulators and completed his PhD while also helping launch Polarstern Education, the foundation for the School of Quantum. Today he works across quantum technology education and AI systems, and is known for explaining topics such as qubits, superposition, error correction, and quantum hardware architectures in clear, practical language for professionals and non-specialists alike.

    Key Takeaways:

    1. Quantum computing is in its "Wright Brothers moment". Just as early aviation saw a race between zeppelins, helicopters, and aircraft with no obvious winner, quantum computing hardware is in an analogous race between superconducting qubits, ion traps, photonic systems, spin qubits, and topological approaches. No single architecture has emerged as dominant — the best platform may depend on the specific application.

    2. Superposition + entanglement = exponential power. Superposition: a qubit can exist in a probabilistic mix of 0 and 1, like a coin spinning in the air before landing. Entanglement: multiple qubits become correlated, so changing one affects the others. The resulting combinatorial state space scales as 2^n (n = number of qubits), rapidly exceeding what any classical computer can simulate.

    3. Noise and error correction are the central engineering challenge. Quantum states are destroyed by even tiny energy perturbations — temperature fluctuations, cosmic particles. The no-cloning theorem means quantum information cannot simply be copied for error recovery. Current research focuses on error mitigation and logical-qubit error correction as the bridge to practical large-scale machines.

    4. Quantum computers are co-processors, not replacements. Today's quantum computers work alongside classical supercomputers in a hybrid loop: the quantum unit handles specific optimization or simulation tasks, while the classical system manages parameters and optimization. Full universal quantum computers remain a long-horizon aspiration.

    5. The quantum–AI relationship is bidirectional. Quantum hardware can accelerate certain AI workloads (the QPU ↔ GPU analogy), especially high-dimensional optimization, while classical AI (GPU clusters, e.g. Nvidia's quantum research program) is already being used to optimize and improve quantum systems. Companies like Nvidia are investing in quantum-GPU hybrid infrastructure.

    6. The total energy cost of quantum is nuanced. While a qubit chip operates at microwatt efficiency, the surrounding cooling infrastructure (helium-3, compressors, mechanical pumps) runs in the kilowatt range. The full total cost of ownership must be assessed honestly before claiming quantum as a "green" alternative to data-center AI compute.

    Chapters:

    0:04 Introduction and Background of the Episode
    3:50 Jonas' Early Interest in Physics
    4:46 Jonas' Introduction to Quantum Computing
    7:09 Quantum Mechanics and Computing
    8:55 Understanding Qubits and Superposition
    13:02 Challenges in Quantum Computing
    19:05 Designs and Paths in Quantum Computing
    27:12 Applications and Future of Quantum Computing

    Hyperlinks:

    LinkedIn Dr. Jonas Koelzer
    Article Nature Communications Materials (2021)
    Article Advanced Electronic Materials (2020)
    axelera.ai
    Anastassia Lauterbach - LinkedIn
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    57 min
  • 70: AI, Deepfakes & the Law: Do You Have the Right to Your Own Digital Identity? With Gabriela Bar
    2026/04/14
    Summary:
    Anastassia sits down with Dr. Gabriela Bar — attorney, PhD in Law, founder of Gabriela Bar Law & AI, and independent ethics adviser to European Commission AI projects — for a wide-ranging conversation about one of the most underexplored frontiers in law: what happens when the entities we create begin to resemble us, and the legal system has no vocabulary to respond.

    The conversation moves across three connected territories: the philosophy of legal personhood and whether AI could ever qualify for it; the alarming absence of real legal protection for individuals whose digital identities are weaponised through deepfakes and fabricated content; and the statistical reality of children's exposure to predatory behaviour in digital space.

    Key Takeaways:

    The Cheshire Cat theory reframes legal personhood entirely. Gabriela introduces the framework of Ngaire Naffine: legal personhood is not about souls, bodies, or divine origin — it is about the capacity to participate in legal relationships. This framework is exactly the right tool for thinking about advanced AI.

    The EU AI Act has a significant blind spot. The Act prohibits a defined list of AI practices. Non-consensual deepfakes — fabricated intimate images, false criminal scenarios, identity fabrication — are not on that list in any meaningful way. Gabriela's position is unambiguous: they should be banned outright, not merely regulated.

    Digital persona harm is a present crisis, not a future risk. Anastassia speaks from personal experience: during a period of intense and unjust media scrutiny, fabricated digital avatars of her were distributed publicly — a direct assault on her identity and dignity.

    More than 50% of children aged 9–16 have experienced predatory online contact. Data from a Polish governmental cybersecurity study shared by Gabriela shows that over half of children in that age group had experienced some form of contact with sexual predators online — not all of these contacts were severe, but many were. The gap between the sophistication of the tools and the simplicity of the safeguards is vast.

    Law is a fiction — and we choose which fictions to write. We can write new legal fictions that protect individuals from AI-generated harm and that extend narrow rights to sufficiently advanced AI.

    AI literacy must include legal literacy. Literacy is a must, and goes beyond fluency.

    Chapters:

    0:05 Introduction to the episode: Digital personhoods and digital identities
    3:21 Max Tegmark’s Book “Life 3.0” and AI Ethics
    4:06 Science Fiction (Blade Runner) influencing Gabriela’s thoughts on digital personas
    5:33 Digital Persona and Consciousness
    7:31 Legal Perspectives on AI Rights
    43:53 Cultural Perspectives on Legal Personhood

    Hyperlinks:

    Website: gabriela.bar — firm overview, fields of expertise, publications
    LinkedIn profile: linkedin.com/in/gabrielabar
    Academic & Professional Directories
    AILAWTECH Foundation profile: ailawtech.org/en/gabriela-bar
    Wolters Kluwer expert profile: wolterskluwer.com/pl-pl/experts/gabriela-bar
    YouTube — AI Legal Personhood: Should AI Eventually Have Legal Personhood?
    Ngaire Naffine Cheshire Cat Theory
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    41 min
  • 69: How AI Is Transforming Clinical Trials — Faster Recruitment, Smarter Medicine & What It Means for All of Us with Julio G. Martinez-Clark
    2026/04/07
    Summary:
    Anastassia and Julio unpack the evolving role of AI in healthcare, with a focus on clinical trials, patient identification, and medical education.

    Julio G. Martinez-Clark is an entrepreneur and clinical research strategist recognized for transforming global clinical trial operations across the MedTech, biopharma, and radiopharmaceutical sectors. He is the CEO of bioaccess®, where he champions quality and efficiency in clinical research throughout Latin America and beyond.

    Key insights:

    Clinical trials are the essential "bridge" between laboratory research and market approval, governed by regulatory bodies such as the FDA and EMA.
    The importance of generating trustworthy evidence, how credibility varies by trial location, and what regulators accept.
    The role of Contract Research Organizations (CROs) and the industry’s move toward outsourcing trial operations.
    AI enhances trial efficiency through proactive patient matching, diversity improvements, and the simplification of complex informed-consent documents.
    Privacy and regulations such as GDPR and HIPAA are critical — data is anonymized, and access is strictly controlled.
    AI reduces administrative burdens in regulatory processes by automating translation and simplifying communication for less-educated populations.
    Early-phase clinical studies benefit from AI’s ability to predict device safety, optimize protocols, and enable adaptive designs, significantly accelerating time-to-market.
    The democratization of AI — becoming as ubiquitous as electricity — signals the need for professionals to embrace this tool for better diagnostics, treatment, and research.
    Medical education must adapt by integrating AI literacy to prepare future doctors for a new landscape in which their roles encompass oversight, empathy, and advanced technical skills.
    AI’s ongoing integration raises questions about maintaining core human skills, trust, and the patient-doctor relationship amid automation.

    Chapters:

    00:06 Introduction to the episode about AI in clinical trials
    02:19 The importance of clinical trials in healthcare innovation
    04:58 The value chain of clinical trials: regulators, manufacturers, CROs, hospitals, and investigators
    07:34 Industry shifts: large pharma companies vs. smaller manufacturers and outsourcing trends
    11:39 Trust and credibility: geographical considerations in clinical data acceptance
    17:35 The critical role of diversity and local data in global trials
    18:47 Privacy regulations: GDPR, HIPAA, and anonymization practices
    19:48 How AI reduces regulatory and translation costs through automation and simplified communication
    24:32 The impact of AI in early-phase testing: safety prediction and protocol optimization
    28:19 The democratization of AI: from novelty to essential infrastructure
    31:10 Integrating AI into medical education for better diagnostics and future roles
    34:41 The future of medical professionals in an AI-enabled healthcare system
    37:25 The importance of empathy and human judgment alongside automation

    Hyperlinks:

    Julio's website
    Clinical research news about Julio
    LinkedIn post "AI Innovations in Clinical Trials" by Julio
    LinkedIn post "Transforming Global Clinical Trials: Key Insights from My Latest Podcast Appearance" by Julio
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    44 min
  • 68: Can an LLM Lie? Inside Large Language Models with AI Expert Sairam Sundaresan
    2026/03/31
    Summary:
    Anastassia and Sairam delve into the complexities of Large Language Models (LLMs), exploring their inner workings, practical applications for small business owners, and the ethical concerns surrounding their use. They discuss the phenomenon of hallucinations in LLMs, the potential of synthetic data, and the future of AI, including the quest for Artificial General Intelligence (AGI). Sairam shares insights on how small businesses can leverage LLMs effectively while addressing the importance of data quality and the implications of AI for society.

    Guest Bio — Sairam Sundaresan:
    Sairam Sundaresan is an AI engineer, educator, and author based in Chennai, India, with a Master's degree from the University of Michigan. He spent eight years at Qualcomm, working on groundbreaking computer vision and machine learning projects for multimedia applications — including real-time 3D reconstruction and cutting-edge object-tracking algorithms featured in Forbes. His work lives in the smartphones that billions of people use every day.

    Beyond engineering, Sairam is an educator at heart. He served for three years as a Machine Learning Lead and Mentor at the Frontier Development Lab, a prestigious research programme at the intersection of AI and space science — and the work of his team was personally recognised by Google CEO Sundar Pichai.

    Today, Sairam reaches a global audience through his widely read Gradient Ascent newsletter on Substack, where he breaks down complex AI concepts for curious non-technical readers, and through his book AI for the Rest of Us — a practical, jargon-free guide to understanding artificial intelligence that has made him one of the most trusted AI voices for everyday audiences worldwide.

    Takeaways:

    LLMs are a class of neural networks inspired by the human brain.
    They learn patterns from vast amounts of data to predict text.
    The deep learning revolution of 2012 enabled significant advancements in AI.
    Hallucinations in LLMs are a feature, not a bug, due to their predictive nature.
    Small business owners can use LLMs for organizing and content creation without needing extensive technical knowledge.
    Synthetic data can amplify errors and biases if not curated properly.
    The future of AI may involve integrating ontologies for better understanding and causality.
    AGI remains an amorphous concept, with no clear path to its realization.
    The need for ethical considerations in AI development is paramount, especially regarding data sourcing.
    AI developers are often motivated by a desire to improve human life and the planet.

    Chapters:

    0:05 Introduction to the episode and Sairam’s work
    4:28 Introduction to Large Language Models (LLMs)
    5:42 Understanding Neural Networks and Deep Learning
    8:18 Challenges and Opportunities with LLMs
    12:49 Practical Applications for Small Business Owners
    19:47 Ethical Considerations and Data Concerns
    32:51 Future of AI

    Hyperlinks:

    linkedin.com/in/sairam-sundaresan
    Gradient Ascent Newsletter: newsletter.artofsaience.com — weekly AI guide trusted by over 27,000 subscribers, including teams at Silicon Valley's top tech firms and academic labs
    Book — AI for the Rest of Us, Apple Books: books.apple.com/us/book/ai-for-the-rest-of-us/id6751973560
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    41 min
  • 67: Confidential AI, Speech Recognition, and Why AI Literacy Starts with Teachers with Giorgio Natili
    2026/03/24

    Summary:


    In this episode, Anastassia and Giorgio Natili discuss the importance of AI literacy, the evolution of speech recognition technology, and the challenges of ensuring data privacy and sovereignty in AI applications. They explore the concept of confidential AI, the need for responsible usage in education, and the future aspirations for AI explainability and funding allocation. The conversation emphasizes the necessity of understanding AI's limitations and the ethical implications of its deployment in various sectors.

    Giorgio Natili is an engineering leader, author, and community figure with over 20 years of experience in software engineering and technological innovation. He is currently Head of AI Engineering at Oracle Cloud, and previously Vice President and Head of Engineering at Opaque Systems, where he worked on confidential AI and secure data analytics platforms. Giorgio was previously the Head of Engineering for Firefox at Mozilla, Director of Software Engineering at Capital One, and a Software Development Manager at Amazon. Natili is also known for founding GNStudio, a Rome-based development studio, and being involved as a W3C member, author, and educator.

    In addition to his achievements in technology, Giorgio is an advocate for diversity, inclusion, and ethical leadership, and he has also spoken about his past as a professional windsurfer and DJ, emphasizing the human side of leadership.


    Takeaways:


    AI literacy is crucial for understanding the complexities of technology.

    Speech recognition has evolved significantly, but still faces challenges.

    Accents and environmental factors greatly impact transcription accuracy.

    Confidential AI focuses on maintaining data privacy and sovereignty.

    AI does not possess human-like understanding or reasoning capabilities.

    Responsible usage of AI is essential for protecting sensitive data.

    Prompt engineering can enhance the effectiveness of AI tools.

    AI can provide personalized learning experiences for students.

    Explainability in AI is necessary for safe and effective use.

    Funding for AI should prioritize explainability and safety over mere scaling.


    Chapters:

    

    0:00 Introduction to the episode: Who is our guest, and what will we learn today?

    1:54 Explainer on AI Literacy

    2:27 History of Speech Recognition

    3:22 Challenges in Speech-to-Text Technology

    7:26 Data and Model Limitations

    13:15 Confidential AI and Data Sovereignty concepts

    26:18 AI in Education and Responsible Usage

    39:02 Future of AI and Explainability



    36 min
  • 66: The EU AI Act Uncovered: Law, Ethics & Europe's Bet on Responsible AI with Gabriela Bar
    2026/03/17
    Summary:
    Gabriela Bar, a legal expert specializing in AI law and ethics, talks about how AI is shaping legal frameworks, societal perceptions, and technological innovations — especially within Europe and Poland. She discusses the importance of responsible AI governance, the evolving legal landscape, and the societal implications of AI deployment at scale. The discussion with Anastassia touches on the compliance costs of implementing the EU AI Act, practices for introducing national LLMs, and what constitutes responsible AI.

    Gabriela Bar is a prominent legal expert specializing in technology and artificial intelligence law, based in Poland. She has over 20 years of experience and is the founder of the Gabriela Bar Law & AI firm, serving as a legal and ethics advisor for EU technology projects focused on AI, digital law, and compliance with regulations such as the EU AI Act and GDPR. She is recognized among the TOP100 Women in AI in Poland and Forbes' 25 Women in Business Law, and is active in several international organizations dedicated to technology, digital ethics, and law. She frequently lectures at universities, publishes on AI law and ethics, and advises technology companies and research consortia on responsible and practical AI innovation.

    Key Topics:

    Gabriela’s journey from technology law to AI ethics and her ongoing work within European AI regulation.
    The rapid growth of AI adoption in Polish businesses and public-sector initiatives for language models.
    The challenges and opportunities of implementing responsible AI, including transparency, accountability, and bias mitigation.
    The role of AI legislation, with a focus on the European AI Act, regulatory costs, and how it balances innovation with safeguards.
    The global landscape of AI regulation, contrasting the EU's comprehensive approach with the US's decentralized system.
    Technical limitations of deep learning models, explainability, and the importance of aligning AI development with ethical principles.
    The future of AI in cybersecurity, digital personas, and the geopolitics of AI competitiveness among the US, EU, and China.

    Chapters:

    00:04 Introduction to Gabriela and AI in Poland
    02:55 How Gabriela transitioned from traditional law to technology and AI
    04:03 Cultural portrayals of AI and public perceptions influenced by movies and literature
    07:49 Misinformation and misconceptions about AI technology today
    09:17 The private sector’s role in AI development and application in Poland
    10:54 Demographic challenges in Poland and AI’s potential role in mitigating them
    13:45 Political and regulatory gaps in AI, and the importance of cross-sector integration
    15:38 The absence of national LLMs in languages like Japanese; success stories from other countries
    18:01 Foundations of responsible and ethical AI: core principles and risk management
    21:51 Data quality, biases, and ongoing governance in AI lifecycle management
    22:53 The flaws in deep learning transparency and the necessity for cautious regulation
    29:34 Legal accountability, the role of audits, and fairness in AI systems
    33:34 The evolving landscape of AI litigation and insurance implications
    36:14 Regulatory costs for AI companies and the competitive landscape in Europe
    39:03 The scope of the European AI Act and its impacts on high-risk sectors
    42:49 Cybersecurity risks involving AI, criminal misuse, and the importance of legal safeguards
    44:08 Europe's strategic imperative in AI sovereignty amid the global technology race
    46:39 The contrasting regulatory systems of the US and China and their influence on innovation
    51:17 The emerging need for regulation of digital personas and synthetic media
    51:35 Wrapping up: key takeaways and the importance of dialogue between tech developers, policymakers, and society

    Resources & Links:

    Gabriela Bar - LinkedIn | Twitter
    Anastassia Lauterbach - LinkedIn
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    Romy & Roby Book
    46 min
  • 65: From Narrow AI to AGI - Breakthroughs, Limits, and Sense of Purpose in AIs with Dr. Craig Kaplan
    2026/03/10
    Summary:
    Anastassia and Dr. Craig Kaplan delve into the complexities of artificial general intelligence (AGI) and the evolving landscape of AI technologies. Craig emphasizes the importance of defining AGI as an AI capable of performing any cognitive task as well as an average human, highlighting the challenges of achieving true general intelligence beyond narrow applications. They discuss the historical context of AI development, the shift from symbolic AI to machine learning, and the potential of collective intelligence as a more effective approach to building AGI. Craig advocates for a community of models rather than a single monolithic AI, suggesting that this could lead to safer and more ethical AI systems that reflect diverse human values. The conversation also touches on the limitations of current AI systems, particularly their lack of understanding of causality and reasoning. Craig argues that while AI might develop its own sense of purpose, it is crucial to instill positive human values early on to guide its development. The discussion concludes by emphasizing the importance of AI literacy and critical thinking, noting that human behavior and values will significantly shape the future of AI and its impact on society.

    Craig A. Kaplan is an artificial general intelligence (AGI) expert and entrepreneur who focuses on collective intelligence, safe superintelligence, and practical strategies for aligning advanced AI with human values and goals. He has founded and led multiple AI-related ventures, including iQ Company, which develops AI systems to enhance human decision-making, and previously PredictWallStreet, an early crowdsourced stock prediction platform. He speaks and writes about how to safely build and govern increasingly powerful AI systems.

    Takeaways:

    AGI is defined as AI that can perform any cognitive task as well as an average human.
    The shift from symbolic AI to machine learning in the 1960s and 1970s, followed later by big data and powerful semiconductors, enabled today’s AI revolution.
    Collective intelligence may offer a safer and more effective path to AGI; this includes the development of individual LLMs and models based on the values and perspectives of individual humans.
    Current AI systems lack an understanding of causality and reasoning.
    AI will develop its own sense of purpose, so instilling values early is crucial.
    AI literacy is imperative to building safe, transparent, and beneficial AI.

    Chapters:

    00:00 Introduction to the episode: Researching Artificial General Intelligence (AGI) and the work of Dr. Craig Kaplan
    2:06 Introduction to AGI and AI Definitions
    04:16 The Evolution of AI: From Symbolic to Machine Learning
    07:02 The Limitations of Current AI Systems
    14:01 Causality and Reasoning in AI
    19:38 The Collective Intelligence Approach to AGI
    26:46 The Future of AI: Transparency and Collaboration
    28:37 The Purpose of AI Collectives
    29:25 Utopia vs. Reality in AI Development
    30:49 The Risks of AI: Understanding P-Doom
    32:16 Human Values vs. AI Intelligence
    35:09 Fusing Humanities with AI Engineering
    37:40 The Role of Human Responsibility in AI
    40:22 The Evolution of AI Values
    44:59 The Bell Curve of Society and AI's Reflection
    47:42 Education and AI: Building a Better Future
    49:38 The Necessity of AI Literacy and Critical Thinking

    Hyperlinks:

    LinkedIn profile
    Orcid profile
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    49 min
  • 64: Unbreakable Backups - Decentralized Storage for Smart Systems with Murphy John
    2026/03/03
    Summary:
    The conversation focuses on decentralized cloud storage as an alternative to traditional hyperscale cloud providers, emphasizing security, privacy, cost, and resilience. It discusses the limitations of centralized cloud systems and how decentralized storage offers a more secure and distributed solution.

    Murphy John is Chief Growth Officer at StorX Network, a decentralized cloud storage platform (DePIN) built on blockchain technology to deliver secure, private, and cost-efficient data storage at scale. With a background in designing and managing internet and cloud infrastructure for large enterprises, banks, and financial institutions, he has over a decade of experience in building resilient, secure systems for mission-critical workloads. Since joining StorX in 2021, Murphy has led ecosystem development, strategic partnerships, and go-to-market initiatives, working closely with Web3, IoT, and AI partners to integrate StorX’s encrypted, geo-distributed storage into real-world applications. A strong advocate for data privacy and decentralization, he frequently speaks on how technologies such as encryption, data fragmentation, and distributed ledgers can protect organizations against ransomware, data misuse, and single points of failure in traditional cloud models.

    Key Takeaways:

    Centralized cloud issues: Traditional cloud systems face challenges in scalability, security, and cost.
    Decentralized storage benefits: Offers encrypted, distributed data storage with enhanced security and privacy.
    Ecosystem and governance: StorX operates a global network with incentives for node operators and AI-driven management.
    Real-world use cases: Includes healthcare data storage with geofencing and IoT data processing.
    Future outlook: Emphasizes education and adoption in a market dominated by legacy cloud players.

    Chapters:

    0:04 Introduction to the AI Literacy mission and the episode about decentralized storage
    3:11 Introduction and Market Context
    4:47 Traditional Cloud Promises and Limitations
    10:38 Decentralized Storage Architecture and Security
    22:04 Ecosystem, Node Operations, and AI Governance
    31:19 Use Cases and Regulatory Considerations
    39:39 Challenges and Future Outlook

    Hyperlinks:

    LinkedIn Murphy John
    StorX Website
    Stronger. Safer. Decentralized: StorX’s Guide to Cloud Storage vs. Backup
    Anastassia Lauterbach - LinkedIn
    First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
    First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
    AI Snacks with Romy and Roby
    @romyandroby
    “Leading Through Disruption”
    AI Edutainment
    The AI Imperative Book
    Romy & Roby Book
    Substack
    43 min