
Free Form AI

Author: Michael Berk

About this podcast

Free Form AI explores the landscape of machine learning and artificial intelligence with topics ranging from cutting-edge implementations to the philosophies of product development. Whether you're an engineer, researcher, or enthusiast, we cover career growth, real-world applications, and the evolving AI ecosystem in an open, free-flowing format. Join us for practical takeaways to navigate the ever-changing world of AI.

© 2025 Michael Berk
Episodes
  • Systematic Creativity: TRIZ, Knowledge Graphs and AI-Driven Innovation (E.26)
    2025/12/17

    What happens when creativity is treated not as intuition, but as a system that can be studied and scaled?

    In this episode of Free Form AI, Michael and Ben sit down with Nicolas Douard, Lead Data Scientist at the Virtue Foundation, to explore how AI and data science are being used to automate innovation itself. Drawing from Nicolas’ PhD research, the conversation examines TRIZ — a systematic framework for inventive problem solving — and how it can be augmented with modern AI techniques to connect ideas across disciplines.

    The discussion moves through biomimicry as a model for interdisciplinary discovery, the use of knowledge graphs to represent and traverse complex domains, and the role AI may play in accelerating scientific insight. Along the way, this conversation unpacks deeper questions about creativity, discovery and whether innovation can be meaningfully formalized without losing its human essence.

    Tune into episode 26 for a wide-ranging conversation about:

    • TRIZ as a structured methodology for inventive problem solving
    • Biomimicry as a blueprint for cross-disciplinary innovation
    • How knowledge graphs enable new forms of scientific reasoning
    • The role of AI in discovery, not just automation
    • Whether creativity can be systematized without being diminished

    Whether you work in data science, engineering or applied research, this episode offers a thoughtful look at how AI innovation itself might become a computable process.

    Note: This episode was released first on YouTube as part of Free Form AI’s video-first relaunch.


    1 hr 5 min
  • Inside the Codebase: Reviews, Testing and the Hidden Mechanics of Good Software (E.25)
    2025/11/13

    Ever wondered what senior engineers actually talk about behind closed doors?

    In this episode of Free Form AI, Michael and Ben open up those behind-closed-doors conversations: how real engineering teams review code, manage dependencies, keep tests reliable, and prevent their codebases from turning into chaos.

    Live and in real time, they break down the habits and workflows that make software durable: using reusable components to avoid reinvention, building integration tests that catch silent failures, choosing versioning strategies that won’t break downstream users, and writing documentation that actually accelerates collaboration.

    Tune into episode 25 for a wide-ranging conversation about:
    • What code reviews really accomplish
    • Why reusable components reduce long-term friction
    • How dependency management goes wrong (and how to keep it stable)
    • Why integration tests are the backbone of reliable software
    • How versioning choices shape releases
    • The role of clear documentation in team velocity
    • Why internal utilities need user-centric design
    • How clean codebases speed up onboarding and feedback

    If your work touches code, this episode gives you the kind of insight you’d normally only get sitting next to seasoned engineers at the office.

    1 hr 6 min
  • Beyond Intelligence: GPT-5, Explainability and the Ethics of AI Reasoning (E.24)
    2025/10/23

    What happens when AI stops generating answers and starts deciding what’s true?

    In this episode of Free Form AI, Michael Berk and Ben Wilson dive into GPT-5’s growing role as an interpreter of information — not just generating text, but analyzing news, assessing credibility, and shaping how we understand truth itself.

    They unpack how reasoning capabilities, source reliability, and human feedback intersect to build, or break, trust in AI systems. The conversation also examines the ethical stakes of explainability, the dangers of “sycophantic” AI behavior, and the future of intelligence in a market-driven ecosystem.

    Tune in to Episode 24 for a wide-ranging conversation about:
    • How GPT-5’s reasoning is redefining “understanding” in AI
    • Why explainability is critical for trust and transparency
    • The risks of AI echo chambers and feedback bias
    • The role of human judgment in AI alignment and evaluation
    • What it means for machines to become arbiters of truth

    Whether you build, study, or rely on AI systems, this episode will leave you questioning how far we’re willing to let our models think for us.

    42 min