
Future Solving


Author: Brian Evergreen

Overview

We're taught in school and at work to solve problems and leave things a little better than we found them. This podcast is for people who want to go beyond solving problems to setting a vision and solving for a bold & achievable future for their life, their team, their organization, their ecosystem, or their market. Because the future is not to be predicted but created: with vision, strategy, and bold leadership.


People have so many ideas about creating a better future, but how do you get leaders to show up in the room, make the decisions, invest the actual dollars, and do the work? That is Future Solving.


If you know me from my books, courses, or speaking, you know that I started working with Fortune 500 C-suite executive teams while at Microsoft in 2019 to develop their AI strategies, where I learned that the playbooks we've inherited from the industrial revolution and the era of Digital Transformation are not suited for the era of AI. The research and writing that followed led to my first book, Autonomous Transformation, where I introduced the idea of Future Solving.


I started this podcast to bring the conversations I've been having behind closed doors with the world's top executives and thinkers to people who want to hear honest conversations about the frameworks, examples, and stories of how others have set a vision and solved for a bold & achievable future in their lives and work.


So if you're looking for a new idea, a fresh perspective, or just a reminder that there are great minds out there working on creating a better future for all of us — join me on Future Solving!


Connect with me on Instagram @brian.evergreen or LinkedIn at @brianevergreen

Hosted on Acast. See acast.com/privacy for more information.

Brian Evergreen
Economics
Episodes
  • Reid Blackman on Ethical Nightmares
    2026/05/04

    Reid Blackman is a leading voice in AI ethics and governance with a background in philosophy and hands-on experience advising Fortune 500 companies on ethical AI. He is the author of Ethical Machines and the founder of Virtue, a consultancy that helps organizations operationalize ethical AI practices and manage risk. Reid previously served as the first Head of AI Ethics at Deloitte, and his work has been featured in Harvard Business Review, The Wall Street Journal, and MIT Technology Review.

    Reid and Brian sat down together and discussed his new book, The Ethical Nightmare Challenge. They talked through real-world examples of AI nightmares and how leaders can create the right governance and rhythm to proactively solve for these potential nightmares and significantly lower their likelihood. They discussed the importance of AI literacy, a structured approach for leaders to envision and score ethical and other nightmares, why and how legal and risk functions and processes need to evolve, how Reid would assist if he were airdropped into an ethical nightmare crisis, and so much more.


    Follow Reid:

    LinkedIn - https://www.linkedin.com/in/reid-blackman/

    Instagram - https://www.instagram.com/reid.blackman/

    YouTube - https://www.youtube.com/@reidblackman

    Podcast - https://www.ethicalmachinespodcast.com/

    📙 The Ethical Nightmare Challenge: https://amzn.to/42f20Kl

    Website - https://www.reidblackman.com/


    Future Solving:

    🟦 Buy Autonomous Transformation book here - https://a.co/d/fDs7GSJ

    🟦 Bring Future Solving to your organization: https://bit.ly/48Y5Qvm

    🟦 Subscribe to Future Solving on YouTube: https://www.youtube.com/@futuresolving

    🟦 Follow Brian on LinkedIn - https://www.linkedin.com/in/brianevergreen

    🟦 Follow Brian on Instagram - https://www.instagram.com/brian.evergreen


    53 min
  • Kim Scott on Being Candid in 2026
    2026/04/20

    Kim Scott is a leadership thinker and former Silicon Valley executive whose career spans Google, Apple, and advising CEOs. She is the author of the global bestseller Radical Candor, which introduced a widely adopted framework for giving feedback that “cares personally while challenging directly,” along with Just Work and Radical Respect, which extend her work into building more equitable workplaces. Kim co-founded Radical Candor, a global training company that has helped hundreds of organizations and tens of thousands of leaders build stronger, more honest cultures. Her work has been featured in Harvard Business Review, The New York Times, and The Wall Street Journal.

    Kim and Brian sat down together and discussed why we need to have more real human conversations in the era of AI, the fact that human communication is how we build relationships, and why we need to practice important conversations. Kim shared her advice for leaders on how to talk about the current economic and geopolitical climate, why we need to talk to people we disagree with, how she solved for the future of radical candor in organizations, how she would facilitate a conversation between two fiercely disagreeing leaders, and so much more.


    Content warning: This episode includes discussion of sexual assault.


    Follow Kim:

    LinkedIn - https://www.linkedin.com/in/kimm4

    Instagram - https://www.instagram.com/kimmalonescott

    YouTube - https://www.youtube.com/@RadicalCandor

    📚 Kim's Books: https://kimmalonescott.com/

    Website - https://www.radicalcandor.com/


    Future Solving:

    🟦 Buy Autonomous Transformation book here - https://a.co/d/fDs7GSJ

    🟦 Bring Future Solving to your organization: https://bit.ly/48Y5Qvm

    🟦 Subscribe to Future Solving on YouTube: https://www.youtube.com/@futuresolving

    🟦 Follow Brian on LinkedIn - https://www.linkedin.com/in/brianevergreen

    🟦 Follow Brian on Instagram - https://www.instagram.com/brian.evergreen


    57 min
  • Natalie Nixon on Creativity in the era of AI
    2026/04/13

    Natalie Nixon is a global authority on creativity with a background in anthropology, fashion, and academia, and she is a lifelong dancer. She has published two bestselling books on creativity and the future of work: The Creativity Leap and Move. Think. Rest. Natalie was shortlisted for the Thinkers50 Talent Award, selected for the Thinkers50 Radar class of 2024, and her work has been featured in Forbes, Fast Company, and Inc. Magazine.


    Natalie and Brian sat down together and discussed creativity in the era of AI, why it is essential for leaders to practice the discipline of creativity, and how creativity contributes to the top and bottom line. They talked about a technique called doodle sprints, spending time developing wonder as well as rigor, how we're in the middle of an imagination era, a new category of activities called "key performance experiences," how she would coach a group of leaders, skeptical of the importance of creativity and of their own capacity for it, on a business version of a baking show, and so much more.


    Follow Natalie:

    LinkedIn - https://www.linkedin.com/in/natalienixonphd

    Instagram - https://www.instagram.com/natwnixon

    YouTube - https://www.youtube.com/c/NatalieNixon

    📖 The Creativity Leap - https://www.amazon.com/Creativity-Leap-2nd-Curiosity-Improvisation/dp/B0FN3F19SR

    📙 Move. Think. Rest. - https://www.amazon.com/Move-Think-Rest-Productivity-Relationship/dp/0306835584

    Website - https://www.figure8thinking.com/


    Future Solving:

    🟦 Buy Autonomous Transformation book here - https://a.co/d/fDs7GSJ

    🟦 Bring Future Solving to your organization: https://bit.ly/48Y5Qvm

    🟦 Subscribe to Future Solving on YouTube: https://www.youtube.com/@futuresolving

    🟦 Follow Brian on LinkedIn - https://www.linkedin.com/in/brianevergreen

    🟦 Follow Brian on Instagram - https://www.instagram.com/brian.evergreen


    55 min
No reviews yet