• EP24 - Inside 200,000 AI Conversations: What Microsoft Tells Us About Work
    2025/09/09

    What if you could peek behind the curtain and see exactly how people are using AI at work right now? In this episode, we dive into an exclusive, unpublished study from Microsoft that does just that. Titled "Working with AI: Measuring the Occupational Implications of Generative AI," the paper analyzes 200,000 real, anonymized conversations between users and Microsoft Bing Copilot to uncover how AI is reshaping the workforce. This isn’t theoretical. This is actual, on-the-ground usage, rich with data, surprising insights, and implications for nearly every job you can imagine.

    We explore what people are really asking AI to help with and what the AI is actually doing in response. From writing and research to coaching and advising, the results may surprise you, especially the fact that in 40 percent of cases, what users wanted and what AI did were completely different tasks. The study maps these interactions to job roles using the O*NET occupational database, producing an “AI applicability score” that highlights which professions are most and least exposed to AI capabilities today. Spoiler: knowledge workers, communicators, and information professionals should pay close attention.

    Whether you’re a business leader, knowledge worker, or educator, this episode offers a grounded look at how generative AI is actually being used across different types of work. The findings show that AI’s current strengths lie in supporting tasks like writing, information gathering, and communication, while its direct performance is most visible in roles involving teaching, advising, or coaching. Physical and manual occupations remain less affected, for now, but even those show signs of interaction. By focusing on real-world data rather than predictions, the episode provides a more nuanced view of how AI is fitting into the workplace today.

    32 min
  • EP23 - Trust, Attitudes, and AI: What 48,000 People Around the World Really Think
    2025/09/02

    In this episode, we explore the results of a major new global study from the University of Melbourne and KPMG titled Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. Drawing on the views of more than 48,000 people across 47 countries, this research offers one of the most detailed snapshots to date of how AI is perceived, trusted, and used around the world. It examines differences between advanced and emerging economies, workplace adoption, student use in education, and the growing call for stronger governance.

    The conversation unpacks why emerging economies are leading the way in AI trust and uptake, and why advanced economies are showing more scepticism. It highlights the gap between public confidence in AI’s technical ability and concerns about its safety and ethical use. You will hear about patterns in workplace behaviour, from productivity gains to policy breaches, and how students are using AI both to enhance learning and, in some cases, to bypass it. The episode also discusses the widespread demand for stronger AI regulation, especially to counter misinformation.

    This discussion matters because it captures the reality of AI adoption beyond the headlines, showing both its opportunities and its risks. The findings reveal where trust is being built and where it is eroding, and why literacy, governance, and clear regulation are critical as adoption accelerates. Whether working in a business, leading a team, or studying in a university, understanding these trends can help in making informed decisions about how to engage with AI responsibly and effectively.

    25 min
  • EP22 - Why Pure AI Isn’t Enough for Understanding Human Behaviour: The Real Secret to Decision Prediction
    2025/08/26

    Think machines can predict your next move? Think again. In this episode, we dive into one of the most intriguing challenges at the crossroads of psychology, artificial intelligence, and business: can we truly predict what people will choose before they do? We explore the breakthrough research published in Nature Human Behaviour, spotlighting BEAST-GB, a revolutionary model that blends the best of behavioural science with cutting-edge machine learning. It’s not just another algorithm—it’s a new lens on how humans really make decisions.

    Join us as we unpack why pure AI and raw data alone keep falling short in predicting real human choices. Discover how BEAST-GB leverages psychological insight, cognitive biases, and decades of behavioural research to outperform even the smartest deep learning models. You’ll also hear how these ideas are powering new tools like Accurment, designed to help marketers and decision-makers move beyond guesswork and gut instinct, transforming messy human data into clear, actionable strategy.

    If you’re curious about the future of decision science, want to understand the secrets behind truly effective marketing, or just love uncovering what makes people tick, this episode is a must-listen. Hit play to find out why the smartest predictions don’t come from AI alone, but from a powerful partnership between machine learning and human understanding.

    30 min
  • EP21 - Can an AI Change Your Mind? A Meta-Analysis of 120 Experiments on AI Persuasion
    2025/08/19

    Is artificial intelligence the next master persuader? In this episode of Professor Insight Podcast, we dig into one of the most fascinating questions of the digital age: can AI agents actually out-persuade humans? Drawing on a landmark 2023 meta-analysis from the Journal of Communication, we explore the world of AI-powered chatbots, recommendation engines, and digital advisors, asking whether these technologies are quietly winning the battle for our beliefs, behaviors, and buying decisions.

    Discover what actually makes AI persuasive, how machines subtly nudge us, and why our resistance to algorithm-driven advice might not be as strong as we think. We break down the latest science, sharing stories, surprising findings, and actionable insights. You will also hear about the situations where the human touch still holds the edge and why that matters.

    Backed by nearly 90 studies and the experiences of more than 50,000 participants, this episode is your deep dive into the evolving psychology of influence. Whether you are a marketer, leader, educator, or just curious about how AI is shaping your everyday choices, you will find plenty here to challenge your assumptions. Tune in to see if you can really tell when you are being persuaded by a person or a machine.

    28 min
  • EP20 - Understanding Science Through LLMs? Beware of Generalisation Bias
    2025/08/12

    In this episode, we delve into one of the most interesting findings in the world of AI and science communication. A new study published in Royal Society Open Science, authored by Uwe Peters and Benjamin Chin-Yee, reveals a systematic problem in how large language models summarise scientific research. Even when prompted for accuracy, many LLMs, including the latest versions of ChatGPT, Claude, and DeepSeek, consistently overgeneralise research findings. They take cautious, specific claims and subtly turn them into broad statements that were never actually made in the original papers.

    This phenomenon, called generalisation bias, may not sound alarming at first, but its implications are massive. Imagine a clinical study that finds a treatment is effective in some patients being summarised as effective for all patients. Or nuanced scientific uncertainty being rewritten as confident advice. According to the study, AI-generated summaries are nearly five times more likely to contain these distortions than human-written summaries. And here’s the twist: the newer, more advanced models are often worse offenders than their predecessors.

    If you rely on AI tools to digest research, teach, communicate, or make decisions based on scientific evidence, this episode is essential listening. We unpack how and why this bias happens, explore its potential risks for science, education, medicine, and media, and share practical tips for working smarter with LLMs.

    24 min
  • EP19 - What MIT Discovered About Your Brain When Writing with AI
    2025/08/05

    What happens to your brain when you let AI do the thinking for you? In this episode, we explore a fascinating and widely discussed study out of the MIT Media Lab titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. This research takes us deep into the cognitive consequences of using large language models like ChatGPT for academic work. Using EEG headsets to monitor brain activity, the study reveals something few of us want to hear: repeated reliance on AI might make your work easier but your brain weaker.

    The researchers tracked students over four months as they wrote essays using either ChatGPT, a search engine, or no tools at all. The group that relied on AI not only showed weaker brain connectivity but also had trouble recalling their own work and claimed lower ownership of what they wrote. The paper introduces the concept of "cognitive debt"—the hidden cost of letting AI carry your intellectual load. Meanwhile, the "brain-only" group showed stronger neural activity, deeper memory retention, and a clearer sense of authorship.

    This is not an anti-AI message. It is a moment to pause and consider how these tools are reshaping our learning, our memory, and our sense of self. Whether you are a student, educator, business leader, or technologist, understanding the neuroscience behind AI use is more important than ever. In this episode, we unpack what the data tells us and what it means for the way we think, learn, and create in a world increasingly mediated by machines.

    25 min
  • EP18 - Double Standards: Why Your AI Use Feels More Honest Than Everyone Else’s
    2025/07/30

    Is using generative AI more acceptable when you do it yourself than when someone else does it? According to a new study from the Rotterdam School of Management, most of us think so. This episode dives into the fascinating research behind the paper “Acceptability Lies in the Eye of the Beholder”, which explores how we judge AI-assisted work differently depending on who is using the tech. The authors conducted nine studies with nearly 4,500 participants to unpack how we assess human versus AI contribution — and the results are both surprising and incredibly relevant.

    At the heart of the findings is a powerful bias. People believe they use AI tools like ChatGPT as a source of inspiration, while assuming others rely on them to outsource the heavy lifting. That difference in perception has real consequences. It affects how students are evaluated, how job applicants are judged, and how AI-generated content is viewed in marketing, education, and business. This isn't just about technology, it's about human psychology and the stories we tell ourselves about fairness, effort, and ownership.

    In this episode, we break down what the research uncovered, why it matters, and what it reveals about our evolving relationship with AI. Whether you’re a teacher, marketer, manager, or just curious about how AI is reshaping the rules of credit and creativity, this conversation offers insight into the silent double standards that shape our views.

    25 min
  • EP17 - The Dark Side of GenAI: Unpacking the Misuse of GenAI
    2025/07/23

    Generative AI has captured the world's attention for its power to create, accelerate, and enhance—but what happens when these same tools are misused? In this episode of the Professor Insight Podcast, we turn the spotlight to the darker side of GenAI. Drawing on a groundbreaking new paper from DeepMind titled Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data, we explore nearly 200 real-life cases where generative tools were used to deceive, defraud, and manipulate.

    From deepfake impersonations and non-consensual imagery to AI-powered phishing scams and fake social media botnets, the study reveals how GenAI is being exploited in ways that are surprisingly low-tech but highly effective. These misuse tactics often require little technical skill and rely on easy-to-access tools, making them more widespread and harder to track. We break down the taxonomy of tactics and the motivations behind them, including disinformation, monetisation, harassment, and even digital resurrection. This episode is a deep dive into how misuse is already reshaping online trust and public perception. We also discuss the broader implications for AI governance, content authenticity, and digital safety.

    46 min