
M365 Show Podcast


Author: Mirko Peters

About this content

Welcome to the M365 Show — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, the M365 Show brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer.



Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Copyright Mirko Peters / m365.Show
Politics & Government
Episodes
  • Why Your Power BI Query is BROKEN: The Hidden Order of Operations
    2025/11/05
Opening: The Lie Your Power BI Query Tells You

You think Power BI runs your query exactly as you wrote it. It doesn’t. It quietly reorders your steps like a bureaucrat with a clipboard—efficient, humorless, and entirely convinced it knows better than you. You ask it to filter first, then merge, then expand a column. Power BI nods politely, jots that down, and proceeds to do those steps in whatever internal order it feels like. The result? Your filters get ignored, refresh times stretch into geological eras, and you start doubting every dashboard you’ve ever published.

The truth hiding underneath your Applied Steps pane is that Power Query doesn’t actually execute those steps in the visual order you see. It’s a logical description, not a procedural recipe. Behind the scenes, there’s a hidden execution engine shuffling, deferring, and optimizing your operations. By the end of this, you’ll finally see why your query breaks—and how to make it obey you.

Section 1: The Illusion of Control – Logical vs. Physical Execution

Here’s the first myth to kill: the idea that Power Query executes your steps top to bottom like a loyal script reader. It doesn’t. Those “Applied Steps” you see on the right are nothing but a neatly labeled illusion. They represent the logical order—your narrative. But the physical execution order—what the engine actually does—is something else entirely. Think of it as filing taxes: you write things in sequence, but behind the curtain, an auditor reshuffles them according to whatever rules increase efficiency and reduce pain—for them, not for you.

Power Query is that auditor. It builds a dependency tree, not a checklist. Each step isn’t executed immediately; it’s defined. The engine looks at your query, figures out which steps rely on others, and schedules real execution later—often reordering those operations. When you hit Close & Apply, that’s when the theater starts. 
The M engine runs its optimized plan, sometimes skipping entire layers if it can fold logic back into the source system.

The visual order is comforting, like a child’s bedtime story—predictable and clean. But the real story is messier. A step you wrote early may execute last; another may never execute at all if no downstream transformation references it. Essentially, you’re writing declarative code that describes what you want, not how it’s performed. Sound familiar? Yes, it’s the same principle that underlies SQL.

In SQL, you write SELECT, then FROM, then WHERE, then maybe a GROUP BY and ORDER BY. But internally, the database flips it. The real order starts with FROM (gather data), then WHERE (filter), then GROUP BY (aggregate), then HAVING, finally SELECT, and only then ORDER BY. Power Query operates under a similar sleight of hand—it reads your instructions, nods, then rearranges them for optimal performance, or occasionally, catastrophic inefficiency.

Picture Power Query as a government department that “optimizes” paperwork by shuffling it between desks. You submit your forms labeled A through F; the department decides F actually needs to be processed first, C can be combined with D, and B—well, B is being “held for review.” Every applied step is that form, and M—the language behind Power Query—is the policy manual telling the clerk exactly how to ignore your preferred order in pursuit of internal efficiency.

Dependencies, not decoration, determine that order. If your custom column depends on a transformed column created two steps above, sure, those two will stay linked. But steps without direct dependencies can slide around. That’s why inserting an innocent filter early doesn’t always “filter early.” The optimizer might push it later—particularly if it detects that folding back to the source would be more efficient. 
In extreme cases, your early filter does nothing until the very end, after a million extra rows have already been fetched.

So when someone complains their filters “don’t work,” they’re not wrong—they just don’t understand when they work. M code only defines transformations. Actual execution happens when the engine requests data—often once, late, and in bulk. Everything before that? A list of intentions, not actions.

Understanding this logical-versus-physical divide is the first real step toward fixing “broken” Power BI queries. If the Applied Steps pane is the script, the engine is the director—rewriting scenes, reordering shots, and often cutting entire subplots you thought were essential. The result may still load, but it won’t perform well unless you understand the director’s vision. And that vision, my friend, is query folding.

Section 2: Query Folding – The Hidden Optimizer

Query folding is where Power Query reveals its true personality—an obsessive efficiency addict that prefers delegation to labor. In simple terms, folding means pushing your transformations back down to the source system—SQL Server, a Fabric ...
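The logical-versus-physical split described above can be sketched in a few lines of Python. This is a toy analogy, not the real M engine: every name here (`build_plan`, `optimize`, `execute`) is invented for illustration, and the "optimizer" applies only one simplified rule.

```python
# Conceptual sketch only: a toy "query plan" that, like Power Query's engine,
# treats steps as a description and reorders them before execution.
# None of these names come from the real M engine.

def build_plan(steps):
    """Steps are recorded, not executed -- a logical description."""
    return list(steps)

def optimize(plan):
    """Physical order: push 'filter' steps ahead of everything else
    (a crude stand-in for predicate pushdown / folding)."""
    filters = [s for s in plan if s[0] == "filter"]
    others = [s for s in plan if s[0] != "filter"]
    return filters + others  # filters run first, regardless of written order

def execute(plan, rows):
    """Execution happens once, late, and in the optimizer's order."""
    for op, fn in optimize(plan):
        rows = fn(rows)
    return rows

rows = [{"region": "EU", "qty": 5}, {"region": "US", "qty": 9}]
logical_order = build_plan([
    ("merge",  lambda rs: [dict(r, tax=0.2) for r in rs]),          # written first
    ("filter", lambda rs: [r for r in rs if r["region"] == "EU"]),  # written second
])

# The filter the user wrote *after* the merge actually runs *before* it.
print(execute(logical_order, rows))  # → [{'region': 'EU', 'qty': 5, 'tax': 0.2}]
```

The written (logical) order merges first and filters second; the executed (physical) order filters first, which is exactly the kind of silent reordering the episode describes.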
22 min
  • Your Fabric Data Model Is Lying To Copilot
    2025/11/05
Opening: The AI That Hallucinates Because You Taught It To

Copilot isn’t confused. It’s obedient. That cheerful paragraph it just wrote about your company’s nonexistent “stellar Q4 surge”? That wasn’t a glitch—it’s gospel according to your own badly wired data.

This is the “garbage in, confident out” effect—Microsoft Fabric’s polite way of saying, you trained your liar yourself. Copilot will happily hallucinate patterns because your tables whispered sweet inconsistencies into its prompt context.

Here’s what’s happening: you’ve got duplicate joins, missing semantics, and half-baked Medallion layers masquerading as truth. Then you call Copilot and ask for insights. It doesn’t reason; it rearranges. Fabric feeds it malformed metadata, and Copilot returns a lucid dream dressed as analysis.

Today I’ll show you why that happens, where your data model betrayed you, and how to rebuild it so Copilot stops inventing stories. By the end, you’ll have AI that’s accurate, explainable, and, at long last, trustworthy.

Section 1: The Illusion of Intelligence — Why Copilot Lies

People expect Copilot to know things. It doesn’t. It pattern‑matches from your metadata, context, and the brittle sense of “relationships” you’ve defined inside Fabric. You think you’re talking to intelligence; you’re actually talking to reflection. Give it ambiguity, and it mirrors that ambiguity straight back, only shinier.

Here’s the real problem. Most Fabric implementations treat schema design as an afterthought—fact tables joined on the wrong key, measures written inconsistently, descriptions missing entirely. Copilot reads this chaos like a child reading an unpunctuated sentence: it just guesses where the meaning should go. The result sounds coherent but may be critically wrong.

Say your Gold layer contains “Revenue” from one source and “Total Sales” from another, both unstandardized. Copilot sees similar column names and, in its infinite politeness, fuses them. 
You ask, “What was revenue last quarter?” It merges measures with mismatched granularity, produces an average across incompatible scales, and presents it to you with full confidence. The chart looks professional; the math is fiction.

The illusion comes from tone. Natural language feels like understanding, but Copilot’s natural responses only mask statistical mimicry. When you ask a question, the model doesn’t validate facts; it retrieves patterns—probable joins, plausible columns, digestible text. Without strict data lineage or semantic governance, it invents what it can’t infer. It is, in effect, your schema with stage presence.

Fabric compounds this illusion. Because data agents in Fabric pass context through metadata, any gaps in relationships—missing foreign keys, untagged dimensions, or ambiguous measure names—are treated as optional hints rather than mandates. The model fills those voids through pattern completion, not logic. You meant “join sales by region and date”? It might read “join sales to anything that smells geographic.” And the SQL it generates obligingly cooperates with that nonsense.

Users fall for it because the interface democratizes request syntax. You type a sentence. It returns a visual. You assume comprehension, but the model operates in statistical fog. The fewer constraints you define, the friendlier its lies become.

The key mental shift is this: Copilot is not an oracle. It has no epistemology, no concept of truth, only mirrors built from your metadata. It converts your data model into a linguistic probability space. Every structural flaw becomes a semantic hallucination. Where your schema is inconsistent, the AI hallucinates consistency that does not exist.

And the tragedy is predictable: executives make decisions based on fiction that feels validated because it came from Microsoft Fabric. If your Gold layer wobbles under inconsistent transformations, Copilot amplifies that wobble into confident storytelling. 
The model’s eloquence disguises your pipeline’s rot.

Think of Copilot as a reflection engine. Its intelligence begins and ends with the quality of your schema. If your joins are crooked, your lineage broken, or your semantics unclear, it reflects uncertainty as certainty. That’s why the cure begins not with prompt engineering but with architectural hygiene.

So if Copilot’s only as truthful as your architecture, let’s dissect where the rot begins.

Section 2: The Medallion Myth — When Bronze Pollutes Gold

Every data engineer recites the Medallion Architecture like scripture: Bronze, Silver, Gold. Raw, refined, reliable. In theory, it’s a pilgrimage from chaos to clarity—each layer scrubbing ambiguity until the data earns its halo of truth. In practice? Most people build a theme park slide where raw inconsistency takes an express ride from Bronze straight into Gold with nothing cleaned in between.

Let’s start at the bottom. Bronze is your landing zone—parquet ...
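The granularity trap described in this episode is easy to reproduce by hand. A minimal Python sketch with invented numbers: a per-day “Revenue” feed and a per-month “Total Sales” figure that a name-matching model pools into one average that means nothing.

```python
# Conceptual sketch (invented numbers, not a real Fabric workload):
# two sources a model treats as the same measure because the column
# names look alike -- one is daily, one monthly.

daily_revenue = [100, 120, 80]  # "Revenue": per-day figures for 3 days
monthly_sales = [300]           # "Total Sales": one figure for the same 3 days

# Naive fusion by name similarity: pool the values and average them.
pooled = daily_revenue + monthly_sales
naive_average = sum(pooled) / len(pooled)  # mixes incompatible granularity

# Correct handling keeps granularities separate before comparing totals.
true_total = sum(daily_revenue)  # 300 -- agrees with the monthly figure

print(naive_average)  # 150.0 -- looks plausible, means nothing
print(true_total)     # 300
```

Both sources describe the same 300 units of sales, yet the pooled average reports 150.0 with full confidence: structurally valid arithmetic over semantically incompatible inputs, which is exactly the failure mode attributed to Copilot above.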
24 min
  • The Secret to Power BI Project Success: 3 Non-Negotiable Steps
    2025/11/04
Opening: The Cost of Power BI Project Failure

Let’s discuss one of the great modern illusions of corporate analytics—what I like to call the “successful failure.” You’ve seen it before. A shiny Power BI rollout: dozens of dashboards, colorful charts everywhere, and executives proudly saying, “We’re a data‑driven organization now.” Then you ask a simple question—what changed because of these dashboards? Silence. Because beneath those visual fireworks, there’s no actual insight. Just decorative confusion.

Here’s the inconvenient number: industry analysts estimate that about sixty to seventy percent of business intelligence projects fail to meet their objectives—and Power BI projects are no exception. Think about that. Two out of three implementations end up as glorified report collections, not decision tools. They technically “work,” in the sense that data loads and charts render, but they don’t shape smarter decisions or faster actions. They become digital wallpaper.

The cause isn’t incompetence or lack of effort. It’s planning—or, more precisely, the lack of it. Most teams dive into building before they’ve agreed on what success even looks like. They start connecting data sources, designing visuals, maybe even arguing over color schemes—all before defining strategic purpose, validating data foundations, or establishing governance. It’s like cooking a five‑course meal while deciding the menu halfway through.

Real success in Power BI doesn’t come from templates or clever DAX formulas. It comes from planning discipline—specifically three non‑negotiable steps: define and contain scope, secure data quality, and implement governance from day one. Miss any one of these, and you’re not running an analytics project—you’re decorating a spreadsheet with extra steps. 
These three steps aren’t optional; they’re the dividing line between genuine intelligence and expensive nonsense masquerading as “insight.”

Section 1: Step 1 – Define and Contain Scope (Avoiding Scope Creep)

Power BI’s greatest strength—its flexibility—is also its most consistent saboteur. The tool invites creativity: anyone can drag a dataset into a visual and feel like a data scientist. But uncontrolled creativity quickly becomes anarchy. Scope creep isn’t a risk; it’s the natural state of Power BI when no one says no. You start with a simple dashboard for revenue trends, and three weeks later someone insists on integrating customer sentiment, product telemetry, and social media feeds, all because “it would be nice to see.” Nice doesn’t pay for itself.

Scope creep works like corrosion—it doesn’t explode, it accumulates. One new measure here, one extra dataset there, and soon your clean project turns into a labyrinth of mismatched visuals and phantom KPIs. The result isn’t insight but exhaustion. Analysts burn time reconciling data versions, executives lose confidence, and the timeline stretches like stale gum. Remember the research: in 2024 over half of Power BI initiatives experienced uncontrolled scope expansion, driving up cost and cycle time. It’s not because teams were lazy; it’s because they treated clarity as optional.

To contain it, you begin with ruthless definition. Hold a requirements workshop—yes, an actual meeting where people use words instead of coloring visuals. Start by asking one deceptively simple question: what decisions should this report enable? Not what data you have, but what business question needs answering. Every metric should trace back to that question. From there, convert business questions into measurable success metrics—quantifiable, unambiguous, and, ideally, testable at the end.

Next, specify deliverables in concrete terms. Outline exactly which dashboards, datasets, and features belong to scope. 
Use a simple scoping template—it forces discipline. Columns for objective, dataset, owner, visual type, update frequency, and acceptance criteria. Anything not listed there does not exist. If new desires appear later—and they will—those require a formal change request. A proper evaluation of time, cost, and risk turns “it would be nice to see” into “it will cost six more weeks.” That sentence saves careers.

Fast‑track or agile scoping methods can help maintain momentum without losing control. Break deliverables into iterative slices—one dashboard released, reviewed, and validated before the next begins. This creates a rhythm of feedback instead of a massive waterfall collapse. Each iteration answers, “Did this solve the stated business question?” If yes, proceed. If not, fix scope drift before scaling error. A disciplined iteration beats a chaotic sprint every time.

And—this may sound obvious but apparently isn’t—document everything. Power BI’s collaborative environment blurs accountability. When everyone can publish reports, no one owns them. Keep a simple record: who requested each dashboard, who approved ...
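The scoping template described in this episode can be captured in a few lines of code. A minimal sketch, assuming nothing beyond the column list named in the text; `ScopeItem` and `is_in_scope` are hypothetical names for illustration, not a Power BI feature.

```python
# Conceptual sketch of a scope register with the columns named in the episode:
# objective, dataset, owner, visual type, update frequency, acceptance criteria.
from dataclasses import dataclass

@dataclass
class ScopeItem:
    objective: str
    dataset: str
    owner: str
    visual_type: str
    update_frequency: str
    acceptance_criteria: str

scope = [
    ScopeItem("Track quarterly revenue trend", "Sales", "Finance team",
              "line chart", "daily", "Matches audited revenue within 1%"),
]

def is_in_scope(request_objective, scope_items):
    """Anything not listed in the register does not exist;
    new requests go through a formal change request instead."""
    return any(item.objective == request_objective for item in scope_items)

print(is_in_scope("Track quarterly revenue trend", scope))  # True
print(is_in_scope("Add social media sentiment", scope))     # False -> change request
```

The point is not the code but the discipline it enforces: a request either matches a registered objective with an owner and acceptance criteria, or it triggers the change-request path with its own time and cost estimate.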
24 min
No reviews yet