
Project Management Tips, Trends, and New Tools


Author: Andres Diaz

About this content

This is the podcast where Project Managers get updated. Every week we explore the latest trends, tools, methodologies, and news from the world of project management. If you lead teams, handle impossible deadlines, or simply love frameworks like Scrum, Kanban, or PMI, this is your space. Chats with experts, software analysis, productivity hacks, artificial intelligence applied to projects, and everything you need to stay one step ahead. Listen, learn, and improve your way of managing. Because in the world of projects, staying up-to-date changes everything.

Copyright 2025 Andres Diaz
Categories: Management, Management & Leadership, Economics
Episodes
  • Earned value with AI: Are you on schedule and within budget?
    2025/12/10
    Summary:
    - The episode introduces Earned Value Management (EVM) powered by AI, helping you see the truth about progress, cost, and schedule instead of rosy stories. It ties together what you planned (PV), what you've earned (EV), and what you've spent (AC) to measure performance and variances.
    - Key metrics:
      - Planned Value (PV): what you intended to do/spend by today.
      - Earned Value (EV): value of what's actually completed.
      - Actual Cost (AC): what you've spent.
      - Cost Variance (CV = EV − AC): negative means cost overrun.
      - Schedule Variance (SV = EV − PV): negative means behind schedule.
      - Cost Performance Index (CPI = EV/AC) and Schedule Performance Index (SPI = EV/PV): values above 1 are good, below 1 are warnings.
    - AI adds real-time foresight: by analyzing progress patterns, hours, changes in scope, and approvals, AI can forecast the finish date and likely total cost weeks in advance, and propose scenarios (replan, level resources, adjust sequences, prioritize critical tasks).
    - How to implement (step by step):
      1) Define a clear WBS with measurable deliverables and a Definition of Done.
      2) Set a realistic baseline (cost-distribution curve, planned % per period).
      3) Establish a progress-measurement rule per work package (all-or-nothing, milestones, or physical proportion).
      4) Centralize data sources (tasks, timesheets, purchases) in one repository; start simple with a spreadsheet.
      5) Activate an AI model to learn from history (or use the initial weeks to train it).
      6) Start a five-minute weekly review: cost index, schedule index, variances, trend, and recommended actions.
    - Example: PV=100, EV=80, AC=90 → SPI=0.80, CPI≈0.89 (coded in the sketch after this summary). If testing tends to exceed by 20% after requirements change, AI might forecast a 15% cost overrun unless you act (stabilize requirements, improve inputs, reorder tests).
    - Quick context: EV originated in aerospace/defense; it endures because it answers how much value you built for the money spent. AI helps detect a mid-project "point of no return" and warn early.
    - Trends: hybrid agile-predictive management, automated reporting, near real-time dashboards, and predictive cost-overrun analysis. EV can be measured per iteration to accommodate variable flow; if SPI drops below 0.90, replan the next iteration.
    - Mistakes to avoid:
      - Inflated progress (reporting hours, not deliverables).
      - Late data feeding the AI.
      - Moving the baseline to look better.
      - Ignoring scope; EV is about delivered scope, not only time/cost.
      - No thresholds or action triggers.
    - How to start this week (5-day plan): Day 1: list deliverables and define "done." Day 2: set a baseline with planned value by week. Day 3: build a dashboard showing PV, EV, and AC. Day 4: input progress and costs. Day 5: run a simple AI forecast to get a trend and projection.
    - Security/ethics: anonymize data when needed, control access to sensitive costs, and validate AI recommendations with judgment. If AI suggests something inconsistent with known facts, update the data and recalculate.
    - Goal: use AI-assisted EVM to make decisions one week earlier, improving cost/schedule control and grounding leadership conversations in data.
    - Final prompt to drive action: decide which deliverable you'll measure objectively today and what you'll do if SPI drops below 1 this week; document it, share it with the team, and start the five-minute review.
    - Closing: subscribe, share your thoughts, and tune your project so it becomes a measurable story rather than a mystery. Remember you can contact me at andresdiaz@bestmanagement.org
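    The metrics and the worked example above map directly onto a few lines of code. Here is a minimal Python sketch, assuming PV/EV/AC share the same currency units; the `evm_metrics` helper and the BAC figure are illustrative, not from the episode:

    ```python
    # Minimal sketch of the EVM arithmetic described in the episode.
    # The sample numbers are the episode's worked example (PV=100, EV=80, AC=90).

    def evm_metrics(pv: float, ev: float, ac: float) -> dict:
        """Compute the core Earned Value Management indicators."""
        return {
            "CV": ev - ac,    # Cost Variance: negative => cost overrun
            "SV": ev - pv,    # Schedule Variance: negative => behind schedule
            "CPI": ev / ac,   # Cost Performance Index: < 1 is a warning
            "SPI": ev / pv,   # Schedule Performance Index: < 1 is a warning
        }

    metrics = evm_metrics(pv=100, ev=80, ac=90)
    print(metrics)  # {'CV': -10, 'SV': -20, 'CPI': 0.888..., 'SPI': 0.8}

    # A common deterministic forecast, Estimate at Completion: EAC = BAC / CPI.
    bac = 500  # hypothetical Budget at Completion
    print("EAC:", bac / metrics["CPI"])  # 562.5 => ~12.5% projected overrun at this CPI
    ```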
    8 min
  • Critical path with AI: what is the most likely date?
    2025/11/26
    Summary:
    - Purpose: Explain how to estimate the most probable project end date by combining the critical path method, AI-adjusted estimates, and Monte Carlo simulations, so you can defend commitments with data rather than gut feelings.
    - Core ideas:
      - The finish date is a probability distribution, not a single number, and the critical path can shift as the project evolves.
      - Use a structured process to describe the project, account for uncertainty, and ground decisions in data.
    - Step-by-step method (a minimal simulation sketch follows this summary):
      1) Describe the project with a work breakdown structure (WBS) and a network diagram. For each task, collect three PERT estimates: optimistic (O), most likely (M), and pessimistic (P). Compute the initial expected duration as (O + 4M + P) / 6 to reduce optimism bias.
      2) Compute the traditional critical path, identify zero-slack activities, and note that the most worrisome task isn't always on the CP, yet often drives discussions.
      3) Add AI-adjusted durations: gather historical data (planned vs. actual durations, complexity, deliverable size, team experience, technical context, concurrent load) and train a simple model to predict an adjustment factor per task. Apply this factor to the PERT estimates before simulation.
      4) Run a Monte Carlo simulation: for each task, define a distribution from the three estimates and the AI adjustment. In each iteration, sample durations, recalculate the CP, and record the finish date. After thousands of iterations, you obtain a distribution of finish dates. The peak gives the most probable date; use percentiles (e.g., 80% for external commitments, 90% for critical contracts) to set commitments. Communicate with a confidence level rather than a single date.
    - Practical guidance: This week, add three estimates per task, include two contextual factors (e.g., complexity, team maturity), run the AI-based adjustment, and launch the simulation. Start collecting actual durations to improve the model over time.
    - Insights and cautions:
      - The CP often jumps as variability occurs; simple averages are poor predictors, while simulation captures these shifts.
      - Hofstadter's law ("everything takes longer than you think") still applies even when accounting for uncertainty.
      - Consider holidays, team capacity, external dependencies, and resource contention; AI can detect patterns (e.g., parallel reviews causing delays) and adjust durations accordingly.
    - Negotiation and communication:
      - Publish three dates from the simulation: (1) the most probable date, (2) the 80% confidence date, and (3) an internal alert date that triggers mitigation if the forecast drifts close to the target.
      - Include a sensitivity analysis (e.g., a tornado diagram) to show which tasks drive most of the variability, and explore which tasks to de-risk first.
    - Common mistakes to avoid: treating the CP as fixed, using ideal hours, underestimating approvals, ignoring integration time, or not updating the model with progress. Update weekly and lightly retrain the AI adjustment.
    - Cultural takeaway: Presenting dates with confidence levels reflects maturity and realism; many leaders already negotiate with probability curves. This approach is a precision tool, not a luxury.
    - Actionable challenge: In your next committee, present (1) the finish-date probability curve, (2) the top three drivers of variability and their mitigations, and (3) the gap between the target date and the 80% confidence date, with the cost to close it. If asked to "cut a week," respond with the data-driven implications.
    - Closing thought: The most probable date is designed through the CP, risk analysis, AI learning from context, and a probabilistic simulation that embraces uncertainty. The alternative is over-optimistic planning based on wishful thinking.
    - Sign-off reminder: subscribe, review, or share the episode. Remember you can contact me at andresdiaz@bestmanagement.org
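    To make the simulation step concrete, here is a minimal Python sketch under stated assumptions: a hypothetical four-task network, `random.triangular` as a simple stand-in for the Beta/PERT distribution, and no per-task AI adjustment factor (the episode applies one to the estimates before sampling):

    ```python
    # Monte Carlo finish-date sketch for a tiny task network (durations in days).
    import random
    import statistics

    # Each task: ((optimistic, most_likely, pessimistic), predecessors).
    tasks = {
        "A": ((2, 4, 8), []),
        "B": ((3, 5, 12), ["A"]),
        "C": ((1, 2, 4), ["A"]),
        "D": ((2, 3, 7), ["B", "C"]),
    }

    def pert_expected(o, m, p):
        # PERT expected duration from the episode: (O + 4M + P) / 6
        return (o + 4 * m + p) / 6

    def project_finish(durations):
        """Forward pass: earliest finish per task (dict order is topological here)."""
        finish = {}
        for name, (_, preds) in tasks.items():
            start = max((finish[p] for p in preds), default=0.0)
            finish[name] = start + durations[name]
        return max(finish.values())

    # Deterministic PERT baseline: a single number that hides the distribution.
    baseline = project_finish({t: pert_expected(*est) for t, (est, _) in tasks.items()})

    # Monte Carlo: sample every task, recompute the finish date thousands of times.
    finishes = []
    for _ in range(10_000):
        sampled = {t: random.triangular(o, p, m) for t, ((o, m, p), _) in tasks.items()}
        finishes.append(project_finish(sampled))

    finishes.sort()
    print(f"PERT baseline:  {baseline:.1f} days")
    print(f"Median finish:  {statistics.median(finishes):.1f} days")
    print(f"80% confidence: {finishes[int(0.80 * len(finishes))]:.1f} days")
    print(f"90% confidence: {finishes[int(0.90 * len(finishes))]:.1f} days")
    ```

    A real run would multiply each task's estimates by the trained model's adjustment factor before sampling and report calendar dates rather than raw day counts.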
    7 min
  • Prioritization in Kanban with AI: what comes first
    2025/10/08
    Summary:
    - The episode presents AI-powered Kanban prioritization as a way to decide what work to tackle first by focusing on delivering continuous value.
    - Prioritization should be data-driven: use a board with statuses (Pending, In Progress, Done) and score tasks on impact, cost of delay, dependencies, and size to avoid low-value work.
    - AI is framed as a co-pilot that suggests a ranking based on expected value, urgency, risk, and other criteria, while humans validate and adjust. Daily AI-driven recommendations can guide queue-review discussions, increasing clarity while preserving human decision-making.
    - Before adopting AI, teams should assess their data readiness and maturity to determine how well AI can be integrated.
    - How the AI works: define criteria (business value, customer impact, learning potential, dependencies, task size) and metrics (cost of delay, strategic alignment, delivery capacity); then apply a scoring model to produce a priority index for each task.
    - A practical, step-by-step starter guide (a sketch of the scoring model follows this summary):
      1) Define clear, measurable success criteria and risk-reduction goals.
      2) Clean the backlog: map dependencies, remove duplicates, estimate size.
      3) Create a prioritization score (e.g., value 40%, customer impact 30%, cost of delay 20%, dependency 10%).
      4) Feed the AI with project data and start with a two-week pilot.
      5) Add an "AI Priority" row on the board and maintain daily ordering.
      6) Conduct a short daily stand-up to validate rankings and move tasks to In Progress as needed.
      7) Measure results (delivery times, rework, customer satisfaction) and adjust weights/rules accordingly.
    - Fun fact: coupling value signals with human review yields sustained gains in speed and quality; AI speeds up conversations, but humans provide clarity and context.
    - Concrete example: AI accounts for cost of delay and dependencies, potentially elevating a high-value but large, dependent task over a seemingly simpler, independent one, leading to greater clarity and more deliberate prioritization.
    - Practical setup suggestion: add three columns ("AI Priority," "In Review," and "In Progress"). Include each task's expected value, cost of delay, size, dependencies, and target date; the AI ranks, the team validates, and daily decisions move tasks forward.
    - Audience prompts: Do you have a backlog ready for scoring? What would be lost by not prioritizing with AI? What value criteria and data do you currently have?
    - Goals: establish a clear method to start AI-powered prioritization, reduce waste, shorten delivery times, and enhance decision quality.
    - Closing: invites subscription and feedback for the podcast episode. Remember you can contact me at andresdiaz@bestmanagement.org
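    A minimal Python sketch of that scoring model, assuming 0-to-10 scores per criterion; only the weights come from the episode, and the backlog items are invented for illustration:

    ```python
    # Weighted prioritization score from step 3 of the starter guide:
    # value 40%, customer impact 30%, cost of delay 20%, dependency 10%.

    WEIGHTS = {"value": 0.40, "customer_impact": 0.30, "cost_of_delay": 0.20, "dependency": 0.10}

    # Hypothetical backlog; each criterion is scored 0-10 by the team.
    backlog = [
        {"task": "Checkout redesign", "value": 9, "customer_impact": 8, "cost_of_delay": 7, "dependency": 6},
        {"task": "Internal dashboard", "value": 5, "customer_impact": 3, "cost_of_delay": 2, "dependency": 1},
        {"task": "Payment provider fix", "value": 7, "customer_impact": 9, "cost_of_delay": 9, "dependency": 3},
    ]

    def priority_index(item: dict) -> float:
        """Weighted sum of criterion scores -> a single priority index per task."""
        return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

    # The ordering for the "AI Priority" column; the team validates it daily.
    for item in sorted(backlog, key=priority_index, reverse=True):
        print(f"{priority_index(item):4.1f}  {item['task']}")
    ```

    Note that a large but high-value, high-cost-of-delay task can outrank a smaller independent one, which is exactly the trade-off the episode's concrete example describes.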
    6 min
No reviews yet