Price Power

By Jacob Rushfinn

About this content

The Price Power Podcast covers all things growth, retention, and monetization for subscription mobile apps. We talk with leaders in the industry to share their knowledge with you. Hosted by Jacob Rushfinn, CEO of Botsi.

© 2025 Botsi Inc.

Categories: Marketing, Marketing & Sales, Economics
Episodes
  • 7: Ekaterina Gamsriegler: How to engineer growth. Again and again.
    2025/12/17

    - PricePowerPodcast.com
    - AI Pricing for your app: Botsi.com

    Ekaterina Gamsriegler (ex-Mimo, Amplitude Product 50 Top Growth Product Leader) breaks down why most growth teams struggle: not from a lack of ideas, but because they optimize the wrong things, in the wrong order.

    Ekaterina walks through real-world examples across onboarding, paywalls, trials, activation, and pricing — showing how user psychology, perceived value, and expectation-setting matter more than dashboards alone.

    📖 Episode Chapters:

    00:00 Growth Does Not Start with an MMP
    01:40 Breaking KPIs into Controllable Inputs
    03:56 Why “Breaking Things Down” Gets You 80% There
    06:30 Product Analytics vs Attribution
    12:00 Onboarding Length vs Paywall Exposure
    16:00 Why Averages Are Always Wrong
    18:10 The Truth About Personalization
    23:30 Why Users Don’t Start Trials
    28:30 Understanding Early Trial Cancellations
    34:45 Why Longer Sessions Can Be a Bad Sign
    38:00 Pricing as a Growth Lever
    42:00 Fix the Story Before the Price
    44:00 Closing Thoughts

    💡 Key Takeaways:

    • Growth is a sequencing problem. Teams fail when they jump straight to solutions instead of first building a usable map of user behavior and breaking metrics into their underlying drivers.

    • Product analytics beats attribution early. You don’t need a perfect funnel — you need a reliable picture of what users actually do after install. MMPs come later.

    • Averages hide the truth. Looking at overall conversion rates masks real issues that only appear when you segment by device, channel, geo, or user intent (a segmentation sketch follows this list).

    • More exposure ≠ more revenue. Increasing paywall impressions by removing onboarding screens often lowers trial conversion if user intent isn’t built first.

    • Personalization rarely delivers big wins. Most onboarding and paywall personalization produces single-digit uplifts while adding major complexity and risk.

    • Most early churn is voluntary. Users cancel trials early because they want control, not because they hate the product.

    • Time-to-value matters more than time-in-app. Longer sessions often mean confusion, not engagement.

    • Lowering prices can work — in specific cases. Misaligned mental price categories, lack of localization, missing feature parity, or mission-driven goals can justify it.

    • Pricing issues are often narrative issues. Before changing the price, fix how value is communicated and perceived.

    • Sustainable growth comes from focus. The best teams work on 2–3 high-confidence problems at a time — and say no to everything else.
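
    The segmentation takeaway above is mechanical enough to show in code. Below is a minimal pandas sketch; the column names and data are invented for illustration and are not from the episode.

    ```python
    # Toy example: a healthy-looking blended conversion rate hiding
    # broken segments. Column names and values are illustrative only.
    import pandas as pd

    events = pd.DataFrame({
        "channel":   ["meta", "meta", "google", "google", "organic", "organic", "meta", "google"],
        "geo":       ["US", "BR", "US", "BR", "US", "BR", "US", "BR"],
        "converted": [1, 0, 1, 0, 1, 0, 1, 0],
    })

    # The blended average looks fine...
    print("overall CVR:", events["converted"].mean())

    # ...while per-segment rates show exactly where the funnel breaks.
    by_segment = (
        events.groupby(["channel", "geo"])["converted"]
              .agg(["mean", "count"])
              .rename(columns={"mean": "cvr", "count": "users"})
    )
    print(by_segment.sort_values("cvr"))
    ```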

    Links & Resources Mentioned:

    • Ekaterina on LinkedIn: https://www.linkedin.com/in/ekaterina-shpadareva-gamsriegler/
    • Maven course: https://maven.com/mathemarketing/growing-mobile-subscription-apps
    • Full presentation from Growth Phestival Conference: https://www.canva.com/design/DAGw09v8yIo/lfVoi-Xf4QRm6-ddmtro1A/view
    • Jacob's Retention.Blog

    47 min
  • 6: Lucas Moscon: Conversion Values, SKAN, Fingerprinting, MMPs, and Mobile Attribution
    2025/12/04

    Lucas Moscon, one of the most technically knowledgeable people in mobile attribution, breaks down how post-ATT measurement really works, why most marketers are using outdated mental models, and how to build a modern, resilient measurement stack. Lucas clarifies what’s deterministic vs probabilistic today, exposes where MMPs still add value (and where they absolutely don’t), and explains why IP-based fingerprinting quietly powers 90%+ of attribution today. He also walks through SKAN in plain English, conversion-value strategy, web-to-app pipelines, and why looking at blended ROI beats chasing ROAS illusions on iOS.

    If you want to understand the actual mechanics behind click → install → revenue pipelines — and why Apple’s privacy tech is failing in practice — this episode is for you.

    What you’ll learn:

    • Why ATT didn’t “kill” attribution — it forced marketers to juggle deterministic, probabilistic, and blended layers
    • How Meta/Google matching actually works (spoiler: 90%+ relies on IP, not magic AI)
    • Why SKAN isn’t enough — and why relying on ROAS on iOS is the least trustworthy metric
    • How to measure effectively without over-reacting to noisy campaign-level data
    • When you truly need an MMP today — and why most apps don’t
    • How to correctly design conversion values for SKAN without over-engineering (a bit-packing sketch follows this list)
    • Why retention determines how many conversion values you even receive
    • How to triangulate data across store consoles, subscription platforms, MMPs, and ad networks
    • Why focusing on payback windows (D60–D180) outperforms optimizing for short-term ROAS
    • Why probabilistic fingerprinting is still powering the ad ecosystem — and why Apple hasn’t stopped it
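
    One note on the conversion-value item above: SKAdNetwork reports a single 6-bit conversion value (an integer from 0 to 63) per install, so "designing conversion values" largely means deciding how to pack funnel state into those bits. The layout below is an assumed example scheme, not the "63/62 strategy" discussed in the episode.

    ```python
    # Illustrative 6-bit SKAN conversion-value packing (values 0-63).
    # Assumed layout for this example: bit 5 flags a started trial,
    # bits 0-4 carry a capped engagement score.

    def encode_conversion_value(trial_started: bool, engagement_score: int) -> int:
        """Pack funnel state into the single 0-63 integer SKAN allows."""
        engagement = max(0, min(engagement_score, 31))  # clamp to 5 bits
        trial_bit = 32 if trial_started else 0          # bit 5
        return trial_bit | engagement

    def decode_conversion_value(value: int) -> tuple[bool, int]:
        """Recover (trial_started, engagement_score) from a postback."""
        return bool(value & 32), value & 31

    assert encode_conversion_value(True, 7) == 39
    assert decode_conversion_value(39) == (True, 7)
    ```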

    Key Takeaways:

    • iOS ROAS is the noisiest metric you can use. Without IDFA, everything is extrapolated. High-confidence decision-making must use blended revenue and cohort ROI, not ad-platform ROAS.

    • Modern attribution = multiple layers. Post-ATT, performance requires triangulating data from SKAN, ad networks, subscription platforms, and product analytics — not trusting a single source of truth.

    • Fingerprinting ≠ complex algorithms; it's mostly IP. Internal tests showed that more than 90% of probabilistic matches come from IP alone; the "advanced modeling" narratives are overstated. (A toy matching sketch follows this list.)

    • Most apps don’t need an MMP anymore. Exceptions: running AppLovin/Unity DSPs, React Native/Flutter SDK support gaps, or complex Web-to-App setups where Google requires certified links. Otherwise, MMPs mostly add cost, not clarity.

    • Retention determines SKAN visibility. If users don’t reopen the app, conversion values won’t update — meaning SKAN under-reports trials/purchases unless retention is strong.

    • Blend deterministic + probabilistic + aggregated signals. The goal isn’t precision — it’s directionally confident decisions across imperfect data. Marketers should work in ranges, not absolutes.

    • Longer payback windows unlock scale. Teams willing to accept D60–D180 payback dramatically out-spend competitors optimizing for D7 ROAS — assuming they have strong early-day proxies to detect failing cohorts.

    • MMPs don’t magically fix discrepancies. Even with one SDK, marketers still see mismatches across networks, stores, and internal analytics. The “one SDK solves it” narrative is outdated.
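
    To make the fingerprinting takeaway concrete: if more than 90% of probabilistic matches reduce to IP plus recency, the core logic is roughly the toy join below. Field names and the 30-minute window are assumptions for illustration; production systems layer on device signals and tie-breaking.

    ```python
    # Toy probabilistic matcher: attribute an install to the most recent
    # ad click from the same IP inside a time window. Illustrative only.
    from datetime import datetime, timedelta

    clicks = [
        {"ip": "203.0.113.7",  "campaign": "meta_us_video", "ts": datetime(2025, 12, 1, 10, 0)},
        {"ip": "203.0.113.7",  "campaign": "google_search", "ts": datetime(2025, 12, 1, 10, 20)},
        {"ip": "198.51.100.2", "campaign": "meta_us_video", "ts": datetime(2025, 12, 1, 9, 0)},
    ]

    def match_install(install_ip: str, install_ts: datetime,
                      window: timedelta = timedelta(minutes=30)):
        """Return the latest in-window click from the same IP, else None."""
        candidates = [
            c for c in clicks
            if c["ip"] == install_ip
            and timedelta(0) <= install_ts - c["ts"] <= window
        ]
        return max(candidates, key=lambda c: c["ts"]) if candidates else None

    print(match_install("203.0.113.7", datetime(2025, 12, 1, 10, 30)))
    # -> the 10:20 google_search click wins on recency
    ```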

    Links & Resources

    • Appstack: https://www.appstack.tech/
    • Appstack library of resources: https://appstack-library.notion.site/
    • Lucas Moscon LinkedIn: https://www.linkedin.com/in/lucas-moscon/

    00:00 Opening Hot Take: “Are You Really Saturating Meta?”
    05:00 Early Indicators & Proxy Metrics (D3–D10)
    09:00 Predicting Cohort Success from Day 3–10
    11:00 How Click → Install Attribution Actually Works
    14:00 Web-to-App Infrastructure (Fingerprinting + SDK Flow)
    18:00 Meta/Google Matching: IDFA, AEM, SKAN
    24:30 Fingerprinting Reality: Why IP = 90% of Matches
    27:00 Apple’s Privacy Messaging vs Actual Enforcement
    30:30 How Apple Ads Uses (or Ignores) SKAN
    35:00 Should You Use an MMP in 2025?
    46:00 SKAN Conversion Value Mapping: The 63/62 Strategy
    49:00 Why Retention Determines SKAN Postbacks
    54:00 App Stack Overview + Closing Thoughts

    56 min
  • 5: Barbara Galiza: 5 Golden Rules for Conversion Events
    2025/11/18
    Barbara Galiza (HER, Microsoft, WeTransfer, Mollie) breaks down how subscription apps should structure conversion events, clean up broken tracking, and send the right signals into Meta and Google to improve ROAS. She shares her five golden rules for event design, why most apps send far too many signals, and how speed, value, and PII massively improve match rates. We also cover predictive value (without overbuilding LTV models), why strategy failures masquerade as measurement problems, and how fast event sending boosts attribution quality across platforms.

    What you'll learn:

    • The optimal 3-event conversion structure for Meta/Google (and why tracking more hurts performance)
    • Why speed of event delivery is one of the strongest levers for match quality & cheaper CPAs
    • How to incorporate value signals (trial filters, buckets, predicted value) without full LTV modeling
    • Why using PII (hashed email/phone) dramatically improves attribution & optimization
    • How to separate measurement vs. optimization data so each system actually does its job
    • Lightweight ways to identify high-value users early and filter out low-quality trials
    • Why Meta-reported ROAS doesn't matter unless your business metrics move too
    • How to diagnose whether you have a strategy problem or a measurement problem
    • Why small apps should use holdouts & blended metrics instead of over-complicated attribution setups
    • How fast event sending helps platforms reconnect the full click → browser → app → purchase chain

    Key Takeaways:

    • Keep it to ~3 conversion events. Event tracking is "free," but every extra event adds maintenance, confusion, and breakage. For ad platforms, you rarely need more than a top-funnel/engagement event (e.g. survey completion), signup/registration (first PII), and trial start (the earliest strong revenue proxy).

    • Design the event ladder from value, not vanity. Early events show intent; signup lets you pass PII; trial start is the closest thing to revenue that usually falls inside platform lookback windows.

    • Fire events fast. The shorter the delay from click → event, the easier it is for Meta and others to probabilistically match user journeys. Even within a 24-hour window, "the faster, the better."

    • Include value data, but don't over-engineer LTV. For subscription apps, the actual charge often happens after the lookback window. You don't need a perfect 2-year LTV model; start by bucketing users (e.g. worth 0 / 5 / 10 / 20) based on early behavior and use that as a value signal (a bucketing sketch follows these notes).

    • Predictive value is about ranking users, not forecasting to the penny. The goal is: out of 100 trials, which ~30 are most likely to convert? Use early feature usage (first 24–48 hours), plan views, return sessions, and similar signals to distinguish high- vs low-value users.

    • If you don't send value, platforms optimize for cheap installs. Without a quality or revenue proxy, bid models will chase the lowest-CPI users, often low-intent segments like teens, at the expense of payers.

    • Deduplicate client + server events on purpose. If you send the same "signup" from multiple sources (SDK, MMP, CAPI), use a deduped "master" event for optimization and keep source-specific events for troubleshooting. Check that SDK_signup + CAPI_signup roughly add up to the unified event.

    • Pass PII where you legally can. Emails, login IDs, names, location, and device info (when allowed) greatly improve matching and attribution, especially now that IDFA and deterministic links are limited. Always align with privacy law and platform policies.

    • Separate optimization data from decision data. Events in Meta/Google exist primarily to help their algorithms optimize, not to give you perfect causal measurement. Use them for bidding and creative testing, but use incrementality tests and holistic metrics to decide budget allocation.

    • Don't mistake a strategy problem for a measurement problem. If you're a small app running many channels with tiny budgets and can't tell what works, the issue is fragmentation, not that you need fancier attribution.

    Links & Resources

    • Fix My Tracking: https://fixmytracking.com/
    • 021 Newsletter: https://www.021newsletter.com/
    • Barbara Galiza on LinkedIn: https://www.linkedin.com/in/barbara-galiza
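
    On the bucketing takeaway above: the goal is a coarse ranking signal for ad platforms, not a real LTV model. Below is a minimal Python sketch; the feature names and thresholds are invented for illustration and are not from the episode.

    ```python
    # Crude value-bucketing sketch: map early behavior to a coarse value
    # signal (0 / 5 / 10 / 20) that ad platforms can optimize toward.
    # Feature names and thresholds are illustrative assumptions.

    def value_bucket(sessions_first_48h: int, viewed_paywall: bool,
                     started_trial: bool) -> int:
        """Return a coarse predicted-value bucket for an early-stage user."""
        if started_trial and sessions_first_48h >= 3:
            return 20   # likely payer: trial plus strong early engagement
        if started_trial:
            return 10   # trial started, engagement still unproven
        if viewed_paywall or sessions_first_48h >= 2:
            return 5    # some intent signal, no trial yet
        return 0        # no meaningful signal; don't bid toward these users

    assert value_bucket(4, True, True) == 20
    assert value_bucket(0, False, False) == 0
    ```

    Sending even this crude value with your trial-start event gives bid models a payer proxy instead of letting them chase the cheapest installs.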
    45 min
No reviews yet