Episodes

  • Ep.7 cBottle: Climate in a Bottle - foundational AI weather prediction
    2025/08/05

    cBottle, developed by NVIDIA, is a generative, diffusion-based foundation model for the global atmosphere. It directly tackles the challenge of petabyte-scale climate simulation data, which is currently almost impossible to access and interact with easily because of immense storage and data-movement costs.

    The system works in two stages: a coarse-resolution generator that produces 100-kilometer global fields, followed by a super-resolution stage that upscales them to detailed 5-kilometer fields. The results are striking: cBottle achieves a 3000x compression ratio per sample over the raw data, encapsulating vast climate outputs in just a few gigabytes of neural-network weights. This enables low-latency generation of realistic kilometer-scale data on demand.
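    The two-stage pipeline can be sketched at the level of array shapes. This is a minimal illustration only: `coarse_generate` and `super_resolve` are hypothetical stand-ins for cBottle's diffusion stages, and a 20x upscaling factor is assumed for the 100 km to 5 km jump.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_generate(n_channels=4, grid=(180, 360)):
    # Stage 1 stand-in: a real diffusion model would iteratively denoise
    # toward a plausible ~100 km global field; Gaussian noise with the
    # right shape illustrates only the interface here.
    return rng.standard_normal((n_channels, *grid))

def super_resolve(coarse, factor=20):
    # Stage 2 stand-in: cBottle uses a learned super-resolution stage;
    # nearest-neighbour repetition stands in for it in this sketch.
    return coarse.repeat(factor, axis=1).repeat(factor, axis=2)

coarse = coarse_generate()    # shape (4, 180, 360): ~100 km grid
fine = super_resolve(coarse)  # shape (4, 3600, 7200): ~5 km grid
```

    The point of the sketch is that only the small model weights are stored; the kilometer-scale fields are regenerated whenever needed rather than archived.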

    Beyond mere emulation, cBottle demonstrates remarkable versatility. It can bridge different climate datasets like ERA5 and ICON, perform zero-shot bias correction, fill in missing or corrupted data channels (like fixing streaking artifacts in ERA5 radiation fields), and even generate spatio-temporally coherent weather sequences. By faithfully reproducing diurnal-to-seasonal scale variability, large-scale atmospheric modes, and even tropical cyclone statistics, cBottle is poised to transform climate informatics. It's a significant step towards building interactive digital twins of Earth, making high-fidelity climate projections accessible and usable for everyone.

    20 min
  • Ep.6 How to fine-tune a weather foundation model for hydrological variables?
    2025/06/30

    This research evaluates the performance of the Aurora weather foundation model by using lightweight decoders to predict hydrological and energy variables not included in its original training. The study highlights that this decoder-based approach significantly reduces training time and memory requirements compared to fine-tuning the entire model, while still achieving strong accuracy. A key finding is that decoder accuracy is influenced by the physical correlation between the new variables and those initially used for pretraining, suggesting that Aurora's latent space effectively captures meaningful physical relationships. The authors argue that the ability to extend foundation models to new variables without full fine-tuning is an important quality metric for Earth sciences, promoting accessibility for communities with limited computational resources. They conclude that rich latent space representations allow for accurate predictions of new variables using lightweight extensions, advocating for future foundation models that encompass a broad range of physical processes.
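    The decoder idea can be sketched with a frozen encoder and a purely linear decoder head. All names, shapes, and the least-squares training step below are illustrative assumptions, not Aurora's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a frozen pretrained encoder: it maps inputs
# to a fixed latent representation that is never updated.
W_frozen = 0.3 * rng.standard_normal((8, 16))
def frozen_encoder(x):
    return np.tanh(x @ W_frozen)

# A "new" target variable correlated with the pretrained inputs -- the
# paper's finding is that such physical correlation is what makes
# lightweight decoding work.
X = rng.standard_normal((200, 8))
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(200)

# Train only the lightweight decoder (here a least-squares linear map);
# the encoder's weights stay frozen throughout.
Z = frozen_encoder(X)
decoder, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred = Z @ decoder
```

    Because only the small decoder is trained, the memory and compute footprint is a fraction of full fine-tuning, which is the accessibility argument the authors make.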

    Reference:

    Lehmann, F., Ozdemir, F., Soja, B., Hoefler, T., Mishra, S., & Schemm, S. (2025). Finetuning a Weather Foundation Model with Lightweight Decoders for Unseen Physical Processes. arXiv preprint arXiv:2506.19088.

    10 min
  • Ep.5 What is a foundation model? - drawing from numerical simulation
    2025/06/03

    When we talk about foundation models, what are we talking about? This is a reflection on foundation models that draws an analogy to numerical solutions in fluid dynamics.

    The paper explores the challenges of building these models for science and engineering and introduces a promising framework, the Data-Driven Finite Element Method (DD-FEM), which aims to bridge traditional numerical methods and modern AI to give this exciting new field a rigorous foundation.

    Choi, Y., Cheung, S. W., Kim, Y., Tsai, P. H., Diaz, A. N., Zanardi, I., ... & Heinkenschloss, M. (2025). Defining Foundation Models for Computational Science: A Call for Clarity and Rigor. arXiv preprint arXiv:2505.22904.

    29 min
  • Ep.4 Any-to-any Earth Observation Generation and Thinking - TerraMind
    2025/05/07

    IBM recently released TerraMind, a first-of-its-kind any-to-any geospatial intelligence model. In this podcast, we feature this new generative model and explore its multi-modal capabilities. I believe there is a lot of potential in such a model.

    Jakubik, J., Yang, F., Blumenstiel, B., Scheurer, E., Sedona, R., Maurogiovanni, S., Bosmans, J., Dionelis, N., Marsocci, V., Kopp, N., Ramachandran, R., Fraccaro, P., Brunschwiler, T., Cavallaro, G., & Longépé, N. (2025). TerraMind: Large-Scale Generative Multimodality for Earth Observation. ArXiv. https://arxiv.org/abs/2504.11171

    24 min
  • Ep.3 Geospatial foundation model - Prithvi
    2025/04/24

    Today, we are featuring Prithvi, a geospatial foundation model produced by NASA and IBM and one of the first foundation models in this space.

    Trained on a large global dataset of NASA’s Harmonized Landsat and Sentinel-2 data, Prithvi-EO-2.0 demonstrates significant improvements over its predecessor by incorporating temporal and location embeddings. Through extensive benchmarking using GEO-Bench, it outperforms other prominent GFMs across various remote sensing tasks and resolutions, highlighting its versatility. Furthermore, the model has been successfully applied to real-world downstream tasks led by subject matter experts in areas such as disaster response, land use and crop mapping, and ecosystem dynamics monitoring, showcasing its practical utility. Emphasising a Trusted Open Science approach, Prithvi-EO-2.0 is made available on Hugging Face and IBM TerraTorch to facilitate community adoption and customization, aiming to overcome limitations of previous GFMs related to multi-temporality, validation, and ease of use for non-AI experts.

    23 min
  • Ep.2 AI models for flood forecasting - HydroGraphNet
    2025/04/15

    This research article introduces HydroGraphNet, a novel physics-informed graph neural network for improved flood forecasting. Traditional hydrodynamic models are computationally expensive, while machine learning alternatives often lack physical accuracy and interpretability. HydroGraphNet integrates the Kolmogorov–Arnold Network (KAN) to enhance model interpretability within an unstructured mesh framework. By embedding mass conservation laws into its training and using a specific architecture, the model achieves more physically consistent and accurate predictions. Validation on real-world flood data demonstrates significant reductions in prediction error and improvements in identifying major flood events compared to standard methods.
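    The mass-conservation idea can be illustrated as a toy penalty term added to the data loss. This is a sketch under simplified assumptions (uniform cell areas, scalar boundary fluxes); the function names and the form of the penalty are illustrative, not HydroGraphNet's actual formulation.

```python
import numpy as np

def mass_penalty(h_prev, h_pred, inflow, outflow, dt=1.0):
    # Stored volume change over the mesh should equal the net boundary
    # flux over the timestep; any mismatch is squared and penalised.
    storage_change = np.sum(h_pred - h_prev)
    net_flux = dt * (inflow - outflow)
    return (storage_change - net_flux) ** 2

def total_loss(h_pred, h_true, h_prev, inflow, outflow, lam=0.1):
    # Standard data misfit plus the physics term, weighted by lam.
    data_loss = np.mean((h_pred - h_true) ** 2)
    return data_loss + lam * mass_penalty(h_prev, h_pred, inflow, outflow)

# A perfectly mass-conserving prediction incurs zero physics penalty:
h_prev = np.zeros(4)
h_pred = np.full(4, 0.25)   # total storage change = 1.0
penalty = mass_penalty(h_prev, h_pred, inflow=1.0, outflow=0.0)
```

    Training against such a combined loss is what pushes the network toward physically consistent states rather than merely data-fitting ones.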

    Taghizadeh, M., Zandsalimi, Z., Nabian, M. A., Shafiee-Jood, M., & Alemazkoor, N. Interpretable physics-informed graph neural networks for flood forecasting. Computer-Aided Civil and Infrastructure Engineering. https://doi.org/10.1111/mice.13484

    22 min
  • Ep.1 AI models for weather forecasting
    2025/03/31

    We are featuring three papers:

    Mardani, M., Brenowitz, N., Cohen, Y., Pathak, J., Chen, C., Liu, C., Vahdat, A., Nabian, M. A., Ge, T., Subramaniam, A., Kashinath, K., Kautz, J., & Pritchard, M. (2025). Residual corrective diffusion modeling for km-scale atmospheric downscaling. Communications Earth & Environment, 6(1), 1-10. https://doi.org/10.1038/s43247-025-02042-5

    Price, I., Alet, F., Andersson, T. R., Masters, D., Ewalds, T., Stott, J., Mohamed, S., Battaglia, P., Lam, R., & Willson, M. (2025). Probabilistic weather forecasting with machine learning. Nature, 637(8044), 84-90. https://doi.org/10.1038/s41586-024-08252-9

    Lam, R., Sanchez-Gonzalez, A., Willson, M., Wirnsberger, P., Fortunato, M., Alet, F., Ravuri, S., Ewalds, T., Eaton-Rosen, Z., Hu, W., Merose, A., Hoyer, S., Holland, G., Vinyals, O., Stott, J., Pritzel, A., Mohamed, S., & Battaglia, P. (2023). Learning skillful medium-range global weather forecasting. Science. https://doi.org/10.1126/science.adi2336

    22 min