
Deep Dive in Research


Author: NotebookLM

About this content

Discussion about interesting research papers.
Episodes
  • Ellora: Standardized Recipes for LoRA and LLM Enhancement
    2025/12/05

    The text presents Ellora, a collection of standardized, production-ready methodologies ("recipes") for enhancing large language models (LLMs) through Low-Rank Adaptation (LoRA). The approach is motivated by LoRA's ability to match full fine-tuning performance while drastically cutting computational cost, training up to 10,000x fewer parameters. Ellora's recipes often rely on self-supervised methods such as the Magpie approach for data generation, and they show that combining parameter-efficient techniques with reinforcement learning yields significant speed and memory savings. The six structured recipes address diverse operational needs, including recovering model accuracy after quantization, extending context windows up to 2 million tokens, and teaching secure code generation; one recipe demonstrates a 97% vulnerability reduction through automated security analysis and Group Relative Policy Optimization (GRPO). Ultimately, Ellora provides concrete, reproducible templates that let practitioners maximize model capabilities efficiently without adopting new, complex training frameworks. (A minimal LoRA adapter sketch follows this entry.)


    7 min
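
    For readers who want to connect the episode to code, the following is a minimal sketch of attaching a LoRA adapter to a causal language model with the Hugging Face peft library. The base model ID, rank, and target modules are illustrative placeholders, not settings taken from Ellora's recipes.

    ```python
    # Minimal sketch: attach a LoRA adapter with peft (illustrative settings only).
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder base model

    lora_config = LoraConfig(
        r=16,                                 # low-rank dimension of the adapter matrices
        lora_alpha=32,                        # scaling factor applied to the adapter output
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()  # typically a small fraction of the base model's parameters
    ```
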
  • The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix
    2025/11/25

    Today's podcast is based on an article on the Hugging Face blog detailing an extensive research project that tackles the high cost and scale of training modern large language models. Through more than 50 systematic experiments, the authors sought an optimal data-mixing strategy that would let a GPT-2 model match the performance of models trained on ten times as much data. Their central finding is that a static dataset mix of 50% finePDFs, 30% DCLM-baseline, and 20% FineWeb-Edu significantly outperforms more complex curriculum-learning approaches, which often led to catastrophic forgetting or overfitting. This optimal 50-30-20 mixture trained a GPT-2-70M model to over 90% of the original GPT-2's benchmark performance while using substantially fewer resources. The key takeaway is that dataset quality and intelligent composition matter more than sheer quantity for training efficient language models. (A short sketch of a static dataset mix follows this entry.)


    Read the full article at https://huggingface.co/blog/codelion/optimal-dataset-mixing

    7 min
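
    The static 50/30/20 mix described above can be approximated with the datasets library's interleave_datasets, as in the sketch below. The Hub dataset identifiers are assumed stand-ins and may not match the exact IDs used in the article.

    ```python
    # Minimal sketch: a static 50/30/20 streaming mix (dataset IDs are placeholders).
    from datasets import load_dataset, interleave_datasets

    finepdfs = load_dataset("HuggingFaceFW/finepdfs", split="train", streaming=True)
    dclm = load_dataset("mlfoundations/dclm-baseline-1.0", split="train", streaming=True)
    fineweb_edu = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)

    # Fixed sampling probabilities instead of a curriculum schedule.
    mixed = interleave_datasets(
        [finepdfs, dclm, fineweb_edu],
        probabilities=[0.5, 0.3, 0.2],
        seed=42,
        stopping_strategy="all_exhausted",
    )

    first = next(iter(mixed))  # pull one example to confirm the mixed stream works
    print(sorted(first.keys()))
    ```
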
  • Unsupervised Model Improvement Through Internal Coherence Maximization
    2025/08/04

    https://huggingface.co/blog/codelion/internal-coherence-maximization

    The article presents Internal Coherence Maximization (ICM) combined with Direct Preference Optimization (DPO), a method for improving large language models (LLMs) that operates without any human supervision. This unsupervised approach outperforms traditional human-supervised methods such as Group Relative Policy Optimization (GRPO) on mathematical reasoning tasks. Key contributions include a complete implementation of ICM with diverse solution generation and a pipeline that converts ICM results into preference pairs for DPO training. The research also shows successful cross-model capability transfer, where knowledge from a stronger model (Qwen3) improves a weaker one (Gemma3), offering a scalable and cost-effective alternative to current LLM alignment paradigms. The authors emphasize that pretrained models already possess rich understanding, and ICM+DPO offers a way to elicit and refine this internal coherence, yielding better performance without the bottleneck of human annotation. (A hypothetical sketch of the ICM-to-DPO preference-pair step follows this entry.)

    7 min
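
    The pipeline that turns ICM results into DPO training data essentially produces (prompt, chosen, rejected) triples. The helper below is a hypothetical sketch of that conversion step, not the authors' implementation; the coherence field and data layout are assumptions.

    ```python
    # Hypothetical sketch: convert ICM-scored solutions into DPO preference pairs.
    from typing import Dict, List


    def build_preference_pairs(samples: Dict[str, List[dict]]) -> List[dict]:
        """Pair each prompt's most coherent solution (chosen) with its least coherent (rejected)."""
        pairs = []
        for prompt, solutions in samples.items():
            if len(solutions) < 2:
                continue  # need at least two solutions to form a preference pair
            ranked = sorted(solutions, key=lambda s: s["coherence"], reverse=True)
            pairs.append({
                "prompt": prompt,
                "chosen": ranked[0]["text"],     # highest internal-coherence score
                "rejected": ranked[-1]["text"],  # lowest internal-coherence score
            })
        return pairs


    # Toy usage with made-up scores; real scores would come from the ICM procedure.
    demo = {
        "What is 17 * 24?": [
            {"text": "17 * 24 = 408", "coherence": 0.92},
            {"text": "17 * 24 = 398", "coherence": 0.31},
        ],
    }
    print(build_preference_pairs(demo))
    ```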