Datawhale Share
Views from Emad Mostaque; compiled by Datawhale
The video transcript follows:
Distillation is nothing new, and there's no real way to stop it at the model level.
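To make the term concrete: "distillation" classically means training a smaller student model to match a larger teacher's output distribution, usually via a KL-divergence loss on temperature-softened probabilities. The sketch below is a minimal, self-contained illustration of that objective in plain Python; the function names and toy logits are my own, not anything from the DeepSeek or OpenAI systems discussed here.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's: the classic Hinton-style distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student prediction
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs (near-)zero loss;
# a disagreeing student incurs a positive loss.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # ~0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive
```

The point Mostaque is making follows from this setup: any model whose outputs are publicly queryable can serve as the teacher, which is why distillation cannot really be prevented downstream.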
But if you actually look at what the paper says, and what's reasonable, they have this version, R1-Zero, that created its own data.
And what is this similar to? It's similar to AlphaGo, AlphaGo Zero, and MuZero, the reinforcement learning models that outperformed humans at Go.
In fact, you could feel like maybe we're all Lee Sedol, right? Like the AI is coming for all of our expertise.
It's inevitable that that will happen, but I don't think they deliberately went in and did that, because OpenAI's o1 produces these cutting-edge outputs while omitting the chain-of-thought reasoning step.
We've now seen that the chain-of-thought reasoning from R1, and from the new Gemini Flash Thinking, the Google model that's currently at the top of the leaderboard, is what you really need if you want to optimize this process.
So I think they actually created their own synthetic data.
Editor's note: the logic of this passage is that OpenAI's o1 model shows its final outputs, but the intermediate chain-of-thought reasoning is hidden, so training data containing that reasoning simply isn't available through conventional scraping. Emad Mostaque's point is that R1 therefore had to generate its own synthetic data with explicit reasoning steps (via reinforcement learning or self-supervised mechanisms that mimic step-by-step human derivation), which is why it isn't plagiarism.
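One simple way to picture self-generated reasoning data is rejection sampling against an automatic verifier: sample many reasoning traces, keep only those whose final answer checks out, and train on the survivors. This is a deliberately simplified illustration of the idea, not the actual R1-Zero training procedure (which uses a reinforcement learning objective); `sample_cot` is a hypothetical stub standing in for a real language model.

```python
import random

def sample_cot(question, rng):
    """Hypothetical stand-in for a base model: given a question (here,
    an addition problem), sample a reasoning trace and a final answer.
    The model sometimes reasons its way to a wrong answer."""
    a, b = question
    steps = f"compute {a} + {b} step by step"
    answer = a + b if rng.random() < 0.7 else a + b + 1
    return steps, answer

def build_self_training_set(questions, samples_per_q=8, seed=0):
    """Sample many traces per question, keep only those whose final
    answer an automatic checker verifies, and return the survivors
    as synthetic (question, chain-of-thought, answer) training data."""
    rng = random.Random(seed)
    kept = []
    for q in questions:
        for _ in range(samples_per_q):
            cot, ans = sample_cot(q, rng)
            if ans == q[0] + q[1]:  # automatic verifier
                kept.append((q, cot, ans))
    return kept

data = build_self_training_set([(2, 3), (10, 7)])
print(len(data), "verified traces")
```

Because the verifier filters the samples, every kept trace ends in a correct answer, so no human-labeled (or teacher-labeled) reasoning data is required.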
But as they train on all of the Internet, there will be some OpenAI data in there. We've even seen that with Llama and Gemini and others.
Sometimes you ask, "Who made you?" and the answer is "OpenAI."
Because it has absorbed so many of those strings.
See also, on LLM distillation: "It turns out these top models are all distilled!"
And on what model distillation is: "A plain-language explanation of model distillation."