Yann LeCun Leaves Meta, Challenges the LLM Paradigm
The Takeaway: LLMs excel at language tasks but are not a viable path to human-level intelligence; the future lies in world models and JEPA-style architectures that predict action consequences in abstract representation space.
Yann LeCun, Meta's former Chief AI Scientist and Turing Award winner, has left the company to found Ami Labs, a venture focused on "AI for the real world." He argues that reality is too high-dimensional, continuous, and noisy for autoregressive token prediction. World models instead enable agents to plan via search and optimization rather than next-token guessing. JEPA (Joint Embedding Predictive Architecture) learns rich representations without generating pixels, avoiding the pitfalls of generative models like VAEs.
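The core JEPA idea described above can be sketched in a few lines: encode both the context and the target observation, predict the target's *representation* (not its pixels) from the context representation and an action, and measure the loss in that abstract space. This is a minimal numpy illustration, not LeCun's actual architecture; the linear encoder, the dimensions, and all function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Map an observation to an abstract representation (one linear layer + tanh)."""
    return np.tanh(x @ W)

def predictor(z_context, action, V):
    """Predict the target representation from the context representation and an action."""
    return np.concatenate([z_context, action]) @ V

# Toy dimensions: 8-dim observations, 4-dim embeddings, 2-dim actions.
obs_dim, emb_dim, act_dim = 8, 4, 2
W = rng.normal(size=(obs_dim, emb_dim)) * 0.1
V = rng.normal(size=(emb_dim + act_dim, emb_dim)) * 0.1

x_context = rng.normal(size=obs_dim)   # current observation
x_target = rng.normal(size=obs_dim)    # next observation
action = rng.normal(size=act_dim)

z_context = encoder(x_context, W)
z_target = encoder(x_target, W)        # real JEPAs use a slow-moving target encoder
z_pred = predictor(z_context, action, V)

# The loss lives in representation space -- no pixel reconstruction anywhere.
loss = np.mean((z_pred - z_target) ** 2)
print(loss)
```

The point of the design is what is absent: there is no decoder back to pixel space, so the model never wastes capacity predicting unpredictable low-level detail, which is the pitfall LeCun attributes to generative approaches like VAEs.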
LeCun highlights that imitation learning in robotics demands massive data and generalizes poorly, while world models promise zero-shot task solving with far greater data efficiency. He also introduced Tapestry, a federated open foundation model platform for cultural and linguistic sovereignty. On safety, he views current LLMs as intrinsically unsafe due to hallucinations and their inability to predict consequences, advocating objective-driven AI with explicit world models. LeCun remains bullish: "Five years, complete world domination" for this approach. A memorable line: "LLMs are great for what they do. They're just not a path towards human-level or human-like intelligence."
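Planning with a world model, as opposed to imitating demonstrations, can be sketched as search over candidate action sequences: roll each sequence through the model and keep the one whose predicted outcome best matches the goal. The random-shooting planner below is an illustrative sketch with toy additive dynamics, not anything from the article; the `world_model` and `cost` functions are stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(1)

def world_model(state, action):
    """Toy learned dynamics: the next state moves a small step along the action."""
    return state + 0.1 * action

def cost(state, goal):
    """Task objective: squared distance of the final state from the goal."""
    return np.sum((state - goal) ** 2)

def plan(state, goal, horizon=5, candidates=256):
    """Random-shooting planner: sample action sequences, roll out the model, keep the best."""
    best_seq, best_cost = None, np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        s = state
        for a in seq:
            s = world_model(s, a)       # imagine the consequences, no real interaction
        c = cost(s, goal)
        if c < best_cost:
            best_cost, best_seq = c, seq
    return best_seq, best_cost

start = np.zeros(2)
goal = np.array([0.3, -0.2])
seq, c = plan(start, goal)
print(c)
```

Because the search happens entirely inside the model's imagination, a new goal needs no new demonstrations, which is the data-efficiency and zero-shot argument LeCun makes against imitation learning.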