Industry News

Epoch-ILP Webinar

Einsums, Fibertrees and Dataflow: Architecture for the Post-Moore Era


With the slowdown of Moore's Law, the rapid rise of AI, and the explosive growth of large-scale models, the traditional approach of relying on process scaling and general-purpose processor improvements can no longer meet future demands for massive compute and energy efficiency. At the same time, conventional chip architectures struggle to exploit the high degree of sparsity in data, which leads to wasted compute resources.


On May 15, the Epoch Foundation will host an online keynote by Prof. Joel S. Emer of MIT CSAIL and the Department of Electrical Engineering and Computer Science, who is also a Senior Distinguished Research Scientist at Nvidia, on the topic "Einsums, Fibertrees and Dataflow: Architecture for the Post-Moore Era". Prof. Emer will introduce a general accelerator design framework that uses Einstein summation notation (Einsums) and an abstract model (fibertrees) to systematically describe sparse tensor computation, dataflow design, and data representation, helping participants grasp architecture design trends in the post-Moore era and offering fresh thinking for future high-performance, low-power computing architectures.
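For readers new to the notation, the following is a minimal sketch (using NumPy's einsum, and not taken from the talk materials) of how an ordinary matrix multiplication can be written as an Einsum:

    import numpy as np

    # Z[m, n] = sum over k of A[m, k] * B[k, n], written as an Einsum.
    A = np.random.rand(4, 3)
    B = np.random.rand(3, 5)

    # 'mk,kn->mn' names the ranks of each operand; the shared rank k does not
    # appear on the right-hand side, so it is contracted (summed) away.
    Z = np.einsum('mk,kn->mn', A, B)

    assert np.allclose(Z, A @ B)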


This event is specially open to members of the Taiwan Semiconductor Association. Please register by Tuesday, May 6 (registration link). The speaker's biography and the agenda are attached; we warmly welcome your participation.



【About the Speaker】

Prof. Joel S. Emer is a professor in the MIT Department of Electrical Engineering and Computer Science and CSAIL, and a Senior Distinguished Research Scientist at Nvidia, where he explores future processor architectures and develops modeling and performance-analysis methods. With nearly 50 years of experience, Prof. Emer has laid the theoretical and practical foundations for innovative architectures in the post-Moore era. He has long focused on research and advanced development in processor microarchitecture, contributed to the architecture of several VAX, Alpha, and x86 processors, and is recognized as one of the key developers of quantitative methods for evaluating processor performance. His contributions span deep-learning accelerator design, spatial and parallel architectures, processor reliability analysis, memory dependence prediction, pipeline and cache design, and simultaneous multithreading.


Before joining Nvidia, Prof. Emer was an Intel Fellow and Director of Microarchitecture Research at Intel, and previously held positions at Compaq and Digital Equipment Corporation (DEC). He is a Fellow of the IEEE and the ACM and a member of the National Academy of Engineering (NAE), and has received the Eckert-Mauchly Award and the B. Ramakrishnan Rau Award, among the highest honors in computer architecture, with distinguished accomplishments in both academia and industry.


Special Epoch Webinar Series

Topic: Einsums, Fibertrees and Dataflow: Architecture for the Post-Moore Era

Time: 2025.05.15 (Thu.) 9:00-10:00 (Taipei time)

Venue: Zoom meeting room

Registration: please sign up via the registration link

Note: The Epoch Foundation reserves the right to modify the event and to review participant eligibility.


Talk Title: Einsums, Fibertrees and Dataflow: Architecture for the Post-Moore Era


Abstract:

Over the past few years, efforts to address the challenges of the end of Moore's Law have led to a significant rise in domain-specific accelerators. Many of these accelerators target tensor algebraic computations, and even more specifically computations on sparse tensors. To exploit that sparsity, these accelerators employ a wide variety of novel solutions to achieve good performance. At the same time, prior work on sparse accelerators does not systematically express this full range of design features, making it difficult to understand the impact of each design choice and to compare or extend the state of the art.


In an analogous fashion to our prior work that categorized DNN dataflows into patterns like weight stationary and output stationary, this talk will try to provide a systematic approach to characterizing the range of sparse tensor accelerators. Thus, rather than presenting a single specific combination of a dataflow and a concrete data representation, I will present a generalized framework for describing computations, dataflows, the manipulation of sparse (and dense) tensor operands, and data representation options. In this framework, the separation of concerns is intended to make designs easier to understand and to facilitate exploration of the wide design space of tensor accelerators. Within this framework, I will present a description of computations using an extension of the Einstein summation notation (Einsums) and a format-agnostic abstraction for sparse tensors, called fibertrees. Using the fibertree abstraction, one can express a wide variety of concrete data representations, each with its own advantages and disadvantages. Furthermore, by adding a set of operators for activities like traversal and merging of tensors, the fibertree notation can be used to express dataflows independently of the concrete data representation used for the tensor operands. Thus, using this common language, I will show how to describe a variety of sparse tensor accelerator designs and, ultimately, our state-of-the-art transformer accelerator.
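As a rough illustration of the fibertree idea (the data layout and names below are a simplified sketch, not the notation or tools used in the talk), a sparse matrix can be viewed as a fiber of (coordinate, payload) pairs at each rank, and a dataflow can be written purely against that abstract view:

    # A fibertree-style view of a sparse 2-D tensor: the top rank M is a fiber
    # of (m-coordinate, fiber) pairs, and each lower fiber holds
    # (k-coordinate, value) pairs. Only nonzero coordinates are stored.
    A_fibertree = [
        (0, [(1, 3.0), (4, 2.0)]),   # row 0 has nonzeros at k = 1 and k = 4
        (2, [(0, 5.0)]),             # row 2 has a single nonzero at k = 0
    ]

    b = [1.0, 2.0, 0.0, 0.0, 4.0]    # a dense vector, indexed by k

    # Dataflow for y[m] = sum over k of A[m, k] * b[k], written against the
    # fibertree: traverse the M fiber, then walk each K fiber's nonzeros.
    y = {}
    for m, k_fiber in A_fibertree:
        total = 0.0
        for k, value in k_fiber:
            total += value * b[k]
        y[m] = total

    print(y)   # {0: 14.0, 2: 5.0}

Because only nonzero coordinates are stored and traversed, the same loop structure applies regardless of which concrete format (compressed rows, coordinate lists, bitmaps, and so on) backs each fiber, which is the kind of separation between dataflow and data representation the abstract describes.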

The Epoch Foundation cordially invites you to attend.

Project contacts:

Ivory Hsia (夏煒璵), Program Director │+886-2-2511-2678 ext. 22│ivory@epoch.org.tw

Wen Hsu (徐雯瑄), Project Manager │+886-2-2511-2678 ext. 10│wen@epoch.org.tw

Judy Chang (張雅筑), Program Specialist │+886-2-2511-2678 ext. 13│judy@epoch.org.tw