We engineer tomorrow to build a better future.
Solutions to your liquid cooling challenges.
NVIDIA Blackwell Platform Arrives to Power a New Era of Computing
March 18, 2024


- New Blackwell GPU, NVLink and Resilience Technologies Enable Trillion-Parameter-Scale AI Models
- New Tensor Cores and TensorRT-LLM Compiler Reduce LLM Inference Operating Cost and Energy by up to 25x
- New Accelerators Enable Breakthroughs in Data Processing, Engineering Simulation, Electronic Design Automation, Computer-Aided Drug Design and Quantum Computing
- Widespread Adoption by Every Major Cloud Provider, Server Maker and Leading AI Company

GTC—Powering a new era of computing, NVIDIA today announced that the NVIDIA Blackwell platform has arrived — enabling organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor.

The Blackwell GPU architecture features six transformative technologies for accelerated computing, which will help unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing and generative AI — all emerging industry opportunities for NVIDIA.

“For three decades we’ve pursued accelerated computing, with the goal of enabling transformative breakthroughs like deep learning and AI,” said Jensen Huang, founder and CEO of NVIDIA. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry.”

Among the many organizations expected to adopt Blackwell are Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and xAI.

Sundar Pichai, CEO of Alphabet and Google: “Scaling services like Search and Gmail to billions of users has taught us a lot about managing compute infrastructure. As we enter the AI platform shift, we continue to invest deeply in infrastructure for our own products and services, and for our Cloud customers. We are fortunate to have a longstanding partnership with NVIDIA, and look forward to bringing the breakthrough capabilities of the Blackwell GPU to our Cloud customers and teams across Google, including Google DeepMind, to accelerate future discoveries.”

Andy Jassy, president and CEO of Amazon: “Our deep collaboration with NVIDIA goes back more than 13 years, when we launched the world’s first GPU cloud instance on AWS. Today we offer the widest range of GPU solutions available anywhere in the cloud, supporting the world’s most technologically advanced accelerated workloads. It's why the new NVIDIA Blackwell GPU will run so well on AWS and the reason that NVIDIA chose AWS to co-develop Project Ceiba, combining NVIDIA’s next-generation Grace Blackwell Superchips with the AWS Nitro System's advanced virtualization and ultra-fast Elastic Fabric Adapter networking, for NVIDIA's own AI research and development. Through this joint effort between AWS and NVIDIA engineers, we're continuing to innovate together to make AWS the best place for anyone to run NVIDIA GPUs in the cloud.”

Michael Dell, founder and CEO of Dell Technologies: “Generative AI is critical to creating smarter, more reliable and efficient systems. Dell Technologies and NVIDIA are working together to shape the future of technology. With the launch of Blackwell, we will continue to deliver the next-generation of accelerated products and services to our customers, providing them with the tools they need to drive innovation across industries.”

Demis Hassabis, cofounder and CEO of Google DeepMind: “The transformative potential of AI is incredible, and it will help us solve some of the world’s most important scientific problems. Blackwell’s breakthrough technological capabilities will provide the critical compute needed to help the world’s brightest minds chart new scientific discoveries.”

Mark Zuckerberg, founder and CEO of Meta: “AI already powers everything from our large language models to our content recommendations, ads, and safety systems, and it's only going to get more important in the future. We're looking forward to using NVIDIA's Blackwell to help train our open-source Llama models and build the next generation of Meta AI and consumer products.”

Satya Nadella, executive chairman and CEO of Microsoft: “We are committed to offering our customers the most advanced infrastructure to power their AI workloads. By bringing the GB200 Grace Blackwell processor to our datacenters globally, we are building on our long-standing history of optimizing NVIDIA GPUs for our cloud, as we make the promise of AI real for organizations everywhere.”

Sam Altman, CEO of OpenAI: “Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We’re excited to continue working with NVIDIA to enhance AI compute.”

Larry Ellison, chairman and CTO of Oracle: “Oracle’s close collaboration with NVIDIA will enable qualitative and quantitative breakthroughs in AI, machine learning and data analytics. In order for customers to uncover more actionable insights, an even more powerful engine like Blackwell is needed, which is purpose-built for accelerated computing and generative AI.”

Elon Musk, CEO of Tesla and xAI: “There is currently nothing better than NVIDIA hardware for AI.”

Named in honor of David Harold Blackwell — a mathematician who specialized in game theory and statistics, and the first Black scholar inducted into the National Academy of Sciences — the new architecture succeeds the NVIDIA Hopper architecture, launched two years ago.

Blackwell Innovations to Fuel Accelerated Computing and Generative AI
Blackwell’s six revolutionary technologies, which together enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters, include:

World’s Most Powerful Chip — Packed with 208 billion transistors, Blackwell-architecture GPUs are manufactured using a custom-built 4NP TSMC process with two-reticle limit GPU dies connected by 10 TB/second chip-to-chip link into a single, unified GPU.
Second-Generation Transformer Engine — Fueled by new micro-tensor scaling support and NVIDIA’s advanced dynamic range management algorithms integrated into NVIDIA TensorRT-LLM and NeMo Megatron frameworks, Blackwell will support double the compute and model sizes with new 4-bit floating point AI inference capabilities (see the arithmetic sketch after this list).
Fifth-Generation NVLink — To accelerate performance for multitrillion-parameter and mixture-of-experts AI models, the latest iteration of NVIDIA NVLink delivers groundbreaking 1.8TB/s bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.
RAS Engine — Blackwell-powered GPUs include a dedicated engine for reliability, availability and serviceability. Additionally, the Blackwell architecture adds capabilities at the chip level to utilize AI-based preventative maintenance to run diagnostics and forecast reliability issues. This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time and to reduce operating costs.
Secure AI — Advanced confidential computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols, which are critical for privacy-sensitive industries like healthcare and financial services.
Decompression Engine — A dedicated decompression engine supports the latest formats, accelerating database queries to deliver the highest performance in data analytics and data science. In the coming years, data processing, on which companies spend tens of billions of dollars annually, will be increasingly GPU-accelerated.
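
To make the 4-bit inference point above concrete, the short sketch below works through the weight-memory arithmetic using only the parameter scale and precisions quoted in this release. It is illustrative only: the helper name and byte counting are simplifications introduced here, and they ignore activations, KV caches and other runtime overheads.

    # Rough weight-memory arithmetic for the model scale quoted above (illustrative only).
    def weight_memory_tb(num_params: float, bits_per_param: int) -> float:
        """Terabytes needed just to store model weights at a given precision."""
        return num_params * bits_per_param / 8 / 1e12

    params = 10e12  # the 10-trillion-parameter scale cited in this section

    fp8_tb = weight_memory_tb(params, 8)  # ~10 TB of weights at 8-bit precision
    fp4_tb = weight_memory_tb(params, 4)  # ~5 TB of weights at 4-bit precision

    print(f"FP8 weights: {fp8_tb:.1f} TB, FP4 weights: {fp4_tb:.1f} TB")
    # Halving the bits per parameter halves the weight footprint, which is why a
    # 4-bit floating point path roughly doubles the model size that fits in a
    # fixed memory budget.
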
A Massive Superchip
The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

For the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms, also announced today, which deliver advanced networking at speeds up to 800Gb/s.

The GB200 is a key component of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale system for the most compute-intensive workloads. It combines 36 Grace Blackwell Superchips, which include 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. Additionally, GB200 NVL72 includes NVIDIA BlueField-3 data processing units to enable cloud network acceleration, composable storage, zero-trust security and GPU compute elasticity in hyperscale AI clouds. The GB200 NVL72 provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, and reduces cost and energy consumption by up to 25x.

The platform acts as a single GPU with 1.4 exaflops of AI performance and 30TB of fast memory, and is a building block for the newest DGX SuperPOD.
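
As a quick cross-check of the rack-level figures above, the sketch below derives the GPU and CPU counts and a rough per-GPU share of the quoted aggregates. The constants come straight from this announcement; the per-GPU averages are plain divisions added here for illustration, not official per-device specifications.

    # Composition of a GB200 NVL72 rack, using only figures quoted in this release.
    SUPERCHIPS_PER_RACK = 36   # Grace Blackwell Superchips per NVL72
    GPUS_PER_SUPERCHIP = 2     # two B200 GPUs per GB200 Superchip
    CPUS_PER_SUPERCHIP = 1     # one Grace CPU per GB200 Superchip

    gpus = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP  # 72 Blackwell GPUs
    cpus = SUPERCHIPS_PER_RACK * CPUS_PER_SUPERCHIP  # 36 Grace CPUs

    RACK_AI_EXAFLOPS = 1.4     # quoted AI performance of the full rack
    RACK_FAST_MEMORY_TB = 30   # quoted fast memory of the full rack

    print(f"{gpus} GPUs and {cpus} CPUs per NVL72 rack")
    print(f"rough per-GPU average: {RACK_AI_EXAFLOPS * 1000 / gpus:.1f} petaflops, "
          f"{RACK_FAST_MEMORY_TB * 1000 / gpus:.0f} GB of fast memory")
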

NVIDIA offers the HGX B200, a server board that links eight B200 GPUs through NVLink to support x86-based generative AI platforms. HGX B200 supports networking speeds up to 400Gb/s through the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking platforms.

Global Network of Blackwell Partners
Blackwell-based products will be available from partners starting later this year.

AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to offer Blackwell-powered instances, as will NVIDIA Cloud Partner program companies Applied Digital, CoreWeave, Crusoe, IBM Cloud, Lambda and Nebius. Sovereign AI clouds will also provide Blackwell-based cloud services and infrastructure, including Indosat Ooredoo Hutchinson, Nexgen Cloud, Oracle EU Sovereign Cloud, the Oracle US, UK, and Australian Government Clouds, Scaleway, Singtel, Northern Data Group's Taiga Cloud, Yotta Data Services’ Shakti Cloud and YTL Power International.

GB200 will also be available on NVIDIA DGX Cloud, an AI platform co-engineered with leading cloud service providers that gives enterprise developers dedicated access to the infrastructure and software needed to build and deploy advanced generative AI models. AWS, Google Cloud and Oracle Cloud Infrastructure plan to host new NVIDIA Grace Blackwell-based instances later this year.

Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers based on Blackwell products, as are Aivres, ASRock Rack, ASUS, Eviden, Foxconn, GIGABYTE, Inventec, Pegatron, QCT, Wistron, Wiwynn and ZT Systems.

Additionally, a growing network of software makers, including Ansys, Cadence and Synopsys — global leaders in engineering simulation — will use Blackwell-based processors to accelerate their software for designing and simulating electrical, mechanical and manufacturing systems and parts. Their customers can use generative AI and accelerated computing to bring products to market faster, at lower cost and with higher energy efficiency.

NVIDIA Software Support
The Blackwell product portfolio is supported by NVIDIA AI Enterprise, the end-to-end operating system for production-grade AI. NVIDIA AI Enterprise includes NVIDIA NIM inference microservices — also announced today — as well as AI frameworks, libraries and tools that enterprises can deploy on NVIDIA-accelerated clouds, data centers and workstations.

To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.

Media Contacts
Kristin Uchiyama
Enterprise and Edge Computing
+1-408-486-2248
kuchiyama@nvidia.com
About NVIDIA
Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing infrastructure company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features, and availability of NVIDIA’s products and technologies, including NVIDIA Blackwell platform, Blackwell GPU architecture, Resilience Technologies, Custom Tensor Core technology, NVIDIA TensorRT-LLM, NeMo Megatron framework, NVLink, NVIDIA GB200 Grace Blackwell Superchip, B200 Tensor Core GPUs, NVIDIA Grace CPU, NVIDIA H100 Tensor Core GPU, NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms, NVIDIA GB200 NVL72, NVIDIA BlueField-3 data processing units, DGX SuperPOD, HGX B200, Quantum-2 InfiniBand and Spectrum-X Ethernet platforms, BlueField-3 DPUs, NVIDIA DGX Cloud, NVIDIA AI Enterprise, and NVIDIA NIM inference microservices; our goal of enabling transformative breakthroughs like deep learning and AI; Blackwell GPUs being the engine to power a new industrial revolution; our ability to realize the promise of AI for every industry as we work with the most dynamic companies in the world; our collaborations and partnerships with third parties and the benefits and impacts thereof; third parties who will offer or use our products, services and infrastructures and who will deliver servers based on our products; and the ability of the customers of global leaders in engineering simulation to use generative AI and accelerated computing to bring products to market faster, at lower cost and with higher energy efficiency are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2024 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, DGX, NVIDIA HGX, NVIDIA Hopper, NVIDIA NeMo, NVIDIA NIM, NVIDIA Spectrum, NVLink, and TensorRT are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

 

About Us

Beijing Hansen Fluid Technology Co., Ltd. is Danfoss's authorized data center distributor in China. Our products include the FD83 full-flow self-locking ball-valve coupling, the UQD series of liquid-cooling quick disconnects, EHW194 EPDM liquid-cooling hose, solenoid valves, pressure and temperature sensors, and manifold production and integration services. Standing at the intersection of China's digital economy, "East Data, West Computing", dual-carbon and new-infrastructure strategies, the company focuses on building a highly skilled, experienced team of liquid-cooling engineers to provide customers with excellent engineering design and strong customer service.

Our product range covers Danfoss liquid-cooling fluid connectors, EPDM hoses, solenoid valves, pressure and temperature sensors, and manifolds.
Future development plan: to become a data center liquid-cooling infrastructure solutions provider with in-house R&D, design and manufacturing capabilities for coolant distribution units (CDU), secondary fluid networks (SFN) and manifolds.


- For rack-server application scenarios such as manifold-to-node and CDU-to-primary-loop connections, we offer manual and fully automatic quick connectors in a range of diameters and locking styles.
- For blade racks with high-availability and high-density requirements, we can supply blind-mate connectors with floating, self-aligning misalignment compensation for precise mating in tight spaces.
- The all-new UQD/UQDB universal quick disconnects, built to the OCP standard, will also make their debut, with support for high-volume delivery worldwide.

 

 

Beijing Hansen Fluid Technology Co., Ltd. (Hansen Fluid)
Danfoss Authorized Distributor in China

Address: Room 2115, Tower 1C, Wangjing SOHO, No. 10 Wangjing Street, Chaoyang District, Beijing
Postal code: 100102
Tel: 010-8428 2935, 8428 3983, 13910962635
Mobile: 15801532751, 17310484595, 13910122694,
13011089770, 15313809303
Website: http://shanghaining.com.cn
E-mail: sales@cnmec.biz

Fax: 010-8428 8762

ICP filing: 京ICP備2023024665號
Public security filing: 京公網安備 11010502019740

Since 2007 Strong Distribution & Powerful Partnerships