
 
We engineer tomorrow to build a better future.
Solutions to your liquid cooling challenges.
 
 
DANFOSS
Nvidia shows off Blackwell server installations in progress — AI and data center roadmap has Blackwell Ultra coming next year with Vera CPUs and Rubin GPUs in 2026
Nvidia also emphasized that it sees Blackwell and Rubin as platforms and not just GPUs

By Jarred Walton published 2 days ago


Nvidia's product roadmap revealed: what it means for Blackwell and Rubin

Prior to the start of the Hot Chips 2024 tradeshow, Nvidia showed off more elements of its Blackwell platform, including servers being installed and configured. It is a less-than-subtle way of saying that Blackwell is still coming, never mind the delays. Nvidia also talked about its existing Hopper H200 solutions, showed FP4 LLM optimizations using its new Quasar Quantization System, discussed warm-water liquid cooling for data centers, and talked about using AI to help build even better chips for AI. It reiterated that Blackwell is more than just a GPU; it is an entire platform and ecosystem.

Much of what Nvidia presented at Hot Chips 2024 is already known, such as the data center and AI roadmap showing Blackwell Ultra coming next year, with Vera CPUs and Rubin GPUs in 2026, followed by Vera Ultra in 2027. Nvidia first confirmed those details at Computex back in June. But AI remains a big topic, and Nvidia is more than happy to keep beating the AI drum.

While Blackwell was reportedly delayed three months, Nvidia neither confirmed nor denied that information, instead opting to show images of Blackwell systems being installed, along with photos and renders revealing more of the internal hardware in the Blackwell GB200 racks and NVLink switches. There is not much to say, other than that the hardware looks like it can draw a lot of power and carries some pretty robust cooling. It also looks very expensive.

Nvidia also showed some performance results from its existing H200, running with and without NVSwitch. It says performance on inference workloads can be up to 1.5X higher than with point-to-point designs; that figure was measured using a Llama 3.1 70B parameter model. Blackwell doubles the NVLink bandwidth to offer further improvements, with an NVLink Switch Tray providing an aggregate 14.4 TB/s of total bandwidth.

Because data center power requirements keep increasing, Nvidia is also working with partners to boost performance and efficiency. One of the more promising results is warm-water cooling, where the heated water can potentially be recirculated for building heating to further reduce costs. Nvidia claims the technique can cut data center power use by up to 28%, with a large portion of that coming from the removal of below-ambient cooling hardware.
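A rough, hedged back-of-envelope sketch of where a saving of that size could come from. The PUE values below are illustrative assumptions, not Nvidia's numbers (the article gives only the 28% headline figure): a conventional chilled-water facility is compared against warm-water liquid cooling that needs no below-ambient chiller plant.

```python
# Back-of-envelope sketch with ASSUMED numbers, not Nvidia's data.
# PUE (power usage effectiveness) = total facility power / IT power.
# Removing the below-ambient chiller plant is modeled here simply as a
# drop in PUE.

IT_LOAD_KW = 1000.0       # assumed IT load of the facility

PUE_CHILLED = 1.50        # assumed PUE with a traditional chilled-water plant
PUE_WARM_WATER = 1.08     # assumed PUE with warm-water liquid cooling

facility_chilled_kw = IT_LOAD_KW * PUE_CHILLED      # total draw, chilled
facility_warm_kw = IT_LOAD_KW * PUE_WARM_WATER      # total draw, warm water

saving = 1.0 - facility_warm_kw / facility_chilled_kw
print(f"{facility_chilled_kw:.0f} kW -> {facility_warm_kw:.0f} kW "
      f"({saving:.0%} reduction)")
```

With these assumed PUEs the arithmetic lands on the claimed 28%; real savings depend on climate, coolant temperature, and how much waste heat is actually reused.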

Above is the full slide deck from Nvidia's presentation. There are a few other interesting items of note.

To prepare for Blackwell, which adds native FP4 support that can further boost performance, Nvidia has worked to ensure its latest software benefits from the new hardware features without sacrificing accuracy. After using its Quasar Quantization System to tune workload results, Nvidia is able to deliver essentially the same quality as FP16 while using one quarter as much bandwidth. The two generated bunny images may differ in minor ways, but that is fairly typical of text-to-image tools like Stable Diffusion.

Nvidia also talked about using AI tools to design better chips: AI building AI, turtles all the way down. Nvidia created an LLM for internal use that helps speed up design, debugging, analysis, and optimization. It works with Verilog, the language used to describe circuits, and was a key factor in creating the 208-billion-transistor Blackwell B200 GPU. This will in turn be used to build even better models, enabling Nvidia to work on the next-generation Rubin GPUs and beyond. [Feel free to insert your own Skynet joke at this point.]
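Quasar's internals are not public, so purely as a hedged illustration, here is a minimal NumPy sketch of the general idea behind low-bit quantization: mapping floating-point weights onto 4-bit integer levels cuts memory traffic to one quarter of FP16 while keeping reconstruction error small. The function names and the symmetric per-tensor scheme are illustrative choices of this sketch, not Nvidia's method (real FP4 is a float format, and production schemes calibrate scales per block or channel).

```python
import numpy as np

# Illustrative 4-bit symmetric quantization; NOT Nvidia's Quasar system.
def quantize_4bit(weights: np.ndarray):
    """Map float weights onto the 15 signed integer levels -7..7."""
    scale = float(np.max(np.abs(weights))) / 7.0
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy weight tensor

q, scale = quantize_4bit(w)   # int8 storage here; real FP4 packs 2 per byte
w_hat = dequantize(q, scale)

# 4 bits per weight instead of 16: one quarter the bandwidth.
rel_err = float(np.linalg.norm(w - w_hat) / np.linalg.norm(w))
print(f"relative reconstruction error: {rel_err:.3f}")
```

Tuning scales against real activations, rather than the naive max-abs rule above, is where a calibration system like Quasar earns its keep.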

Wrapping things up, we now have a better-quality image of Nvidia's AI roadmap for the next several years, which again defines the "Rubin platform", with its switches and interlinks, as an entire package. Nvidia will present more details on the Blackwell architecture, generative AI for computer-aided engineering, and liquid cooling at the Hot Chips conference next week.

 

Reference link

https://www.tomshardware.com/tech-industry/artificial-intelligence/nvidia-shows-off-blackwell-server-installations-in-progress-ai-and-data-center-roadmap-has-blackwell-ultra-coming-next-year-with-vera-cpus-and-rubin-gpus-in-2026

END

 


 

About Us

Beijing Hansen Fluid Technology Co., Ltd. is an authorized distributor of Danfoss data center products in China. Products include the FD83 full-flow self-locking ball-valve coupling, UQD series liquid-cooling quick connectors, EHW194 EPDM liquid-cooling hose, solenoid valves, pressure and temperature sensors, and Manifold production and integration services. At the intersection of China's digital economy, "Eastern Data, Western Computing", dual-carbon, and new-infrastructure strategies, the company focuses on building a highly qualified, experienced team of liquid-cooling engineers to provide customers with excellent engineering design and strong customer service.

Product range: Danfoss liquid-cooling fluid connectors, EPDM hoses, solenoid valves, pressure and temperature sensors, and Manifolds.
Future development plan: to become a data center liquid-cooling infrastructure solutions manufacturer, with professional R&D, design, and manufacturing capabilities for coolant distribution units (CDU), secondary-side piping (SFN), and Manifolds.


- For applications such as Manifold/node and CDU/primary-loop connections in rack servers, manual and fully automatic quick connectors are available in different bore sizes and locking styles.
- For blade racks with high-availability and high-density requirements, blind-mate connectors with floating, self-aligning misalignment correction are available, enabling precise mating in tight spaces.
- The new UQD/UQDB universal quick connectors, built to the OCP standard, will also make their debut, supporting high-volume delivery worldwide.

 

 

 

Beijing Hansen Fluid Technology Co., Ltd. (Hansen Fluid)
Danfoss Authorized Distributor in China

Address: Room 2115, Tower 1C, Wangjing SOHO, 10 Wangjing Street, Chaoyang District, Beijing
Postal code: 100102
Tel: 010-8428 2935, 8428 3983, 13910962635
Mobile: 15801532751, 17310484595, 13910122694, 13011089770, 15313809303
Web: http://shanghaining.com.cn
E-mail: sales@cnmec.biz

Fax: 010-8428 8762

京ICP備2023024665號(hào)
京公網(wǎng)安備 11010502019740

Since 2007 Strong Distribution & Powerful Partnerships