(This blog post represents my personal opinion and should not be interpreted as an official statement from my employer, NVIDIA.)
On March 21–24, 2022, NVIDIA will host its GPU Technology Conference (GTC). I will be involved in one training event and two presentations.
The training event, Introduction to Learning Deep Learning, takes the form of an NVIDIA Deep Learning Institute Training Lab. The lab covers material from the first five chapters of LDL (Learning Deep Learning), including some programming examples; a small example in that spirit follows the session details below.
Session: Introduction to Learning Deep Learning
Date: Monday, March 21, 2022
Time: 7.00 AM – 9.00 AM PDT
Link: https://reg.rainfocus.com/flow/nvidia/gtcspring2022/aplive/page/ap/session/1644442404842001Phn2
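To give a flavor of the kind of programming examples the early chapters of LDL work through, here is a minimal perceptron sketch of my own in NumPy. It is an illustration only, not taken from the lab's actual materials: it trains a single perceptron on the logical AND function using the classic perceptron learning rule.

```python
import numpy as np

# Minimal perceptron sketch (my own illustration, not the lab's code).
# Learns the logical AND function with the classic perceptron learning
# rule, one of the topics covered in the early chapters of LDL.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([-1, -1, -1, 1])                   # AND targets, encoded as -1/+1

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else -1
        if pred != target:            # update weights only on mistakes
            w += lr * target * xi
            b += lr * target

print("weights:", w, "bias:", b)
for xi in X:
    print(xi, "->", 1 if np.dot(w, xi) + b > 0 else -1)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating line; the lab then builds from this starting point toward multilevel networks and backpropagation.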
Next, Jared Casper and I will jointly present Bridging the Gap Between Basic Neural Language Models, Transformers, and Megatron. We will connect the dots between basic neural language models and the Transformer architecture, and describe how NVIDIA’s Megatron implementation enables Transformer training to scale across a huge number of GPUs. A conceptual sketch of the core parallelization idea follows the session details below.
Session: Bridging the Gap Between Basic Neural Language Models, Transformers, and Megatron
Date: Monday, March 21, 2022
Time: 12.00 PM – 12.50 PM PDT
Link: https://reg.rainfocus.com/flow/nvidia/gtcspring2022/aplive/page/ap/session/1638834529505001nmKV
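To give a rough idea of what scaling across GPUs means here, below is a simplified, CPU-only NumPy sketch of my own of a column-parallel linear layer, the building block Megatron-style tensor parallelism uses to split Transformer layers across devices. It is a conceptual illustration, not Megatron-LM's actual implementation.

```python
import numpy as np

# Conceptual sketch of Megatron-style tensor (model) parallelism,
# simulated on CPU with NumPy. My own simplified illustration,
# not Megatron-LM's actual code.

rng = np.random.default_rng(0)
batch, d_in, d_out, n_gpus = 4, 8, 8, 2

x = rng.standard_normal((batch, d_in))
W = rng.standard_normal((d_in, d_out))

# Column-parallel linear layer: each "GPU" holds a vertical slice of W
# and computes its slice of the output independently.
W_shards = np.split(W, n_gpus, axis=1)
partial_outputs = [x @ W_k for W_k in W_shards]

# Gathering the slices along the feature dimension reassembles the
# full output of the layer.
y_parallel = np.concatenate(partial_outputs, axis=1)

# Verify that the sharded computation matches the single-device result.
y_reference = x @ W
assert np.allclose(y_parallel, y_reference)
print("column-parallel result matches single-device matmul")
```

In a real multi-GPU setting, each shard lives on a different GPU and the concatenation becomes a communication step between devices; the sketch just shows why the math works out when a weight matrix is split column-wise.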
Finally, Ramesh Narayanaswamy and I will jointly present Large-scale IC Simulation Workload on ARM Server CPU. We will describe how Synopsys’ VCS Logic Simulator has now been ported to run on machines implementing the ARM instruction set architecture.
Session: Large-scale IC Simulation Workload on ARM Server CPU
Date: Thursday, March 24, 2022
Time: 9.00 AM – 9.25 AM PDT
Link: https://reg.rainfocus.com/flow/nvidia/gtcspring2022/aplive/page/ap/session/1637794526303001Zj6t