NVIDIA CEO Jen-Hsun Huang on Tuesday let a packed GPU Technology Conference crowd peer into the future. While the company is barely beginning to ship cards built on its new Maxwell architecture, it also gave us a glimpse at Pascal, named after the 17th-century French mathematician Blaise Pascal. The board is tiny, about one-third the size of standard boards used today, but it promises to be more powerful than anything we’ve seen before it. Unfortunately, we won’t get to experience that promise until 2016, when Pascal is expected to hit the market.
NVIDIA’s Pascal will be smaller, faster and much more efficient than anything out there, and it includes three key features: stacked DRAM, unified memory and NVLink. Below is NVIDIA’s more thorough explanation of what each of the three features does, and why they’re important for the future of computing.
- 3D Memory: Stacks DRAM chips into dense modules with wide interfaces, and brings them inside the same package as the GPU. This lets GPUs get data from memory more quickly – boosting throughput and efficiency – allowing us to build more compact GPUs that put more power into smaller devices. The result: several times greater bandwidth, more than twice the memory capacity and quadrupled energy efficiency.
- Unified Memory: This will make it quicker and easier to build applications that take advantage of what both GPUs and CPUs can do, by allowing the CPU to access the GPU’s memory and the GPU to access the CPU’s memory, so developers don’t have to allocate resources between the two.
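As it happens, NVIDIA’s CUDA 6 toolkit, announced around the same time, already exposes a software version of this idea through `cudaMallocManaged`, with Pascal intended to back it with hardware. Here is a minimal sketch of what the programming model looks like, assuming the CUDA 6 managed-memory API; the kernel, array size and launch configuration are made-up examples, and it needs an NVIDIA GPU plus the CUDA toolkit to compile and run:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: the GPU doubles each element in place.
__global__ void doubleElements(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data;

    // One allocation visible to both CPU and GPU -- no separate
    // host/device buffers and no explicit cudaMemcpy calls.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;        // CPU writes

    doubleElements<<<(n + 255) / 256, 256>>>(data, n); // GPU reads and writes
    cudaDeviceSynchronize();                           // wait for the GPU

    printf("data[0] = %f\n", data[0]);                 // CPU reads the result
    cudaFree(data);
    return 0;
}
```

The point is the single pointer: without unified memory, the same program would need a host buffer, a device buffer, and two copies between them.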
- NVLink: Today’s computers are constrained by the speed at which data can move between the CPU and GPU. NVLink puts a fatter pipe between the two, allowing data to flow at more than 80GB per second, compared to the roughly 16GB per second a PCI Express connection offers now.
The year 2016 is still a long way off, so we won’t see the benefits of NVIDIA’s fancy engineering for a while. But the company already has it all mapped out, and it is working to bring this incredible new architecture to the masses.