There are few areas of the technology world quite as esoteric to most mere mortals as the memory sector – where an elite coterie of industry magi talk of the improvement 1znm technology and DDR5 registered DIMMs can make to your RAS, via VDD/VDDQ/VPP duty cycle adjustment and improved burst rates.
To the casual enthusiast, or indeed busy CIO wanting a little more firepower from their data centre’s server workloads, things can turn Full Alphabet Soup alarmingly fast: thus it ever was, thus it is with Micron’s latest memory module innovation, unveiled this week at consumer electronics show CES in Las Vegas.
The Idaho-based chipmaker’s big reveal was the start of DDR5 Registered DIMM (memory modules of SDRAM chips) production – based on its 1znm process technology – for select customers. The news matters: those eyeing data centre upgrades in coming years will be hearing a lot more about DDR5.
(DDR5 – short for Double Data Rate 5 – is an emerging DRAM standard; designed to ensure that the compute-intensive applications fuelling processor core count growth will not be bandwidth-starved by existing DRAM technology.)
So when will customers start getting access to this latest form of memory – which promises 85 percent better data-transfer performance – in their data centre servers? Computer Business Review joined the semiconductor firm’s Jim Jardine (director, business development, cloud and networking) and marketing VP Ryan Baxter to learn more about the outlook for the technology.
So You’ve Started “Sampling” DDR5 SDRAMs on 1znm – Why Does This Matter?
DDR5 is the next step in terms of the I/O transition that the industry is about to undertake.
From DDR3 through DDR4, essentially what happened in that transition was that the data rate increased, the density increased — but that was kind of it.
There are a lot of architectural features that exist in DDR5 that will really take the performance level up, not only from a data rate perspective… but if you think about how data traverses across a data bus, the efficiency with which that is done.
We’ve been able to accomplish that through the inclusion of a couple of really interesting features that depart from the DDR4 spec.
The first of those is the burst length: we’re basically putting more data on a single burst than existed in DDR4. We’re also looking at a lot better concurrency within the DDR5 chip itself. [With] significantly larger numbers of banks, when doing bank refresh — and of course, when you’re refreshing DRAM you’re not writing to it or reading from it — we can actually target those refreshes on a per bank basis; so while you’re refreshing one bank, you can access data from another bank.
We also do some tricks… with adding things like on-die ECC and relaxing some of the timing parameters that enable that. It really is a technology that addresses a lot of the performance challenges posed by the server system [in the face of an] insatiable appetite for more and more data. And it does so just much more efficiently. And we get scaling runway to boot. In a nutshell, that’s what DDR5 is from a feature set perspective.
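The concurrency win from per-bank refresh can be seen with a toy counting model. This is an illustration only – the bank count, access pattern and the all-or-one refresh split are made-up assumptions, not real DDR5 timings:

```python
# Toy model (illustrative only, not real DDR5 timing): compare an
# all-bank refresh, which blocks every access, with per-bank refresh,
# where only the bank currently being refreshed is unavailable.

def serviceable(requests, refreshing_banks):
    """Count requests that can be served while the given banks refresh."""
    return sum(1 for bank in requests if bank not in refreshing_banks)

NUM_BANKS = 32  # DDR5 devices expose more banks than DDR4 (hypothetical count)
requests = [i % NUM_BANKS for i in range(100)]  # round-robin accesses

# DDR4-style all-bank refresh: every bank is busy, nothing is served.
all_bank = serviceable(requests, refreshing_banks=set(range(NUM_BANKS)))

# DDR5-style same-bank refresh: only bank 0 refreshes; the rest serve data.
per_bank = serviceable(requests, refreshing_banks={0})

print(all_bank, per_bank)  # 0 requests served vs 96 of 100
```

The point is simply that refresh stops being a whole-device stall: only traffic aimed at the refreshing bank waits, while the other banks keep moving data.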
Memory and CPU Innovation are CAPEX-Heavy and Pretty Slow – How Long Have You Been Working on This and How is Production Going?
DDR5 has been discussed within a key standards body in our industry called JEDEC for a number of years now. This isn’t a spec that just gets created and within a month gets distributed. This is many, many, many years in the making.
Lots of industry ecosystem players, from the likes of the DRAM producers to their customers to CPU enablers to IP folks, all contribute to generating the specification. From a DRAM production perspective [the challenge] is ramping on the right particular node. That’s where a really good chunk of the R&D budget goes: just getting the process ready to go.
The good thing for us is that we’re actually leveraging our 1z node: today we’re sampling and ramping a DDR4 16 gigabit device as well as a mobile device on that exact same node. So by the time we’re really ramping DDR5, all of the “gotchas” that weren’t initially anticipated for the node have all been essentially worked out and we’ve de-risked some of the transition from our perspective.
What is the Outlook for Ramping Up Micron DDR5 SDRAM Production?
It’s going to still remain majority DDR4 through 2020. But by early to mid-2021, we’ll start seeing the first DDR5 production at 16Gb density in both servers as well as high-end client desktops and notebooks, and it ramps from there.
As an industry we’ve been doing a lot of tricks like adding channels and increasing the data rates and things like that. But that kind of runs out of gas, with the next generation CPUs…
How Much Finessing Are You Needing to Do with CPU Partners and Others in the Ecosystem?
I can certainly tell you that we are highly engaged with all of the server CPU enablers — there aren’t many, so you can guess who they are!
That is around a lot of technology enablement: is the memory going to be ready when the CPUs need it to be ready? What do the memory subsystem to CPU interactions need to look like? And what do we need to do right now to do performance and thermal and power modelling and then put this into the limelight?
We’ve been working quite extensively with the IP folks, the signalling folks that are generating IP to be able to connect with the technology.
Most of those folks are working with end-customers that are developing purpose-built ASICs that can benefit from, leverage and hopefully monetise DDR5 performance.
We work with all of these extensively through JEDEC (the standards body for the microelectronics industry). Then there’s the customer base – from enterprise server, client PC, data centre and hyperscale to pretty much everybody in between – working extensively to help understand not only the benefits but also some of the challenges.
What are the Biggest Challenges?
If you increase your performance it doesn’t necessarily come for absolutely free.
There’s some power challenges you’ve got to deal with; even though DDR5 is much better in terms of energy per bit than the previous technology, it still drives an absolute power number that is a bit higher than DDR4.
Given the fact that there are a lot of folks packing their servers full of memory, it’s becoming a bigger and bigger portion of the budget you have to be concerned with. And then thermal and all the way down to system design and signal integrity… we’re really plugged in to just about every single ecosystem player that you can think of that stands to benefit from high performance.
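The “better energy per bit, higher absolute power” point above is straightforward arithmetic: if bandwidth roughly doubles while energy per bit falls by less than half, total power still rises. A quick sketch – every figure here is hypothetical, chosen only to illustrate the relationship, not a real DDR4 or DDR5 number:

```python
# Hypothetical numbers, purely to illustrate the energy-per-bit vs
# absolute-power relationship; these are not real DDR4/DDR5 figures.

def module_power_watts(bandwidth_gbps, picojoules_per_bit):
    """Power (W) = bits moved per second * energy per bit (J)."""
    bits_per_second = bandwidth_gbps * 1e9
    return bits_per_second * picojoules_per_bit * 1e-12

ddr4 = module_power_watts(bandwidth_gbps=200, picojoules_per_bit=20)  # ~4 W
ddr5 = module_power_watts(bandwidth_gbps=400, picojoules_per_bit=15)  # ~6 W

# DDR5 moves each bit more cheaply (15 pJ vs 20 pJ) yet draws more
# absolute power, because bandwidth doubled while energy/bit fell by 25%.
print(ddr4, ddr5)
```

That gap between efficiency per bit and total draw is exactly why thermal and power budgeting dominate the system-design conversation.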
Speaking to the C-Suite, What Will This Technology Be Doing for My Business in a Few Years’ Time?
If you’re talking to, perhaps, a CFO, it’s TCO: you’re basically doing a lot more with the capital outlay. Your servers are working more efficiently, essentially squeezing more ROI out of the investment that you made in the server, which is inclusive of both CPU as well as memory.
If you’re talking to the CIO, DDR5 is the answer to driving future solutions beyond the envelope of DDR4 performance. And by the time you’re getting to 2023, 2024, there isn’t another option… If you’re talking to the far-out system architects that are looking at the kind of 2026, 2027 type systems, this is the technology that enables the DRAM industry to really scale.
Ultimately, if you want to sort of pay fewer dollars per year per gigabyte, you need to be on DDR5 long term.