DRAM kind of plateaued in 2011, when it hit $4/GB; since then it's gotten faster and bigger, but not appreciably cheaper per bit.
This could change if there were a way to do 3D DRAM, like 3D NAND flash, but that doesn't appear to be on the table at present. Note that this isn't the "stacking" they talk about with IGZO-DRAM, where they build layers on top of each other - it's not 3D stacking itself that made flash cheap.
Flash got insanely cheap because of the single-pass 3D architecture - it's pretty cheap to put a large number (~400 nowadays) of featureless layers onto a chip, then you drill precise holes through all the layers and coat the inside of the hole with the right stuff, turning each hole into a stack of ~400 flash cells.
The cost of a wafer (and thus a chip) is proportional to the time it spends in the ultra-expensive part of the fab. 3D NAND puts maybe 100x as many cells onto a wafer as the old planar flash did (you can't pack those holes as closely as the old cells), but what's important is that the wafer only spends maybe 2x as long in the fab (I'm totally guessing here). If laying down a few hundred layers took 100x as long, the price advantage would vanish.
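A back-of-envelope version of that argument, using only the guessed ratios above (a sketch, not real fab numbers):

    # Back-of-envelope cost-per-bit comparison, normalizing planar flash to 1.
    # Every number here is the guess from the comment above, not real fab data.
    planar_cost = 1.0           # wafer cost ~ time in the expensive tools
    planar_bits = 1.0

    single_pass_cost = 2.0      # single-pass 3D: maybe 2x the fab time...
    single_pass_bits = 100.0    # ...for ~100x the cells per wafer

    layered_cost = 100.0        # hypothetical layer-by-layer 3D: 100x the time
    layered_bits = 100.0        # for the same ~100x cells

    for name, cost, bits in [("planar", planar_cost, planar_bits),
                             ("single-pass 3D", single_pass_cost, single_pass_bits),
                             ("layer-by-layer 3D", layered_cost, layered_bits)]:
        print(f"{name:18s} cost/bit = {cost / bits:.2f}")
    # planar             cost/bit = 1.00
    # single-pass 3D     cost/bit = 0.02   <- ~50x cheaper per bit
    # layer-by-layer 3D  cost/bit = 1.00   <- advantage vanishes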
A five-year lifetime isn't getting anywhere near my setup. Also notably absent was anything about read or write times. It sounded promising all the way up to that last paragraph.
Then again, there is wear, so either you accept a level of performance degradation (dynamic capacity cache?) or you go to DIMMs anyway for serviceability.
The graph on this page is awful, but those endurance lines on the right side are going up toward a century at optimal temperature.
I think we'll have to wait and see.
Like any DRAM cell, it has a capacitor, which in this case is the gate capacitor of a MOS transistor.
The separate capacitor of current DRAM cells is replaced by a second transistor, whose gate capacitor stores the charge that distinguishes stored "1"s from stored "0"s. Such 2-transistor cells, where the second transistor used for sensing the cell state also contains the capacitor, are not new; they were used in many early DRAM devices, decades ago.
The main advantage of the new cell is that the silicon of the DRAM cell's transistors is replaced with another semiconductor material that has a wider bandgap than silicon.
This greatly reduces the leakage currents, allowing a smaller capacitor. The gate capacitor of the storage transistor is sufficient, so no separate capacitor is needed.
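Roughly, retention time is the stored charge over the leakage current, t ≈ C·ΔV / I_leak, so cutting leakage lets you cut the capacitance by the same factor. A toy calculation with assumed, order-of-magnitude numbers (none of them from the article):

    # Toy retention estimate: t ~ C * dV / I_leak.
    # All values are illustrative assumptions, not from the article.
    def retention_s(cap_farads, margin_volts, leak_amps):
        """Seconds until leakage eats the sense margin of the stored charge."""
        return cap_farads * margin_volts / leak_amps

    C = 10e-15            # ~10 fF, a typical order for a DRAM storage cap
    dV = 0.5              # volts of margin the sense amplifier needs

    print(retention_s(C, dV, 1e-15))   # silicon-ish ~1 fA leak    -> ~5 s
    print(retention_s(C, dV, 1e-20))   # assumed wide-bandgap leak -> ~5e5 s
    # For the same retention target, 1e5x less leakage allows a 1e5x smaller
    # capacitor -- hence the gate capacitance alone can suffice.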
One thing that is not mentioned is how resistant the new DRAM cells are to radiation. The smaller stored charge makes them more susceptible, but the wider bandgap of the transistors makes them less susceptible, so it is uncertain what the actual behavior is.
(you may need to adjust the volume; the audio is 5 LU below reference)
For example: https://www.science.org/doi/10.1126/sciadv.adu4323
We could have higher-capacity, faster, and, most importantly, far more energy-efficient DRAM.
Thinking in the same vein - transistors are really small now. Aren't we at the stage where we can drop DRAM and replace transistor+capacitor with 2 transistors as a flip-flop? It would save all that refresh time and controller circuitry.
The capacitor in DRAM is usually realised as an enlarged gate of a transistor AFAIK.
You can make a latch with 2 transistors and 2 resistors, but resistors are more expensive than transistors in an integrated circuit, so the minimal latch has 4 transistors.
However, with simple latches you cannot build a memory, because there must be a way to address them. Therefore you must add 2 pass transistors for cell addressing, so you end up with a 6-transistor SRAM cell. In the past there have been some SRAMs with so-called 4-transistor cells, but that is a misnomer: those were 6-transistor cells in which 2 transistors were passive, with their gates connected permanently to fixed potentials, making them a substitute for resistors.
A 6-T SRAM cell has a few not-so-good characteristics, so for maximum performance, as in first-level cache memories, more complex 8-transistor SRAM cells are used.
Even the simplest SRAM cell is much more complex than a DRAM cell, flash memory cell or ROM cell.
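A logic-level sketch of that addressing argument - the 4-transistor latch is modeled as a plain stored bit and the 2 pass transistors as a word-line-gated switch (purely illustrative, no transistor physics):

    # Logic-level model of a 6T SRAM cell: a latch made of two cross-coupled
    # inverters (4 transistors) plus two pass transistors gated by the word
    # line. The pass transistors are what make the latch addressable.
    class Sram6T:
        def __init__(self):
            self.q = 0                  # latch state

        def access(self, word_line, bit=None):
            """Bit lines only connect to the latch when the word line is high."""
            if not word_line:
                return None             # cell isolated; bit lines float
            if bit is not None:         # write: bit lines overpower the latch
                self.q = bit
            return self.q               # read: latch drives the bit lines

    row = [Sram6T() for _ in range(4)]
    row[2].access(word_line=True, bit=1)               # write 1 to cell 2
    print([c.access(word_line=True) for c in row])     # [0, 0, 1, 0]
    print(row[2].access(word_line=False))              # None: unselected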
The ideal memory cell has the area of a square whose side is double the resolution of the available photolithography, because any cell must sit at the intersection of a row and a column, each of which must contain at least one conductor trace.
By 3-dimensional stacking of the memory cell components, some variants of DRAM cells, including the cell from TFA, may reach the ideal memory cell size.
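That floor is the classic 4F² cell: with feature size F, each crossing trace needs a pitch of 2F (line plus space), so the minimum footprint is (2F)² = 4F². A quick sketch of the arithmetic, with an example node value that is my assumption, not from TFA:

    # Ideal ("4F^2") memory cell area: the cell sits at a row/column crossing,
    # and each crossing trace needs a pitch of 2F (line + space), so the
    # minimum footprint is (2F) x (2F) = 4F^2.
    def ideal_cell_area_nm2(feature_nm):
        side = 2 * feature_nm
        return side * side

    F = 15  # example feature size in nm -- an assumption, not from the article
    print(f"F = {F} nm -> ideal cell = {ideal_cell_area_nm2(F)} nm^2 (= 4F^2)")
    # Planar DRAM cells are typically ~6F^2 because the access transistor and
    # capacitor don't fully overlap; vertical (3D) cells can approach 4F^2.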
Replacing your DRAM sticks every five years may be okay, but what about for boards with soldered on memory?
It almost makes it seem like they want their memory to last five years, as though it's a feature.
DRAM appears to be closer to 300 hours at reasonable temperatures [1], under the worst-case workload.
It would be interesting if Google released their failure rates, like they did for hard disks vs. SSDs.
[1] modeled failures, page 75: https://repository.tudelft.nl/record/uuid:e36c2de7-a8d3-4dfa...
The abstract does indeed say "It was found that the system reliability decreases to 0.84 after 1·10^8s at a stressing temperature of 300K", but I can't find anything close to that in the sections about Bias Temperature Instability or Hot Carrier Injection.
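Taking that 0.84-after-10^8 s figure at face value and assuming a constant failure rate (an exponential model - my simplification, not the thesis's), the implied numbers work out as:

    import math

    # Convert "reliability 0.84 after 1e8 s" to a rate, assuming a constant
    # failure rate -- a simplifying assumption, not the thesis's model.
    R, t = 0.84, 1e8
    lam = -math.log(R) / t                           # ~1.7e-9 failures/s
    print(f"MTBF ~ {1 / lam / 3600:.1e} hours")      # ~1.6e5 hours
    print(f"1e8 s = {t / 3600:.0f} hours (~{t / 3.156e7:.1f} years)")
    # 1e8 s = 27778 hours (~3.2 years) -- nowhere near a 300-hour lifetime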
The only thing that looks close to me is the rather acute failure in the Radiation Trapping section - but that also states that the failure mode depends more on the total dose than on time, and the total dose at which it fails is somewhere between 126 krad and 1.26 Mrad. For reference, a dose of 1 krad is universally fatal to a human.
In other words: don't put unshielded DRAM in a nuclear reactor?
Yikes! Things that you don't necessarily want to know. Another one is that GPUs are released crawling with bugs - only the ones without cheap driver workarounds are fixed.
The implication is that it can theoretically hold a value for 10^14s (~3 million years).
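Sanity-checking the unit conversion:

    # 1e14 seconds in millions of years
    print(1e14 / (365.25 * 24 * 3600) / 1e6)   # ~3.17 -> about 3 million years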