“To enable the massive 256GB/s memory bandwidth that Ryzen AI Max delivers, the LPDDR5x is soldered,” writes Framework CEO Nirav Patel in a post about today’s announcements. “We spent months working with AMD to explore ways around this but ultimately determined that it wasn’t technically feasible to land modular memory at high throughput with the 256-bit memory bus. Because the memory is non-upgradeable, we’re being deliberate in making memory pricing more reasonable than you might find with other brands.”
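As a back-of-envelope check, the 256GB/s figure follows directly from the bus width and the memory transfer rate. This sketch assumes LPDDR5x running at 8000 MT/s, which the quote does not state explicitly:

```python
# Back-of-envelope check of the quoted 256GB/s bandwidth.
# Assumption: LPDDR5x at 8000 MT/s (not stated in the announcement).
bus_width_bits = 256
transfers_per_sec = 8000e6  # transfers per second, per pin

bandwidth_bytes = (bus_width_bits / 8) * transfers_per_sec
print(f"{bandwidth_bytes / 1e9:.0f} GB/s")  # → 256 GB/s
```

A narrower 128-bit bus at the same transfer rate would only reach 128GB/s, which is why the wide bus drives the soldered-memory requirement.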
😒🍎
Edit: to be clear, I was only trying to point out that “we’re being deliberate in making memory pricing more reasonable than you might find with other brands” is clearly targeting the Mac Mini, because Apple likes to price-gouge on RAM upgrades. (“Unamused face looking at Apple,” get it? Maybe I emoji’d wrong.) My comment is not meant to be an opinion about the soldered RAM.
Would 256GB/s be too slow for large LLMs?
It runs on the GPU.
Many LLM operations rely on fast memory, and GPUs have that, even though their memory is soldered and the VBIOS is practically a black box that is tightly controlled. Nothing on a GPU is modular or repairable without soldering skills (and tools).
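The bandwidth question has a rough answer: during token generation, a memory-bandwidth-limited LLM has to stream all of its weights from memory once per token, so bandwidth divided by model size gives a ceiling on decode speed. The numbers below are illustrative assumptions, not benchmarks:

```python
# Rough upper bound on LLM decode speed when memory-bandwidth-limited:
# each generated token requires reading every weight from memory once.
def max_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Ceiling on tokens/sec; real throughput will be lower."""
    return bandwidth_bytes_per_sec / model_bytes

# Assumption: a 70B-parameter model at 4-bit quantization is ~35 GB of weights.
model_size = 35e9
ceiling = max_tokens_per_sec(model_size, 256e9)
print(f"~{ceiling:.1f} tok/s ceiling")  # ~7.3 tok/s
```

By this estimate, 256GB/s is usable but not fast for large models; discrete GPUs with 1TB/s+ of memory bandwidth have a proportionally higher ceiling, though far less capacity at this price.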
To be fair, it starts with 32GB of RAM, which should be enough for most people. I know it's a bit ironic that Framework has a non-upgradeable part, but I can't see myself buying a 128GB machine and hoping to upgrade it any time in the future.
If you really need an upgradeable machine, you wouldn't be buying a mini-PC anyway; it seems like they're trying to capture a different market entirely.
According to the CEO in the LTT video about this thing, it was a design choice made by AMD, because otherwise they can't get the RAM speed they advertise.
Which is fine, but there was no obligation for Framework to use that chip either.
Yes that’s the problem.
That they want to sell cheap AI research machines to use as workstations?
That’s a poor attempt to knowingly misrepresent my statement.
No, it is a question
The answer is that they’re abandoning their principles to pursue some other market segment.
Although I guess it's a bit like Porsche and Lamborghini selling SUVs to support the development of their sports cars…
I don’t understand how that answers my question
To be fair, you didn’t ask a question. You made a statement and ended it with a question mark, so I don’t really understand exactly what it is that you were asking.