Was looking through my office window at the data closet and (due to angle, objects, field of view) could only see one server's light cluster out of the six full racks. And thought it would be nice to scale everything down to 2U. Then day-dreamed about a future where a warehouse data center was reduced to a single hypercube sitting alone in the vast darkness.

  • partial_accumen@lemmy.world · 1 day ago

    The future is 12 years ago: HP Moonshot 1500

    “The HP Moonshot 1500 System chassis is a proprietary 4.3U chassis that is pretty heavy: 180 lbs or 81.6 Kg. The chassis hosts 45 hot-pluggable Atom S1260 based server nodes”

    source

      • InverseParallax@lemmy.world · 22 hours ago

        It made some sense before virtualization, as a way to get job separation.

        Then docker/k8s came along and nuked everything from orbit.
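
        Roughly what that job separation looks like in k8s terms — a sketch with made-up names, not anyone's actual config — where the scheduler carves per-job CPU/memory slices out of shared hardware, the role Moonshot solved with discrete nodes:

        ```yaml
        # Hypothetical pod spec: requests/limits fence off a fixed
        # CPU/memory slice for this job on a shared machine.
        apiVersion: v1
        kind: Pod
        metadata:
          name: batch-job          # made-up name
        spec:
          containers:
          - name: worker
            image: alpine:3.19
            command: ["sh", "-c", "do-work"]   # placeholder command
            resources:
              requests: { cpu: "2", memory: "4Gi" }
              limits:   { cpu: "2", memory: "4Gi" }
        ```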

        • MNByChoice@midwest.social · 16 hours ago

          VMs were a thing in 2013.

          Interestingly, Docker was released in March 2013. So it might have prevented a better company from trying the same thing.

          • InverseParallax@lemmy.world · 16 hours ago

            Yes, but they weren’t as fast; VT-x and the like were still fairly new, and the VM stacks were kind of shit.

            Yeah, Docker is a shame. I wrote a thin stack on LXC, but BSD jails are much nicer, if only they’d improve their deployment system.
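
            For flavor, a minimal /etc/jail.conf along those lines — jail name, path, and address are all made up for illustration:

            ```
            # Hypothetical jail definition: one self-contained
            # system with its own filesystem root and IP.
            web {
                path = "/usr/jail/web";
                host.hostname = "web.example.org";
                ip4.addr = 192.168.1.10;
                exec.start = "/bin/sh /etc/rc";
                exec.stop = "/bin/sh /etc/rc.shutdown";
                mount.devfs;
            }
            ```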

        • partial_accumen@lemmy.world · 21 hours ago

          The other use case was for hosting companies. They could sell “5 servers” to one customer and “10 servers” to another and have full CPU/memory isolation. I think that use case still exists and we see it used all over the place in public cloud hyperscalers.

          Meltdown and Spectre vulnerabilities are a good argument for discrete servers like this. We’ll see if a new generation of CPUs will make this more worth it.

          • InverseParallax@lemmy.world · 20 hours ago

            128-192 cores on a single epyc makes almost nothing worth it, the scaling is incredible.

            Also, I happen to know they’re working on even more hardware isolation mechanisms, similar to SR-IOV but more strictly enforced.

            • partial_accumen@lemmy.world · 17 hours ago

              “128-192 cores on a single epyc makes almost nothing worth it, the scaling is incredible.”

              Sure, which is why we haven’t seen a huge adoption. However, in some cases it isn’t so much an issue of total compute power, it’s autonomy. If there’s a rogue process running on one of those 192 cores and it can end up accessing the memory in your space, it’s a problem. There are some regulatory rules I’ve run into that actually forbid running company processes on shared CPU infrastructure.