• jqubed@lemmy.world · 7 months ago · +15/-3

    I don’t really know if ARM adds benefits I’d notice as an end user, but it’ll be interesting to see if this really goes through and upends the architecture that’s been dominant for 40+ years.

    • SMillerNL@lemmy.world · 7 months ago · +44/-2

      As an ARM Mac user, I wouldn’t trade all this new battery life for an x86 processor

      • Aniki 🌱🌿@lemm.ee · 7 months ago · edited · +15/-1

        Second this. Not to mention INSTANT resume from hibernation! It’s fucking crazy. I can use this thing ALL DAY doing WebGL CAD work and Orca Slicer and barely scratch 50%.

        • catloaf@lemm.ee · 7 months ago · +5/-1

          With a modern system, I honestly don’t think there’s a noticeable difference between suspend to RAM and suspend to disk. They’ve gotten the boot times down so much that it’s lightning-fast. My work laptop’s default is suspend to disk, and I don’t notice a difference except when it prompts for the BitLocker password.
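
          For anyone who wants to compare the modes on a Linux box, a quick sketch (these are the standard sysfs/systemd interfaces, but which states actually show up depends on your firmware):

```shell
# List the suspend variants the firmware advertises:
# "s2idle" is modern S0ix software standby, "deep" is classic S3 suspend-to-RAM.
cat /sys/power/mem_sleep

# Suspend to RAM: near-instant resume, small battery drain while asleep.
systemctl suspend

# Suspend to disk (hibernate): zero drain while off; resume reads the RAM
# image back from swap, which is where fast NVMe drives closed the speed gap.
systemctl hibernate
```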

          • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 7 months ago · +3

            S0 standby is borderline unusable on many PCs. On Apple Silicon Macs it’s damn near flawless.

            My current laptop is probably the last machine to support S3 standby, and I do not look forward to replacing it and being forced back into a laptop that overheats and crashes in my backpack in less than 15 minutes. On my basic T14 it works OK for the most part, but when my full-fat ThinkPad P1 with an i9 sits in S0 standby for longer than a few minutes, it sometimes uses more power than when it was fully on. Maybe Meteor Lake with its LP E-cores will fix this, but I doubt it.

            • voxel@sopuli.xyz · 7 months ago · edited · +2

              tbh it has been nearly flawless on Windows 11 for me with an AMD CPU

              (just make sure to disable automatic windows/defender updates unless you want to get woken up by jet turbine sounds in the middle of the night)

      • pycorax@lemmy.world · 7 months ago · +13/-1

        There’s nothing stopping x86-64 processors from being power efficient. This article is pretty technical but gives a really good explanation of why that’s the case: https://chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/

        It’s just that traditionally Intel and AMD earn most of their money from the server and enterprise sectors where high performance is more important than super low power usage. And even with that, AMD’s Z1 Extreme also gets within striking distance of the M3 at a similar power draw. It also helps that Apple is generally one node ahead.

        • SquiffSquiff@lemmy.world · 7 months ago · +5

          If there’s ‘nothing stopping’ it, then why has nobody done it? Apple moved from x86 to ARM. Mobile is all ARM. All the big cloud providers are doing their own ARM chips. Intel killed off much of the architectural competition with Itanic in the early 2000s. Why stop?

          • pycorax@lemmy.world · 7 months ago · +1

            Their primary money makers are what’s stopping them, I reckon. Apple’s move to ARM worked because they already had a ton of experience building their own in-house processors for their mobile devices. ARM also licenses stock chip designs, which makes it easier for other companies to come up with their own custom chips, whereas there really isn’t any equivalent for x86-64. There were some disagreements between Intel and AMD over patents on the x86 instruction set too.

        • QuaternionsRock@lemmy.world · 7 months ago · +2/-1

          This article fails to mention the single biggest differentiator between x86 and ARM: their memory models. Considering the sheer amount of everyday software that is going multithreaded, this is a huge issue, and the reason why ARM drastically outperforms x86 running software like modern web browsers.

          • pycorax@lemmy.world · 7 months ago · +2

            Do you mind elaborating on what it is about the difference in their memory models that makes a difference?

            • QuaternionsRock@lemmy.world · 7 months ago · +3

              Here is a great article on the topic. Basically, x86 spends a comparatively enormous amount of energy ensuring that its strong memory guarantees are not violated, even in cases where such violations would not affect program behavior. As it turns out, the majority of modern multithreaded programs only occasionally rely on these guarantees, and including special (expensive) instructions to provide these guarantees when necessary is still beneficial for performance/efficiency in the long run.

              For additional context, the special sauce behind Apple’s Rosetta 2 is that the M family of SoCs actually implement an x86 memory model mode that is selectively enabled when executing dynamically translated multithreaded x86 programs.

              • pycorax@lemmy.world · 7 months ago · +1

                Thanks for the links, they’re really informative. That said, it doesn’t seem entirely certain that the extra work done by the x86 arch incurs a comparatively huge difference in energy consumption. Granted, that isn’t really the point of the article. I would love to hear from someone more well-versed in CPU design on the impact of its memory model. The paper is more interesting with regard to performance, but I don’t find it very conclusive since it’s comparing ARM vs. TSO on an ARM processor. It does link this paper, which seems more relevant to our discussion, but it’s a shame that it’s paywalled.

            • sunbeam60@lemmy.one · 7 months ago · +2

              On the x86 architecture, main RAM belongs to the CPU, and the GPU pays a huge penalty when accessing it. That’s why GPUs carry their own onboard graphics memory.

              On ARM this is unified, so GPU and CPU can both access the same memory at the same penalty. This means a huge class of embarrassingly parallel problems can be solved quicker on this architecture.

              • pycorax@lemmy.world · 7 months ago · +1

                Do x86 CPUs with iGPUs not already use unified memory? I’m not exactly sure what you mean; are you referring to the overhead of copying data from CPU to GPU memory on discrete graphics cards when performing GPU calculations?

                • sunbeam60@lemmy.one · 7 months ago · +1

                  Yes, unified, but extremely slow compared to an ARM architecture’s unified memory, as the GPU sort of acts as if it were discrete.

                  • pycorax@lemmy.world · 7 months ago · +1

                    Do you have any sources for this? I can’t seem to find anything specific describing this behaviour. It’s quite surprising to me, since the Xbox and PS5 use unified memory on x86-64, and it would be strange if it were extremely slow for such a use case.

    • PeachMan@lemmy.world · 7 months ago · +10/-1

      I’m no expert, but I can tell you that Apple Silicon gave the new MacBooks insane battery life, and they run a lot cooler with less overheating. Intel really fucked up the processors in the 2015-2019 MacBooks, especially the higher-spec i7 and i9 variants. Those things overheat constantly. All Intel did was take existing architectures and raise the clock speeds. Apple really exposed Intel’s laziness by releasing processors that were just as performant in quick tasks, and they REALLY kicked Intel’s ass in sustained workloads, not because they were faster on paper, but simply because they didn’t have to thermal throttle after 2 minutes of work. Hell, the MacBook Air doesn’t even have any active cooling!

      I’m not saying these Snapdragon chips will do exactly the same thing for Windows PCs; obviously we can’t say that for sure yet. But if they do, it will be fucking awesome for end users.

    • partial_accumen@lemmy.world · 7 months ago · +7/-1

      If nothing else it breaks the stranglehold the 2.1 x86 licensees (Intel and AMD) have on the Windows market. It’s just that that market is much, MUCH smaller than it was 20 or 30 years ago.

        • atocci@kbin.social · 7 months ago · +9

          ARM is the licensor, not the licensee. At the very least, they are willing to license the ARM architecture to more companies (the licensees) than Intel is with x86. More RISC-V support would be ideal though for sure…

        • dustyData@lemmy.world · 7 months ago · +5

          Right? I’m much more excited to see RISC-V start to become more powerful, with more commercial hardware offerings to compete against the global tech brokers. We need a FOSS equivalent for hardware, or else our future privacy and ownership rights will forever be in jeopardy with info tech.

          • Richard@lemmy.world · 7 months ago · +1/-1

            RISC-V is just an ISA, the same as ARM and the other RISC and CISC architectures. There’s no guarantee that RISC-V hardware will be any freer than current CPUs, because the actual implementation and manufacturing are the job of the OEMs.

            • dustyData@lemmy.world · 7 months ago · edited · +1

              RISC-V is an open standard under an open and free license, which means it doesn’t require an expensive proprietary licensing fee. It’s the necessary development bed upon which open-source hardware can be created. Effectively, it has the potential to produce cheaper hardware: manufacturers get lower cost overhead, and whatever improvements they make to the designs can be used for free by other manufacturers.

              The RISC-V ISA is free and open with a permissive license for use by anyone in all types of implementations. Designers are free to develop proprietary or open source implementations for commercial or other exploitations as they see fit. RISC-V International encourages all implementations that are compliant to the specifications. […] There is no fee to use the RISC-V ISA. FAQ

              All the other ISAs, meanwhile, are proprietary standards that charge chip designers through the nose to even look at the specifications. Hence why there are so few chip manufacturers in the world.

    • WolfLink@lemmy.ml · 7 months ago · +4

      The idea is that ARM can be more efficient, which translates to longer battery life and/or faster computers for the end user.

      • sunbeam60@lemmy.one · 7 months ago · +1

        Let’s spend all that new-found battery life by translating x86 code to ARM code.

    • simple@lemm.ee · 7 months ago · +2

      You will definitely notice better battery life as an end user.