Note: Unfortunately, the link to the research paper in the article is dead. Perhaps the author will update it later.

From the limited coverage, it doesn’t sound like there’s an actual optical drive that uses this yet; it seems to be theoretical, based on the properties of the material the researchers developed.

I’m not holding my breath, but I would absolutely love to be able to back up my storage system to a single optical disc (even if tens of TBs go unused).

If they could make a R/W version of that, holy crap.

  • Yawweee877h444@lemmy.world · 10 months ago

    It’s “only” 125 TB. Still a lot, and impressive. But I just hate the stupid, click-baity “petabit” term. We use bytes (GB and TB) as the standard; just use the standard term, it’s impressive enough.
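The arithmetic behind the 125 TB figure is a one-liner:

```python
# 1 petabit expressed in decimal terabytes: 10**15 bits / 8 bits-per-byte / 10**12 bytes-per-TB.

def petabits_to_terabytes(pb: float) -> float:
    """Convert decimal petabits to decimal terabytes."""
    return pb * 10**15 / 8 / 10**12

print(petabits_to_terabytes(1))  # 125.0
```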

    • SinningStromgald@lemmy.world · 10 months ago

      But then the headline would have to say “Scientists Develop Optical Disc with a Measly 125 TB of Storage”

    • BassTurd@lemmy.world · 10 months ago

      I like to express my storage sizes in nibs. I think that makes this a 250 teranib disk.

    • cmhe@lemmy.world · 10 months ago

      IMO the whole byte business is pretty confusing; people should have just stuck with bits, because that avoids implementation details.

      One bit is the smallest amount of information. Bytes historically had different numbers of bits, depending on the architecture. With ASCII and the success of the 8-bit processor word of the Intel 8080/8085, a byte is now de facto 8 bits long.

      But personally, byte seems a bit (no pun intended) like the imperial measurement system.

    • Victor@lemmy.world · 10 months ago

      Agreed. Bits are used more commonly when talking about transfer speeds, and bytes regarding storage.

      • CleoTheWizard@lemmy.world · 10 months ago

        I feel like I’ve seen bits used for storage on the scientific level since stuff like the pits and lands on a disc are expressed that way. To anyone in CS, you’d regard storage as a discrete whole part in some way. So bytes are fine. But when you’re developing storage, I believe you’d be concerned about bit density. Would need to read the paper though.

        • Victor@lemmy.world · 10 months ago

          Sure, I did say commonly though. So for an article title in popular media you should probably use the common units to be as relatable as possible. But it’s whatever. I guess doing it this way gets people talking, eh.

      • buzz86us@lemmy.world · 10 months ago

        I just want this disc in a DVD-RAM format… It doesn’t have to be extremely fast, just readable and writable… I used to love DVD-RAM, until 4.25 GB became nothing.

      • cynar@lemmy.world · 10 months ago

        Data is stored in bytes (as the minimum size); it’s moved as a bitstream (a continuous flow, without regard to individual byte borders).

        Hence storage is measured in bytes, network connections are measured in bits/second.

      • Blackmist@feddit.uk · 10 months ago

        Are disks though?

        I think the last time I saw storage measured in bits was a SNES cartridge.

      • Jojo@lemm.ee · 10 months ago

        They’re not even measured in bits; they’re measured in bits per second. That’s like saying temperature is measured in calories.

        • yeehaw@lemmy.ca · 10 months ago

          We are talking about the size of a unit of data, not how much time elapses for whatever you’re talking about.

          There are 8 bits in a byte, regardless of whether you’re talking about 1 Mbps or 1 MB/s in a transfer-speed calculation.
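That fixed 8:1 ratio is all there is to converting between the two rate units; a minimal sketch:

```python
# Converting a link speed quoted in megabits per second to the
# megabytes per second a download dialog would show (decimal prefixes).

def mbps_to_mb_per_s(mbps: float) -> float:
    """Megabits/second -> megabytes/second."""
    return mbps / 8

print(mbps_to_mb_per_s(1))    # 0.125
print(mbps_to_mb_per_s(100))  # 12.5
```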

          • Jojo@lemm.ee · 10 months ago

            Storage is measured in bytes because data is stored in that form: an individual bit is meaningless, but a single byte is often significant. Network throughput is measured in bits per second because the time-density of data is the significant thing there, not the total number of bytes transmitted.

            There are 8 bits in a byte and there are 9 degrees Rankine in every 5 degrees Celsius, but if I told you the temperature for tomorrow in degrees Rankine, you would still think me weird for saying it that way and you might wonder what I was hiding.

            There are almost always dozens of units we could use to describe something, but it’s okay to call it out when someone phrases something unusually, as the original headline did.

            • yeehaw@lemmy.ca · 10 months ago

              I never claimed disks should be measured in bytes… And still with this per second thing which has no bearing on this. How data is stored is irrelevant to how it’s measured in transit. That’s kind of like saying kilometers are measured in kilometers per hour, but a drag strip is a quarter mile. So you’ve lost me on whatever point you’re trying to make there.

              • Jojo@lemm.ee · 10 months ago

                The original comment in this thread was about how the article lists the capacity of this experimental disk in bits, and posited that bytes are the usual unit to use.

                The next comment was about how networks are measured in bits.

                So my replies since then have been about two points, first that bits are still inappropriate to use here even if networks use them, and second that networks use bits per second, which is a different unit than bits.

                That’s kind of like saying kilometers are measured in kilometers per hour, but a drag strip is a quarter mile

                It’s more like saying speed is measured in kilometers per hour rather than kilometers (point 2) while also saying that the country we’re talking about measures distance in miles usually (point 1).

    • rockSlayer@lemmy.world · 10 months ago

      Pb is still a standard measurement. While it’s not typical to use petabits instead of TB for data storage, it’s still a recognized unit.

      • Yawweee877h444@lemmy.world · 10 months ago

        Yeah “standard” was a poorly chosen word. I meant common, as bytes are much more commonly used for disk storage.

    • DacoTaco@lemmy.world · 10 months ago

      Gigabytes, or gibibytes? Yes, gibibyte is a thing.
      As much as I hate to say it, due to marketing fuckery the usage of “byte” has ruined it all: a 2 TB drive is not 2 * 1024 * 1024 * 1024 * 1024 bytes but instead 2 decimal terabytes ( 2 * 1,000,000,000,000 bytes ).

      Then comes the discussion of whether “1KB” is 1024 bytes or 1000 bytes. If you ask me, 1KB is 1024 bytes. If you ask the people using the kibibyte system, 1KB is 1000 bytes…

      Shit’s fucking complex and fucked up. Can’t go wrong if you say it in bits, though.
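The gap described above is easy to quantify; a small sketch of why a drive sold as “2 TB” shows up smaller in binary units:

```python
# A drive marketed as 2 TB holds 2 * 10**12 bytes; an OS reporting in
# binary units divides by 2**40 to get tebibytes (TiB).

def marketed_tb_to_tib(tb: float) -> float:
    """Decimal terabytes (marketing) -> binary tebibytes (many OS tools)."""
    return tb * 10**12 / 2**40

print(f"{marketed_tb_to_tib(2):.2f} TiB")  # 1.82 TiB
```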

      • Flumpkin@slrpnk.net · 10 months ago

        Then comes the discussion if “1KB” is 1024 bytes or if 1000 bytes is a kilobyte.

        It’s the metric system, and it’s the standard now. 1 kilobyte is 1000 bytes, just like 1 kilometer is 1000 meters. It is much easier to convert: 20,415,823 bytes is 20.4 MB.

        Only Windows insists on mislabeling the base-1024 kibibyte as a kilobyte. The metric unit is much easier to use.
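For what it’s worth, that conversion works out like this (the example figure comes to about 20.4 MB metric, or 19.5 MiB binary):

```python
# The same byte count under both conventions.
n = 20_415_823
print(f"{n / 10**6:.1f} MB")   # 20.4 MB  (metric, divide by 10**6)
print(f"{n / 2**20:.1f} MiB")  # 19.5 MiB (binary, divide by 2**20)
```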

        • mb_@lemm.ee · 10 months ago

          What? Every BIOS in the world still uses the same system. Same thing for me on Linux.

          Only hard drive manufacturers used a different system to inflate their numbers and pushed a marketing campaign; a lot of people who didn’t even use computers said “oh, that makes sense. Approved.”

          People who actually work with computers, memory, CPUs, and other components in base 2 just ignore this “×1000” nonsense.

        • DacoTaco@lemmy.world · 10 months ago

          I never knew the whole thing was considered part of the metric system; makes sense though.
          I love the metric system to death because it’s so simple and easy, and it links different measurements together ( 1 l of water = 1 kg, etc. ).

          That said, a computer works differently: because we work in powers of 2, 1000 bytes being a kilobyte makes no sense once you start working with bits and low-level stuff. Other than that, I can see why the stuff was redefined.

          Also, I think Linux also works in factors of 1024, but I’d need to check.

          • Flumpkin@slrpnk.net · 10 months ago

            There is nothing to keep you from using factors of 1024 (except the slightly ludicrous prefixes “kibi” and “mebi”), but outside low-level stuff like disc sectors or BIOS work, where you might want to use bit logic instead of division, it’s rather rare. I too started in the time when a division op was more costly than bit-level logic.

            I’d argue that any user-facing application is better off with base 1000, except by convention. The majority of users don’t know, care, or need to care what bits or bytes do. It’s programmers who like the beauty of the bit logic, not users.

            • DacoTaco@lemmy.world · 10 months ago

              I agree with what you said, and IMO it’s why the discussion of factor 1000 vs. factor 1024 will always rage on. I’m a developer and do embedded stuff in my free time. Everything around me is factor 1024 because of it, and I hate factor 1000. But from a generic user’s standpoint, I agree it’s a lot more user friendly, as they’re used to the metric system’s factors of 10.

              • mb_@lemm.ee · 10 months ago

                It is user friendly, and technically incorrect, since nothing ever lines up with reality when you use 1000, because the underlying system is base 2.

                Or you get weird nonsense all around: “my computer has 18.8 GB of memory”…

    • odelik@lemmy.today · 10 months ago

      Petabit/byte is not a buzzword.

      We use bits, megabits, terabits, and petabits fairly standardly in tech.

      That’s not to be confused with bytes, megabytes, terabytes, and petabytes. Server farms will contain Petabytes (PB) of data.

      Technically there are also exabits/bytes, zettabits/bytes, and yottabits/bytes as we continue to climb the chain of technical capabilities. It’s estimated that the internet overall holds nearly 200 zettabytes (ZB) of information in 2024.

      • Yawweee877h444@lemmy.world · 10 months ago

        I will refrain from using the word “standard”, but when it comes to data storage the most common terminology is in bytes; as I said, TB (terabytes), GB, etc. Saying Pb (petabits) isn’t as common, and it’s gimmicky IMO when referring to a new disc storage technology. 125 TB is impressive enough without having to throw the “peta” in there.

        • odelik@lemmy.today · 10 months ago

          Researchers and low-level technology engineers tend to work in bits. I don’t have access to the full journal publication to verify, but it’s likely that the paper used that number and that the Gizmodo author/editor who chose the title just didn’t bother converting it to more “consumer friendly” terms.

          However, the author did boast that it would be “125,000,000 GB!” So I’m going to go with this being an AI-written article that doesn’t really know what a technology reader would actually prefer to see.

          • KairuByte@lemmy.dbzer0.com · 10 months ago

            An LLM would absolutely know what the average reader would prefer to see, that’s kinda their whole schtick.

            • otp@sh.itjust.works · 10 months ago

              The average (non-technical) reader would prefer to ~~see~~ click on the bigger number

      • hperrin@lemmy.world · 10 months ago

        I don’t think there are any storage media that advertise their capacity in *bits though.

    • mipadaitu@lemmy.world · 10 months ago

      Bits are probably more useful when talking about specialized storage. A byte usually means 8 bits, but doesn’t always have to, and not all data is stored in byte-sized chunks.

      A bit is the smallest usable piece of data, so it makes sense when discussing this technology.

      • Yawweee877h444@lemmy.world · 10 months ago

        Sorry to be that guy, but in this context a byte is strictly defined as 8 bits, never anything else. It’s a strict definition in digital systems.

        • davidgro@lemmy.world · 10 months ago

          While I strongly agree with the idea behind your comment and gave you an upvote, at the physical layer it’s not strictly true - especially for optical discs. See https://en.wikipedia.org/wiki/Eight-to-fourteen_modulation for example.

          That said, capacity listings should always be the capacity of the data that can be stored and retrieved as seen by the user, and that data would be in 8-bit bytes.
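The overhead behind that point can be sketched with the figures from the linked article (14 code bits plus 3 merging bits per 8-bit byte on a CD; treat the constants as an assumption taken from that page):

```python
# Channel-bit overhead of eight-to-fourteen modulation on a CD:
# each 8-bit user byte is written as 14 code bits + 3 merging bits.
CHANNEL_BITS_PER_BYTE = 14 + 3

def channel_bits(user_bytes: int) -> int:
    """Physical channel bits needed for a given number of user bytes."""
    return user_bytes * CHANNEL_BITS_PER_BYTE

# Ratio of physical bits on the disc to logical bits seen by the user:
print(channel_bits(1) / 8)  # 2.125
```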

        • Prizephitah@feddit.nu · 10 months ago

          That’s not true either. Byte counts can use both powers of 10 and powers of 2. When talking about storage devices like hard drives, we usually refer to them in powers of 10, but OSes usually do it in powers of 2. That’s why your hard drive looks smaller than advertised.

          Bits are used for flash memory as individual chips. Assembled devices such as RAM and memory cards are advertised in bytes. I imagine the same goes for hard drive platters, and possibly disc media as well.

          • Setarkus.LW@lemmy.world · 10 months ago

            A byte in this context always means 8 bits, though; it has nothing to do with powers of 10 or 2. The prefix of K (kilo), M (mega), G (giga) or Ki (kibi), Mi (mebi), Gi (gibi) doesn’t change the meaning of “byte”.

            • PlexSheep@feddit.de · 10 months ago

              Yes, this is right. The confusion may be between binary and metric prefixes.

              For example:

              Kibibyte (1024 bytes) vs. kilobyte (1000 bytes).
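A tiny helper puts the two conventions side by side; a sketch, assuming you want both printed together:

```python
# Format a byte count under both the metric (kilo = 1000) and
# binary (kibi = 1024) prefix conventions.

def both_prefixes(n_bytes: int) -> str:
    return (f"{n_bytes / 1000:.2f} KB (kilobytes) = "
            f"{n_bytes / 1024:.2f} KiB (kibibytes)")

print(both_prefixes(1024))  # 1.02 KB (kilobytes) = 1.00 KiB (kibibytes)
```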

  • TwilightVulpine@lemmy.world · 10 months ago

    I so wish we had some affordable, high-density storage technology that we could record to and then forget in the attic for 20 years.

  • thehatfox@lemmy.world · 10 months ago

    Research is one thing; getting from concept to production is another. There was a lot of hype about holographic disc formats years ago, promising capacities from 100 GB to several TB, but they never actually made it to market.

    With the ongoing “death” of physical media playing out in the consumer space, it will also probably be hard for these esoteric disc formats to attract the investment needed to develop them. There might be some enterprise interest if the tech is stable enough for archival use I suppose.

    • Admiral Patrick@dubvee.org (OP) · 10 months ago

      I could see it easily replacing tape libraries as backup devices in data centers. Without the economies of scale like we saw with DVD-RW, I doubt I’d be able to afford one until they hit the secondhand market. It would also be interesting to see something like that integrated into storage appliances which would let you have something approaching an on-prem version of Amazon’s Glacier tier.

  • 👍Maximum Derek👍@discuss.tchncs.de · 10 months ago

    If they could make a R/W version of that, holy crap.

    If those turn up at any sort of reasonable cost, it would simplify my home backups so much. I only have about 14TB currently on my NAS (including workstation backups) but even at that size backups are a problem. The irreplaceable stuff (about 3TB worth) is backed up in the cloud. My ripped DVDs/BRDs would all have to be reripped, other stuff I’d just have to find again or live without. I’ve been looking at the advancements being made in tape drives, but those are all priced for business.

    • Neato@ttrpg.network · 10 months ago

      I honestly can’t think of a commercial application for disseminating that much data outside of industrial use. Well, in a few years I guess video games will get that big, but other than that…

      • 👍Maximum Derek👍@discuss.tchncs.de · 10 months ago

        Possibly. I honestly haven’t looked at the ecosystem in a few years. Back then BB’s plan structures would have forced me into their business solution and bill over $800/yr - more than 8x my current backup costs.

        • qaz@lemmy.world · 10 months ago

          You could also go for cold storage; Scaleway Glacier costs €28/month. Or consider a BX31 (10 TB) Hetzner storage box for €25/month if you don’t need everything backed up but do want quick retrieval times.

          • 👍Maximum Derek👍@discuss.tchncs.de · 10 months ago

            Since I work with AWS daily, Glacier was my first attempt. Glacier + Synology hyperbackup proved too fragile for my needs. I ended up needing to rebuild my archive about once a year or so. Then I had to choose between an expensive and time consuming cleanup, or paying for multiple copies.

  • hruzgar@feddit.de · 10 months ago

    They don’t want us (consumers) to own anything. The world will turn upside down before this gets released to consumers.

    • otp@sh.itjust.works · 10 months ago

      A big part of the problem is that most consumers don’t want to own things either. Subscriptions are exactly what too many people want.

      • TwilightVulpine@lemmy.world · 10 months ago

        I think even that goes back around to business interests. We can’t store that many physical copies in shrinking, expensive housing. Purchasable digital media is somehow just as expensive despite having tiny manufacturing and logistics costs, on top of being unreliable due to DRM.

        Subscriptions so far seemed like a better value proposition, but between splitting and vanishing libraries, increasing prices, and the addition of ads, that’s becoming more questionable. Even average people aren’t so thrilled about having to subscribe to a dozen different services to watch, listen to, and play what they want.

  • Flumpkin@slrpnk.net · 10 months ago

    That would be amazing! You could store the entire 450 TB of ebooks in Anna’s Archive on 4 of those discs!!!

  • Toes♀@ani.social · 10 months ago

    The longer I live, the more it feels like I’m living in the Star Trek timeline.

    • gregorum@lemm.ee · 10 months ago

      Keep in mind that that timeline predicts World War III in 2026. It gets very bad for a long time before it gets better.

        • TimeSquirrel@kbin.social · 10 months ago

          Timeline was altered. They are now set to occur between 2022 and 2056.

          (No, I had nothing to do with it…😋)

            • gregorum@lemm.ee · 10 months ago

              Yeah, in a very odd and confusing episode of SNW, they retconned the bananas out of it. And I particularly dislike that they made me feel bad for KiddieKhan, annoyer of Kirk, wielder of ear eels, and creator of world.

              Oh, and in the background shot of 2022 Toronto, I definitely saw a squirrel there. So I’m calling you on your bullshit. You’re actually Wesley Crusher: traveler and squirrel-shaped shapeshifter. J’accuse!

  • AutoTL;DR@lemmings.world (bot) · 10 months ago

    This is the best summary I could come up with:


    Scientists from the University of Shanghai for Science and Technology just figured out how to fit up to a petabit of data onto an optical disc by storing information in 3D.

    Well, you can kiss those puny discs goodbye, thanks to a new technique that can read and write up to 100 layers of data in a space of just 54 nanometres, as described in a new paper published in the journal Nature.

    “This could greatly reduce the footprint as well as the energy consumption of the future big data centers, providing a sustainable solution for the digital economy,” said Min Gu, a professor at the University of Shanghai for Science and Technology, and one of the paper’s co-authors.

    The technique required the researchers to develop a brand new material, which has the easy-to-remember name dye-doped photoresist with aggregation-induced emission luminogens, or AIE-DDPR if you’re in a hurry.

    AIE-DDPR is a highly uniform and transparent film that lets researchers blast it with lasers at the nanoparticle scale with precision, allowing for an unprecedented storage method.

    Shrinking the size and scale of data storage could have huge implications, not just for the business of the internet but also for the environmental footprint of the tech industry.


    The original article contains 338 words, the summary contains 206 words. Saved 39%. I’m a bot and I’m open source!