• Blackmist@feddit.uk · 8 months ago

      I fear that will only happen when storage manufacturers are forced to use 1024 bytes per KB like everyone else.

      In fairness, it’s a very long-standing tradition that serial transfer devices measure speed in bits per second rather than bytes. Bytes used to be variable in size, although we settled on eight bits a long time ago.
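
      A quick way to see the factor involved (a minimal sketch of my own, assuming 8-bit bytes and decimal prefixes; the 1 Gbit/s figure is just an example):

      ```python
      # Converting a serial link speed quoted in bits per second to bytes per
      # second, assuming 8-bit bytes and decimal (SI) prefixes.
      link_speed_bps = 1_000_000_000               # 1 Gbit/s, as typically marketed
      bytes_per_second = link_speed_bps / 8        # 125,000,000 B/s
      print(f"{bytes_per_second / 1e6:.0f} MB/s")  # -> 125 MB/s
      ```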

      • pafu@feddit.de · 8 months ago

        1024 bytes per KB

        Technically, it’s 1000 bytes per KB and 1024 bytes per KiB. Hard drive manufacturers are simply using a different unit.
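
        For instance (an illustrative sketch, not anything from the thread), the same byte count comes out differently under the two prefixes:

        ```python
        # The same data measured with decimal (KB) and binary (KiB) prefixes.
        data_len = 4096          # bytes
        print(data_len / 1000)   # 4.096 KB  (kilobytes, decimal)
        print(data_len / 1024)   # 4.0   KiB (kibibytes, binary)
        ```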

      • AProfessional@lemmy.world · 8 months ago

        Base 10 is correct and easier for humans to understand. Nearly everything uses it except Windows and older tools: macOS, Android (AOSP), etc. all report sizes in base 10.

        • Blackmist@feddit.uk · 8 months ago

          Found the hard drive manufacturer.

          It’s 1024. It’s always been 1024. It’ll always be 1024.

          Unless of course we should start using 17.2GB RAM sticks.
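
          (For what it’s worth, that 17.2GB figure checks out if you assume a 16 GiB stick relabelled in decimal gigabytes; a quick sketch, not from the comment itself:)

          ```python
          # Where 17.2GB comes from: a 16 GiB RAM stick expressed in decimal GB.
          ram_bytes = 16 * 1024**3    # 17,179,869,184 bytes
          print(ram_bytes / 1000**3)  # ~17.18 decimal GB, i.e. "17.2GB"
          ```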

          • QuaternionsRock@lemmy.world · 8 months ago

            There’s a conflict between the linguistic and practical implications here.

            “kilo-” means 1,000 everywhere. 1,000 is literally the definition of “kilo-”. In theory, it’s a good thing we created “kibi-” to mean 2^10 (1024).

            Why does everyone expect a kilobyte to be 1024 bytes, then? Because “kibi-” didn’t exist yet, and some dumb fucking IBM(?) engineers decided that 1,024 was close enough to 1,000 and called it a day. That legacy carries over to today, where most people expect “kilo-” to mean 1024 within the context of computing.

            Since product terminology should generally match what the end-user expects it to mean, perhaps we should redefine “kilobyte” to mean 1024 bytes. That runs into another problem, though: if we change it now, when you look at a 512GB SSD, you’ll have to ask, “512 old gigabytes or 512 new gigabytes?”, arguably creating even more of a mess than we already have. That problem is why “kibi-” was invented in the first place.
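
            To put a number on that ambiguity (an illustrative sketch; the 512 figure is just the one from the example above):

            ```python
            # The two possible readings of "512 GB" and how far apart they are.
            decimal_512_gb = 512 * 1000**3          # 512,000,000,000 bytes
            binary_512_gib = 512 * 1024**3          # 549,755,813,888 bytes
            print(binary_512_gib - decimal_512_gb)  # 37,755,813,888 bytes (~7.4% gap)
            ```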

            • Semi-Hemi-Lemmygod@lemmy.world · 8 months ago

              It’s not just the difference between kilo- and kibi-. It’s also the difference between bits and bytes. A kilobit is only 125 eight-bit bytes, whereas a kilobyte is 8,000 bits.
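
              The factor of eight is easy to sanity-check (a small sketch, assuming 8-bit bytes and decimal kilo-):

              ```python
              # Bits vs bytes at the kilo- scale, assuming 8-bit bytes.
              print(1000 / 8)   # a kilobit  is 125 bytes
              print(1000 * 8)   # a kilobyte is 8,000 bits
              ```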

        • Blaster M@lemmy.world · 8 months ago

          Computers run on binary, base 2. 1000 vs 1024: one lines up with binary addressing (2^10), the other does not.
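
          A tiny illustration of that alignment (my own sketch, nothing more):

          ```python
          # 1024 is an exact power of two; 1000 is not.
          print(bin(1024), 1024 == 2**10)   # 0b10000000000 True
          print(bin(1000), 1000 == 2**10)   # 0b1111101000 False
          ```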

          • AProfessional@lemmy.world · 8 months ago

            That’s an irrelevant technical detail for modern storage. We regularly deal in billions or trillions of bytes, and the world has mostly standardized on base 10 for large numbers because it’s easy to understand and convert.

            Literally all of the devices I own use this.