• 0 Posts
  • 255 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • Probably ~15TB through file-level syncing tools (rsync or similar; I forget exactly what I used), just copying my internal RAID array to an external HDD. I’ve done this a few times, either for backup purposes or to prepare to reformat my array. I originally used ZFS on the array, but converted it to something with built-in kernel support a while back because it got troublesome when switching distros. Might switch it to bcachefs at some point.

    With dd specifically, maybe 1TB? I’ve used it to temporarily back up my boot drive on occasion, on the assumption that restoring my entire system that way would be simpler in case whatever I was planning blew up in my face. Fortunately never needed to restore it that way.
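A dd-based image backup looks roughly like this. The demo below copies a throwaway file rather than a real device; with an actual boot drive you would read from something like /dev/sda while it is unmounted:

```shell
SRC_IMG=$(mktemp)   # stands in for the boot device, e.g. /dev/sda
BACKUP=$(mktemp)

# 1 MiB of stand-in "disk" contents
head -c 1048576 /dev/urandom > "$SRC_IMG"

# bs=4M: larger block size for throughput
# conv=fsync: flush to disk before dd exits
dd if="$SRC_IMG" of="$BACKUP" bs=4M conv=fsync status=none

# Restoring is the same command with if= and of= swapped.
cmp -s "$SRC_IMG" "$BACKUP" && echo "backup is bit-identical"
```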





  • YES.

    And not just the cloud, but internet connectivity and automatic updates on local machines, too. There are basically a hundred “arbitrary code execution” mechanisms built into every production machine.

    If it doesn’t truly need to be online, it probably shouldn’t be. Figure out another way to install security patches. If it’s offline, you won’t need to worry about them half as much anyway.
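One hedged pattern for the "another way": fetch packages on a connected machine and carry them over on removable media. This assumes a Debian/Ubuntu system; the cache path is apt's standard location, but the USB mount point is made up:

```shell
# On the internet-connected machine:
#   apt-get update
#   apt-get --yes --download-only upgrade   # .debs land in /var/cache/apt/archives
#   cp /var/cache/apt/archives/*.deb /media/usb/patches/
#
# On the air-gapped machine:
#   dpkg -i /media/usb/patches/*.deb
```

Dependency resolution happens on the connected side, which is the point: the offline box never needs a network stack listening for anything.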



  • Both.

The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. That is a legitimate reason to choose Nvidia, and if you have an Nvidia card you’ll want CUDA acceleration working for any compatible ML workloads.

    The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You’ll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
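A concrete taste of the version-pinning dance. The version tag and index URL below are illustrative assumptions, not a recommendation; the pattern is matching a framework wheel to the CUDA release it was compiled against:

```shell
# Install a PyTorch wheel built against a specific CUDA release (cu121
# here is an example tag for CUDA 12.1):
#   pip install torch --index-url https://download.pytorch.org/whl/cu121
#
# Then verify which CUDA runtime the wheel was compiled against, and
# whether the GPU is actually visible to it:
#   python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```

If torch.version.cuda prints None, you got a CPU-only build, and no amount of driver fiddling will fix it.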

    That said, AMD’s equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.

    The easy way is to just use OpenCL. But that’s not going to give you the best performance, and it’s not going to be compatible with everything out there.






  • Short answer: Enterprise bullshit and Adobe.

    On the home computing side, I can’t think of much that has specific OS requirements besides gaming and DRM’d 4K streaming. For better or worse, most desktop apps nowadays are glorified web sites. It’s a different world today than it was 20 years ago.

    On the enterprise side, nah. Way too many vendors with either no Linux support or shitty Linux support.

    Microsoft is working hard to shove “New Outlook” down everyone’s throats despite still not having feature parity with old Outlook. Nobody in my company will want to use it until it is forced because we need delegated and shared calendars to actually work. And then there’s the “you can take my 80GB .pst files when you pry them from my cold dead hands” crowd. Advanced Excel users are not happy with the web version either, and I don’t blame them.


  • GenderNeutralBro@lemmy.sdf.org to Linux@lemmy.ml · LindowsOS, 2001 · 15 points · 2 months ago

    Its gimmick was compatibility with Windows apps and an easy transition for Windows users. It didn’t really live up to that promise. Wine was not nearly as mature then as it is today, and even now it would be pretty bold to present any Linux distro as Windows-compatible.