• xlash123@sh.itjust.works · 21 points · 9 months ago

    How do you measure how much CPU time a program needs?

    While I have no specific examples, that is the job of the scheduling algorithm. The kernel is responsible for looking at the running processes and figuring out how to assign them CPU time efficiently. That decision can take a variety of metrics into account, such as past behavior, whether a process is I/O-blocked, its priority, etc.

    There is no perfect scheduling algorithm. Each one makes tradeoffs depending on what the system’s priorities are.
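
    To make that a bit more concrete, here is a toy sketch in C of a scheduler weighing a few of those metrics when picking the next process to run. The struct and fields are made up for illustration; no real kernel looks like this.

    ```c
    #include <stddef.h>
    #include <stdbool.h>

    /* Hypothetical process descriptor -- not any real kernel's struct. */
    struct task {
        int  priority;    /* lower value = more important             */
        bool io_blocked;  /* waiting on disk/network, so not runnable */
        long recent_cpu;  /* rough measure of recent CPU usage        */
    };

    /* Pick the next task to run: skip blocked tasks, prefer higher
     * priority, and among equals prefer whichever has used the CPU
     * least recently. */
    struct task *pick_next(struct task *tasks, size_t n)
    {
        struct task *best = NULL;
        for (size_t i = 0; i < n; i++) {
            struct task *t = &tasks[i];
            if (t->io_blocked)
                continue;
            if (best == NULL ||
                t->priority < best->priority ||
                (t->priority == best->priority &&
                 t->recent_cpu < best->recent_cpu))
                best = t;
        }
        return best; /* NULL means everything is blocked: the core can idle */
    }
    ```

    Real schedulers keep runnable tasks in sorted data structures (Linux’s CFS uses a red-black tree, for instance) rather than scanning a flat array on every switch, but the idea of weighing several metrics is the same.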

    Also, you don’t have to relinquish all control to the process if you have multiple cores. Even if you do, I believe the running process is interrupted after some time (by a hardware timer) so that the kernel always gets a chance to check in, but again, it depends on the implementation.
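
    The “check in” part is usually driven by a periodic hardware timer interrupt. A minimal sketch of the idea, with made-up names:

    ```c
    #include <stddef.h>

    #define DEFAULT_SLICE 10   /* ticks a task may run before being preempted */

    struct task { int time_slice; };

    static struct task *current;  /* task running on this core right now */

    void schedule(void);          /* picks another task and context-switches */

    /* Invoked by the periodic hardware timer no matter what the running
     * task is doing -- this is how the kernel always gets to check in. */
    void timer_tick(void)
    {
        if (current != NULL && --current->time_slice <= 0) {
            current->time_slice = DEFAULT_SLICE;
            schedule();
        }
    }
    ```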

    How does the OS even yank the CPU away from the currently running process?

    That is called context switching. Simplified, a process is a list of instructions and a bundle of memory. The memory is composed of RAM and CPU registers (again, simplified). The process’s memory can stay in the same spot in RAM, but the registers need to move out of the way so another process can take their place. When a process is yanked away, the state of its registers is snapshotted and stored in RAM managed by the kernel. That lets another process be put on the core without losing any important data from the first one. To resume the paused process, you just restore the registers to the snapshotted state and have the core execute its next instruction; the process is none the wiser.

    Of course, there’s a lot more that happens internally, but that’s the main gist.
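
    For a rough picture of what that register snapshot looks like, here is a hand-wavy C sketch. The actual save/restore is architecture-specific assembly, and every name below is invented for illustration:

    ```c
    #include <stdint.h>

    /* Hypothetical snapshot of the registers a task needs preserved.
     * The real layout is architecture-specific. */
    struct cpu_context {
        uint64_t rip;          /* next instruction to execute */
        uint64_t rsp;          /* stack pointer               */
        uint64_t general[16];  /* general-purpose registers   */
    };

    struct task {
        struct cpu_context ctx; /* where this task's registers live
                                   while it is off the CPU */
    };

    /* Assumed to be written in assembly: dump the live registers into
     * 'save', load the ones in 'restore', and keep executing wherever
     * restore->rip points. */
    extern void switch_registers(struct cpu_context *save,
                                 const struct cpu_context *restore);

    void context_switch(struct task *prev, struct task *next)
    {
        /* prev's registers are snapshotted into kernel-managed memory,
         * next's are loaded back in; when prev runs again it resumes
         * right here as if nothing happened. */
        switch_registers(&prev->ctx, &next->ctx);
    }
    ```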

    • Captain Janeway@lemmy.world · 5 up / 1 down · 9 months ago

      Is there a perfect scheduler that is non-optimal in the big-O sense but optimal if you’re looking at maximizing hardware utilization? In other words, a scheduler that takes a long time to decide how much CPU each process gets, but provides an optimal total CPU utilization? I realize it would not be ideal in practice, since we’d essentially get “sudden stops” while it recalculates the schedule. I’m just interested in the theory.

      • myslsl@lemmy.world · 4 points · edited · 9 months ago

        If you have a fixed collection of processes to run on a single processor and unlimited time to schedule them in, you can always brute force all permutations of the processes and then pick whichever permutation maximizes and/or minimizes whatever property you like. The problem with this approach is that it has awful time complexity.

        Edit: There are probably other subtle issues that can arise, like I/O interrupts and other weird events, fwiw.
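
        To make the idea concrete, here is what that brute force could look like in C for one toy metric (total waiting time on a single CPU). The job lengths are invented, and interrupts/I/O are ignored per the caveat above:

        ```c
        #include <stdio.h>
        #include <limits.h>

        /* Brute force a toy metric: total waiting time of a fixed set of
         * jobs on one CPU. Trying every ordering is factorial in the
         * number of jobs, which is the whole point. */
        #define N 4
        static int  burst[N] = {8, 3, 5, 1};   /* made-up job lengths */
        static int  order[N] = {0, 1, 2, 3};   /* current permutation */
        static int  best_order[N];
        static long best_wait = LONG_MAX;

        static long total_wait(const int *perm)
        {
            long wait = 0, elapsed = 0;
            for (int i = 0; i < N; i++) {
                wait += elapsed;          /* this job waited for all before it */
                elapsed += burst[perm[i]];
            }
            return wait;
        }

        static void permute(int k)   /* try every ordering of order[k..N-1] */
        {
            if (k == N) {
                long w = total_wait(order);
                if (w < best_wait) {
                    best_wait = w;
                    for (int i = 0; i < N; i++) best_order[i] = order[i];
                }
                return;
            }
            for (int i = k; i < N; i++) {
                int tmp = order[k]; order[k] = order[i]; order[i] = tmp;
                permute(k + 1);
                tmp = order[k]; order[k] = order[i]; order[i] = tmp;
            }
        }

        int main(void)
        {
            permute(0);
            printf("best total wait: %ld, order:", best_wait);
            for (int i = 0; i < N; i++) printf(" %d", best_order[i]);
            printf("\n");
            return 0;
        }
        ```

        For this particular metric the search just rediscovers shortest-job-first, but the same loop works for any property you can compute on an ordering, which is exactly why it is both completely general and hopelessly slow.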

      • kbotc@lemmy.world · 3 points · 9 months ago

        How would you deal with I/O waits in a system like that? I can easily burn 100% of the CPU time running poll() in a loop, but that’s not useful work…
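
        To spell out the difference I mean, compare busy-polling against a blocking call on the same (hypothetical) file descriptor:

        ```c
        #include <poll.h>

        /* Spins at 100% CPU: a zero timeout makes poll() return
         * immediately, so the loop never yields the core even though
         * no useful work is happening. */
        void busy_wait(int fd)
        {
            struct pollfd p = { .fd = fd, .events = POLLIN };
            while (poll(&p, 1, 0) == 0)
                ;
        }

        /* Uses almost no CPU: a negative timeout makes the process sleep
         * in the kernel until the fd is readable, so the scheduler only
         * hands it the core again when there is real work to do. */
        void block_until_ready(int fd)
        {
            struct pollfd p = { .fd = fd, .events = POLLIN };
            poll(&p, 1, -1);
        }
        ```

        The first loop looks 100% “utilized” to anything that only counts CPU time, which is why raw utilization alone is a poor thing to optimize for.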