• datelmd5sum@lemmy.world · 2 months ago

    We’re hitting logarithmic scaling with model training: GPT-5 is going to cost 10x more than GPT-4 to train, but are people going to pay $200/month for a GPT-5 subscription?
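
    A back-of-envelope sketch of the “logarithmic scaling” point, assuming a Kaplan-style power law for loss vs. training compute (the exponent here is illustrative, not a figure from this thread):

        L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05

        \frac{L(10C)}{L(C)} = 10^{-\alpha} \approx 0.89

    So 10x the training compute buys only about 11% lower loss; equivalently, loss improves linearly in \log C, which is why returns look logarithmic while costs grow exponentially.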

    • Skates@feddit.nl · 2 months ago

      Is it necessary to pay more, or is it enough to just pay for more time? If the product is good, it will be used.

    • Madis@lemm.ee · 2 months ago

      But it would use less energy afterwards, right? At least that was the claim with the 4o model, for example.

      • fuck_u_spez_in_particular@lemmy.world · 2 months ago

        4o is also not really much better than 4; they likely optimized it, among other things, by reducing the model size. IME the “intelligence” has somewhat degraded over time. Also, a bigger model (which in the past was the deciding factor for better intelligence) needs more energy, and GPT-5 will likely be much bigger than 4 unless they somehow make a breakthrough with the training/optimization of the model…
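
        A rough sketch of the model-size/energy link this leans on, using the standard estimate of about 2 FLOPs per parameter per generated token for dense transformer inference (N_4 and N_5 are hypothetical parameter counts, not known figures):

            \text{FLOPs per token} \approx 2N

            \frac{E_{\text{GPT-5}}}{E_{\text{GPT-4}}} \approx \frac{N_5}{N_4} \quad \text{(at fixed hardware efficiency)}

        So absent sparsity (e.g. mixture-of-experts) or other inference tricks, energy per token grows roughly linearly with parameter count.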

        • hglman@lemmy.ml · 2 months ago

          4o is an optimization of the model evaluation (inference) phase. The loss of intelligence comes from the addition of more and more safeguards and constraints, via adjunct models doing fine-tuning, or just rules that limit whole classes of responses.
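
          A purely hypothetical sketch of the “rules that limit whole classes of responses” pattern — a guardrail gating the model call. The topic names, the classify_topic heuristic, and the generate callable are all made up for illustration, not anything OpenAI has documented:

              # Hypothetical guardrail: block whole classes of requests
              # before they ever reach the underlying model.
              BLOCKED_TOPICS = {"medical_advice", "legal_advice"}  # illustrative only

              def classify_topic(prompt: str) -> str:
                  # Stand-in for an adjunct classifier model; here, a crude keyword check.
                  if "diagnose" in prompt.lower():
                      return "medical_advice"
                  return "general"

              def guarded_generate(prompt: str, generate) -> str:
                  # `generate` is any callable mapping a prompt to a completion.
                  if classify_topic(prompt) in BLOCKED_TOPICS:
                      return "Sorry, I can't help with that."
                  return generate(prompt)

          Each such gate trims the response space, which is one way perceived capability can degrade even when the base model is unchanged.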

    • GeneralInterest@lemmy.world · 2 months ago

      Businesses might pay big money for LLMs to do specific tasks. And if chip makers invest more in NPUs, then maybe LLMs will become cheaper to train. But I’m just speculating; I don’t have any special knowledge of this area whatsoever.