• Pennomi@lemmy.world
    +56 / −1 · 7 months ago

    There’s already more than enough training data out there. The important thing that remains is to filter it so it doesn’t also include humanity’s stupidest data.

    That and make the algorithms smarter so they are resistant to hallucination and misinformation - that’s not a data problem, it’s an architecture problem.

    • FaceDeer@fedia.io
      +19 · 7 months ago

      Stupid data can be useful for training as a negative example. Image generators use negative prompts to good effect.
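      As a sketch of the mechanism: image generators typically apply a negative prompt through classifier-free guidance, steering the denoiser's prediction toward the positive prompt and away from the negative one. The toy 1-D vectors below stand in for the model's real latent tensors; the names and the 7.5 guidance scale are illustrative assumptions, not any particular model's API.

      ```python
      import numpy as np

      def guided_prediction(eps_positive, eps_negative, guidance_scale=7.5):
          """Classifier-free guidance with a negative prompt: push the
          prediction toward the positive conditioning and away from the
          negative conditioning. Toy version; real models do this on
          high-dimensional latents at every denoising step."""
          return eps_negative + guidance_scale * (eps_positive - eps_negative)

      eps_pos = np.array([1.0, 0.0])  # pretend prediction for "a sharp photo"
      eps_neg = np.array([0.0, 1.0])  # pretend prediction for "blurry, low quality"
      print(guided_prediction(eps_pos, eps_neg))  # [ 7.5 -6.5]
      ```

      The larger the guidance scale, the harder the output is pushed away from whatever the negative prompt describes.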

    • MotoAsh@lemmy.world
      +9 · 7 months ago

      Butbutbut my ignorant racism is the truth!! That’s why I hear it from everyone, including [insert nearby relatives here]!!

    • CanadaPlus@lemmy.sdf.org
      +4 · edited · 7 months ago

      Well, it’s established wisdom that the dataset size needs to scale with the number of model parameters - roughly linearly, on the order of 20 training tokens per parameter per the Chinchilla scaling results. If you don’t have that much data the training basically won’t work; it will overfit or just not progress.
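      Back-of-the-envelope version, assuming the approximate 20-tokens-per-parameter ratio from the Chinchilla results (a rule of thumb, not an exact law):

      ```python
      # Rough Chinchilla-style estimate of compute-optimal training-set size.
      # The ~20 tokens/parameter ratio is an approximation, not an exact law.
      TOKENS_PER_PARAM = 20

      def optimal_tokens(n_params: int) -> int:
          """Approximate compute-optimal token count for a model of n_params."""
          return TOKENS_PER_PARAM * n_params

      # A 7B-parameter model would want on the order of 140B training tokens:
      print(f"{optimal_tokens(7_000_000_000):,}")  # 140,000,000,000
      ```

      Which is why undersized datasets stall training: a big model simply never sees enough distinct examples to generalize.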

    • Ultraviolet@lemmy.world
      +4 · 7 months ago

      You also have to filter out the AI-generated garbage that is rapidly becoming a majority of content on the internet.