• andrew_bidlaw@sh.itjust.works
    2 months ago

    I read that sentiment about quests a lot and have something for it myself, but I find it questionable.

    Formulaic is what makes a quest work with the system. At bottom it’s raw code: a list of triggers for events and responses, all nailed to the systems and the world that already exist. It needs to place an NPC with a question mark, hand out a fetch quest that updates your map/journal when you accept it, and correctly respond with a reward when the conditions are met. That’s the basic level.
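    To make that “basic level” concrete, here is a minimal sketch of what a fetch quest looks like as raw data and logic, in Python. Every field and name below is invented for illustration, not taken from any real engine:

```python
from dataclasses import dataclass

# Hypothetical "raw code" behind a basic fetch quest; every field name here
# is made up for this sketch, not any real engine's format.
@dataclass
class FetchQuest:
    giver_npc: str       # the NPC who gets the question mark
    item_id: str         # what the player must fetch
    item_count: int      # how many
    reward_gold: int     # paid out when the conditions are met
    journal_entry: str   # text pushed to the map/journal on accept
    completed: bool = False

    def try_complete(self, inventory: dict) -> int:
        """Pay the reward if the player carries enough items, else nothing."""
        if not self.completed and inventory.get(self.item_id, 0) >= self.item_count:
            self.completed = True
            return self.reward_gold
        return 0

quest = FetchQuest("Fisherman", "mudcrab_meat", 5, 100,
                   "Bring 5 mudcrab meat to the fisherman.")
print(quest.try_complete({"mudcrab_meat": 2}))  # 0 - not enough yet
print(quest.try_complete({"mudcrab_meat": 7}))  # 100 - conditions met
```

    Note how every piece is a hard reference into existing systems (inventory, journal, reward tables) - there is nothing freeform for a language model to improvise.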

    An LLM performing such strict, complex manipulations would have to be narrowly guided to generate a working fetch quest without a single mistake. We’d basically kill off most of what it is good at, and we’d still need to build pipelines ourselves for it to lay out anything more complex. At that point it’s no easier than building a sophisticated procedural quest-generation engine, to the same result.
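    One way to see why the guidance has to be so narrow: anything the model emits would have to pass a validator exactly as strict as the generator it replaces. A toy sketch, assuming the model is asked for JSON with hypothetical field names:

```python
import json

# Hypothetical strict schema an LLM-written fetch quest must match exactly;
# the field names are invented for this sketch, not any engine's real format.
REQUIRED = {"giver_npc": str, "item_id": str, "item_count": int, "reward_gold": int}

def validate_quest(llm_output: str):
    """Return the parsed quest dict, or None if the LLM slipped anywhere."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or set(data) != set(REQUIRED):
        return None  # missing or extra fields: unusable
    for key, typ in REQUIRED.items():
        if not isinstance(data[key], typ):
            return None  # right field, wrong type: still unusable
    return data

good = '{"giver_npc": "Hermit", "item_id": "herbs", "item_count": 3, "reward_gold": 50}'
bad  = '{"giver_npc": "Hermit", "item_id": "herbs", "item_count": "three", "reward_gold": 50}'
print(validate_quest(good) is not None)  # True
print(validate_quest(bad) is not None)   # False
```

    One stray field or one wrong type and the whole generation is thrown away - which is the “killing off most of what it is good at” part.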

    Furthermore, it’s a pain in the ass to create enough training material to teach it even the world’s lore, so that it won’t make up what it doesn’t know and can still actually hold a conversation - to reach the level of ChatGPT’s responses, they fed those models petabytes of data. A model trained only on LotR, a few megabytes of text, won’t even be able to greet you back, and making an existing model speak in character about a world it doesn’t yet know is, well, complicated. In your free time, try making Bard speak like a dunmer fisherman from Huul who knows what’s going on around him on every level of worldbuilding young Bethesda put in. To get it right, you’d end up printing a whole book into its prompt window, and it would still spit nonsense.

    Instead, I see LLMs being injected where they’re actually good, and the voicing of NPCs’ lines you mentioned is one of the things they can excel at. Quick drafts of texts and quests that you’d then put into development? Okay. But making them communicate with existing systems is putting a triangle peg in a square hole, imho.

    On procedural generation at its finest, you can read the saga of Boatmurdered in Dwarf Fortress: https://lparchive.org/Dwarf-Fortress-Boatmurdered/Introduction/

    • Fubarberry@sopuli.xyz
      2 months ago

      I don’t have time right now to write a full proper response, but for quests I imagine we’d still start out using traditional random generation for the bones of the quest, and use an LLM to create the narrative and NPC dialogue for it. Games like Shadows of Doubt already do a good job with randomly generated objectives, but there’s no motive behind the crimes. Just taking the existing gameplay and using an LLM to generate a reason why the crime happened would help the atmosphere a lot. Also, you can question suspects and sometimes solve the case by them telling you they saw [person] at [location] at [time], but an LLM could provide actual witness interrogation, where you have to ask the right questions or try to catch them in a lie.

      As for the mechanics of LLMs actually providing dialogue, I expect some third-party AI startups to work on it: some kind of system with base language packages that provide general knowledge and dialogue ability, plus a collection of smaller models/LoRAs to specialize them. Finally, you’d have behind-the-scenes prompting that tells the NPC who their character is, any character- or quest-specific knowledge they have, their disposition toward the player, etc. I don’t expect every game company to come up with this on their own; I suspect a few individual companies will offer a prebuilt solution starting out, before it eventually gets built into the larger game engines.
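      That behind-the-scenes prompting can be sketched as a plain string assembler. Everything here - the names, the fields, the 0–100 disposition scale - is a hypothetical illustration, not any vendor’s actual API:

```python
# Hypothetical system-prompt assembler for an in-character NPC; the fields
# and the 0-100 disposition scale are assumptions for this sketch.
def build_npc_prompt(name, role, disposition, lore_facts, quest_knowledge):
    lines = [
        f"You are {name}, a {role}. Stay in character at all times.",
        f"Your disposition toward the player is {disposition}/100.",
        "You know only the following facts about the world:",
    ]
    lines += [f"- {fact}" for fact in lore_facts]
    lines.append("You know these quest details:")
    lines += [f"- {detail}" for detail in quest_knowledge]
    lines.append("If asked about anything beyond these facts, say you don't know.")
    return "\n".join(lines)

prompt = build_npc_prompt(
    "Seryn", "dunmer fisherman", 40,
    ["The catch has been poor since the storm."],
    ["The lighthouse keeper went missing three days ago."],
)
print(prompt)
```

      The game would regenerate this hidden prompt whenever disposition or quest state changes, so the same base model plays every character without retraining.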