Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will provide features that produce automated content, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn won’t accept responsibility for the consequences.

The platform’s Professional Community Policies direct users to “share information that is real and authentic” – a standard to which LinkedIn is not holding its own tools.

  • TachyonTele@lemm.ee · 26 days ago

    I think the massive push for it by every single company gives the layman a picture of “everyone uses it so it must be good”, combined with most people simply not caring enough to think too much about it.

    Kind of an aside, but I’m really hoping for a technology plateau of some sort, in the hope that people get a chance to look at everything and ditch all the crap.

    And then another period of growth from there.