• Stefen Auris
    16
    2 years ago

    It’s the end result of training your AI on mountains of biased human thoughts

      • FaceDeer
        8
        2 years ago

        Does there have to be one? It’d be nice if there were, of course, but this is currently the only way we know of to make these AIs.

      • RickRussell_CA
        5
        2 years ago

        Well, you can focus on rule-based/expert-system-style AI, à la WolframAlpha: actually build algorithms that answer questions based on scientific fact and theory, rather than an approximated consensus of many sources of dubious origin.
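
        For anyone who hasn't seen that style of system: here is a minimal sketch of what "rule-based" can mean in practice, assuming a toy fact base and hand-written rules. Everything in it is illustrative; it is not how WolframAlpha actually works.

            # Toy rule-based question answering: every answer is traceable to an
            # explicit, auditable rule instead of a statistical consensus over
            # scraped text. Facts and rules below are made up for illustration.
            FACTS = {
                "boiling_point_water_c": 100,        # at 1 atm
                "speed_of_light_m_s": 299_792_458,
            }

            RULES = {
                "What is the boiling point of water?":
                    lambda: f"{FACTS['boiling_point_water_c']} degrees C at 1 atm",
                "What is the speed of light?":
                    lambda: f"{FACTS['speed_of_light_m_s']} m/s",
            }

            def answer(question: str) -> str:
                # Unknown questions fail loudly instead of being guessed at.
                rule = RULES.get(question)
                return rule() if rule else "No rule covers this question."

            print(answer("What is the speed of light?"))   # 299792458 m/s
            print(answer("Is pineapple good on pizza?"))   # No rule covers this question.

        The trade-off is coverage: a system like this only answers what someone has explicitly encoded.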

        • @parlaptie@feddit.de
          3
          2 years ago

          Ooo, old school AI 😍

          In our current cultural consciousness, I’m not sure that even qualifies as AI anymore. It’s all about neural networks and machine learning nowadays.

      • Stefen Auris
        4
        2 years ago

        I guess shoving an encyclopedia into it? I’m not sure, really; it’s a good point. Perhaps AI bias is as inevitable as human bias…

        • interolivary
          8
          2 years ago

          Despite what you might assume, an encyclopedia wouldn’t be free from bias. It might not be as biased as, say, getting your training data from a dump of 4chan, but it’d absolutely still have bias. As an on-the-nose example, think about the definition of homosexuality; training on an older encyclopedia would mean the AI now thinks homosexuality is a crime.

          • RickRussell_CA
            4
            2 years ago

            And imagine how badly most encyclopedias would reflect on languages and cultures other than the one that made them.

      • radix
        2
        2 years ago

        The alternative is being extremely careful about what data you allow the LLM to learn from. Then it would have your bias, but hopefully that’ll be a less flagrantly racist bias.
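
        As a rough illustration of what that curation could look like, here is a toy filtering pass over training documents, assuming a hypothetical source allowlist and keyword blocklist; real data-curation pipelines are far more elaborate than this.

            # Toy data-curation pass: keep only training documents from vetted
            # sources and drop anything matching a crude keyword blocklist.
            # The sets below are placeholders, not a real moderation policy.
            TRUSTED_SOURCES = {"encyclopedia", "peer_reviewed_journal"}
            BLOCKLIST = {"slur1", "slur2"}  # stand-ins for real blocklist terms

            def keep(document: dict) -> bool:
                # A document survives only if its source is vetted and its text
                # contains no blocklisted term.
                text = document["text"].lower()
                return (
                    document["source"] in TRUSTED_SOURCES
                    and not any(term in text for term in BLOCKLIST)
                )

            corpus = [
                {"source": "encyclopedia", "text": "Water boils at 100 C at sea level."},
                {"source": "random_forum", "text": "some slur1-laden rant"},
            ]

            curated = [doc for doc in corpus if keep(doc)]
            print(len(curated))  # 1 -- only the vetted, clean document survives

        Of course, whoever writes the allowlist and blocklist is exactly where the curator’s own bias enters.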

  • Rikudou_Sage
    13
    2 years ago

    The models that were trained with left-wing data were more sensitive to hate speech targeting ethnic, religious, and sexual minorities in the US, such as Black and LGBTQ+ people. The models that were trained on right-wing data were more sensitive to hate speech against white Christian men.

    • 𝒍𝒆𝒎𝒂𝒏𝒏
      11
      2 years ago

      “White Christian men” is an awfully specific thing for the model to be sensitive towards, IMO.

      Right-wing media is perceived to be funded by white Christian men, so if that is the source of the data, I’m not too surprised their writing and articles would protect that group. Still, it’s intriguing that the model picked this up from online discussions and news data and became sensitive to hate speech aimed at that group specifically, compared with the left-wing data, which appears more inclusive. Although this is probably indicative of the bias they’re studying in the article.

      • radix
        8
        edit-2
        2 years ago

        I mean, hate speech aimed at left-wing people is generally more diverse than hate speech aimed at right-wing people, because the left simply is more diverse in gender, orientation, ethnicity, religion, etc. Isn’t that universally accepted?

        (Please correct me if I’m wrong, I approach in good faith!)

        • 𝒍𝒆𝒎𝒂𝒏𝒏
          2
          2 years ago

          I don’t think you’re wrong at all, tbh. From my perspective, the left is always going to be more diverse, whereas the right isn’t very inclusive by default unless you “fit in”, IMO.

  • Heresy_generator
    5
    2 years ago

    It’s a large part of the point: launder biases into an algorithm so you can blame the algorithm for enforcing them while taking no responsibility. That’s how every automated policing tool has ever worked.

  • Excel
    1
    2 years ago

    The Alignment Problem by Brian Christian should be required reading for this community