cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America's National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors."

  • FaceDeer
    -4 • 2 years ago

    Ironically, I think you also are overlooking some details about how LLMs work. They are not just word generators. Stuff is going on inside those neural networks that we're still unsure of.

    For example, I read about a study a little while back that was testing the mathematical abilities of LLMs. The researchers would give them simple math problems like "2+2=" and the LLM would fill in 4, which was unsurprising because that equation could be found in the LLM's training data. But as they went to higher numbers the LLM kept giving mostly correct results, even when they knew for a fact that the specific math problem being presented wasn't in the training data. After training on enough simple addition problems the LLM had actually "figured out" some of the underlying rules of math and was using those to make its predictions.
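    The study itself isn't reproduced here, but the idea it describes (a model trained only on small sums handling unseen larger ones because it picked up the underlying rule rather than memorizing answers) can be sketched with a deliberately tiny stand-in: a linear model fit by gradient descent, nothing like a real LLM.

```python
# Toy stand-in for "learning the rule" vs. memorizing: fit y = w1*a + w2*b
# by stochastic gradient descent on single-digit sums only, then test on a
# sum far outside the training data. If the weights converge to (1, 1),
# the model has recovered the rule of addition rather than a lookup table.

def train(pairs, lr=0.01, epochs=2000):
    """SGD on squared error for the model y = w1*a + w2*b."""
    w1 = w2 = 0.0
    for _ in range(epochs):
        for a, b in pairs:
            err = (w1 * a + w2 * b) - (a + b)  # prediction minus target
            w1 -= lr * err * a
            w2 -= lr * err * b
    return w1, w2

# Training data: only single-digit addition problems.
train_pairs = [(a, b) for a in range(10) for b in range(10)]
w1, w2 = train(train_pairs)

# An unseen problem, far outside the training range.
prediction = w1 * 126 + w2 * 118
print(round(prediction))  # 244
```

    The point is only that fitting can recover structure that then generalizes beyond the training set, not that an LLM's internals resemble this in any way.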

    Being overly dismissive of this technology is as fallacious as overly hyping it.

    • Norgur
      6 • 2 years ago

      No. Just… No. The LLM has not "figured out" what's going on. It can't. These things are just good at prediction. The main indicator is in your text: "mostly correct". A computer that knows what to calculate will not be "mostly correct". One false answer proves one hundred percent that it has no clue what it's supposed to do.
      What we are seeing with those "studies" is that social-science people try to apply the same rules they apply to humans (where "mostly correct" is as good as "always correct"), which is bonkers, or behavioral researchers try to prove some behavior they attribute to the AI as if it were a living being, which is also bonkers because the AI will mimic the results in the training data, which is human, so the data will be biased as fuck and it's impossible to determine if the AI did anything by itself at all (which it didn't, because that's not how the software works).

      • Kogasa
        1 • 2 years ago

        No, you're wrong. All interesting behavior of ML models is emergent. It is learned, not programmed. The fact that it can perform what we consider an abstract task with success clearly distinguishable from random chance is irrefutable proof that some model of the task has been learned.

        • Norgur
          4 • 2 years ago

          No one said anything about "learned" vs "programmed". Literally no one.

          • Kogasa
            3 • 2 years ago

            OP is saying it's impossible for an LLM to have "figured out" how something works, and that if it understood anything it would be able to perform related tasks perfectly reliably. They didn't use the words, but that's what they meant. Sorry for your reading comprehension.

            • Norgur
              1 • 2 years ago

              The "op" you are referring to is… well… myself. Since you didn't comprehend that from the posts above, my reading comprehension might not be the issue here.

              But in all seriousness: I think this is an issue with concepts. No one is saying that LLMs can't "learn"; that would be stupid. But the discussion is not "is everything programmed into the LLM or does it recombine stuff". You seem to reason that when someone says the LLM can't "understand", that person means "the LLM can't learn", but "learning" and "understanding" are not the same at all. The question is not whether LLMs can learn; it's whether they can grasp concepts from the content of the words they absorb as their learning data. If it could grasp concepts (like rules in algebra), it could reproduce them every time it gets confronted with a similar problem. The fact that it can't do that shows that the only thing it does is chain words together by stochastic calculation. Really sophisticated stochastic calculation with lots of possible outcomes, but still.

              • Kogasa
                2 • 2 years ago

                ā€œopā€ you are referring to isā€¦ wellā€¦ myself, Since you didnā€™t comprehend that from the posts above, my reading comprehension might not be the issue here.

                I donā€™t care. It doesnā€™t matter, so I didnā€™t check. Your reading comprehension is still, in fact, the issue, since you didnā€™t understand that the ā€œlearnedā€ vs ā€œprogrammedā€ distinction I had referred to is completely relevant to your post.

                Itā€™s wether it can grasp concepts from the content of the words it absorbs as it itā€™s learning data.

                Thatā€™s what learning is. The fact that it can construct syntactically and semantically correct, relevant responses in perfect English means that it has a highly developed inner model of many things we would consider to be abstract concepts (like the syntax of the English language).

                If it would grasp concepts (like rules in algebra), it could reproduce them everytime it gets confronted with a similar problem

                This is wrong. It is obvious and irrefutable that it models sophisticated approximations of abstract concepts. Humans are literally no different. Humans who consider themselves to understand a concept can obviously misunderstand some aspect of the concept in some contexts. The fact that these models are not as robust as that of a humanā€™s doesnā€™t mean what youā€™re saying it means.

                the only thing it does is chain words together by stochastic calculation.

                This is a meaningless point, youā€™re thinking at the wrong level of abstraction. This argument is equivalent to ā€œa computer cannot convey meaningful information to a human because it simply activates and deactivates bits according to simple rules.ā€ Your statement about an implementation detail says literally nothing about the emergent behavior weā€™re talking about.

        • Norgur
          5 • 2 years ago

          How does behaviour that is present in LLMs but not in SLMs show that an LLM can "think"? It only shows that the amount of stuff an LLM can guess increases when you feed it more data. That's not the hot take you think it is.

      • FaceDeer
        -6 • 2 years ago

        These things are just good at prediction.

        Indeed, and it turns out that in order to predict the next word these things may be thinking about stuff.

        There's a huge amount of complex work that can go into predicting stuff. If you were to try to predict the next word that a person you're speaking with was going to say, how would you go about it? Developing a mental model of that person's thought processes would be a really good approach. How would you predict what the next thing that comes after "126+118=" is? Would you always get it exactly correct, or might you occasionally predict the wrong number?

        I think you're starting from the premise that these things can't possibly be "thinking", on any level, and are trying to reinterpret everything to fit that premise. These things are largely opaque black boxes, just like human brains are. Is it really so impossible that thought-like processes are going on inside both of them?

        • Norgur
          5 • 2 years ago

          Yes, it is impossible. There are no "thoughts". The bloody thing doesn't know what an apple is if you ask it to write a 500-page book about them. It just guesses a word, then from there guesses the next one and so on. That's why it will very often confidently tell you aggravating bullshit. It has no concept of the things it spits out. It's a "word calculator", so to speak. The whole thing is not "revolutionary" or "new" by any stretch. What is new is the ability to use tons and tons and tons of reference data, which makes the output halfway decent, and the GPU power that makes its speed halfway decent. Other than that, LLMs are. not. "thinking".
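          For what it's worth, the "guess a word, then guess the next one" loop both sides are describing can be shown in miniature. A hypothetical hand-written bigram table stands in for the learned network here; a real LLM computes its distribution over next words with a neural network conditioned on the entire context.

```python
import random

# Toy next-word sampler. MODEL maps a word to a probability
# distribution over possible next words; this tiny table is a
# made-up stand-in, not how any real LLM stores its model.
MODEL = {
    "<start>": {"the": 0.6, "an": 0.4},
    "the": {"apple": 0.5, "calculator": 0.5},
    "an": {"apple": 1.0},
    "apple": {"<end>": 1.0},
    "calculator": {"<end>": 1.0},
}

def sample_next(word, rng):
    """Weighted random choice of the next word ("stochastic calculation")."""
    dist = MODEL[word]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

def generate(rng):
    """Chain guesses one word at a time until the end marker appears."""
    word, output = "<start>", []
    while True:
        word = sample_next(word, rng)
        if word == "<end>":
            return " ".join(output)
        output.append(word)

print(generate(random.Random(0)))
```

          Whether that mechanism, scaled up enormously, amounts to "understanding" is exactly the disagreement in this thread; the sketch only shows the loop being named.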

          • Kogasa
            0 • edited 2 years ago

            A computer program is just a series of single bits activating and deactivating. That's what you're saying when you say an LLM is simply predicting words. You're not thinking at the appropriate level of abstraction. The whole point is the mechanism by which words are produced and the information encoded.

          • FaceDeer
            -3 • 2 years ago

            A rather categorical statement, given that you didn't say anything with regard to how you think.

            Maybe wait until we actually know more about what's going on under the hood - both in LLMs and in the human brain - before stating with such confident finality that there are absolutely no similarities.

            If it turns out that LLMs aren't thinking, but they're still producing the same sort of interaction that humans are capable of, perhaps that says more about humans than it does about LLMs.

            • CarlsIII
              8 • 2 years ago

              sees a plastic bag being blown by the wind

              Holy shit that bag must be alive

            • Norgur
              1 • 2 years ago

              They produce this kind of output because they break down one mostly logical system (language) into another (numbers). The irregularities language has are compensated for by the vast number of sources.

              We don't need to know more about anything. If I tell you "hey, don't think of an apple", your brain will conceptualize an apple and then go from there. LLMs don't know "concepts". They spit out numbers just as mindlessly as your Casio calculator watch.

            • @SirGolan@lemmy.sdf.org
              -1 • 2 years ago

              I've been making the same or similar arguments you are here in a lot of places. I use LLMs every day for my job, and it's quite clear that beyond a certain scale, there's definitely more going on than "fancy autocomplete."

              I'm not sure what's up with people hating on AI all of a sudden, but there seem to be quite a few who are confidently giving out incorrect information. I find it most amusing when they're doing that at the same time as bashing LLMs for also confidently giving out wrong information.

              • FaceDeer
                -2 • 2 years ago

                I suspect it's rooted in defensive reactions. People are worried about their jobs, and after being raised to believe that human thought is special and unique they're worried that that "specialness" and "uniqueness" might be threatened. So they form very strong opinions that these things are nothing to worry about.

                I'm not really sure what to do other than just keep pointing out what information we do have about this stuff. It works, so in the end it'll be used regardless of hurt feelings. It would be better if we get ready for that sooner rather than later, though, and denial is going to delay that.

                • @SirGolan@lemmy.sdf.org
                  0 • 2 years ago

                  Yeah, I think that's a big part of it. I also wonder if people are getting tired of the hype and seeing every company advertise AI-enabled products (which I can sort of get, because a lot of them are just dumb and obvious cash grabs).

                  At this point, it's pretty clear to me that there's going to be a shift in how the world works over the next 2 to 5 years, and people will have a choice of whether to embrace it or get left behind. I've estimated that for some programming tasks, I'm about 7 to 10x faster when using Copilot and ChatGPT4. I don't see how someone who isn't using AI could compete with that. And before anyone asks, I don't think the error rate in the code is any higher.

                  • SokathHisEyesOpen
                    0 • 2 years ago

                    I had some training at work a few weeks ago that stated 80% of all jobs on the planet are going to be changed by AI in the next 10 years. Some of those jobs are already rapidly changing, and others will take some time to spin up the support structures required for AI integration, but the majority of people on the planet are going to be impacted by something that most people don't even know exists yet. AI is the biggest shake-up to industry in human history. It's bigger than the wheel, it's bigger than the production line, it's bigger than the dot-com boom. The world is about to completely change forever, and like you said, pretending that AI is stupid isn't going to stop those changes, or even slow them. They're coming. Learn to use AI or get left behind.

            • SokathHisEyesOpen
              -4 • 2 years ago

              The engineers of ChatGPT-4 themselves have stated that it is beginning to show signs of general intelligence. I put a lot more value in their opinion on the subject than in that of a person on the Internet who doesn't work in the field of artificial intelligence.

              • @eskimofry@lemmy.ml
                7 • 2 years ago

                It's PR by Microsoft. Considering these kinds of comments, I am beginning to doubt the intelligence of many humans rather than that of ChatGPT.

              • Norgur
                6 • 2 years ago

                That wasn't the engineers of GPT-4; it was Microsoft, who have been fanning the hype pretty heavily to recoup their investment and push their own Bing integration, and who opened their "study" with:

                "We acknowledge that this approach is somewhat subjective and informal, and that it may not satisfy the rigorous standards of scientific evaluation."

                An actual AI researcher (Maarten Sap) regarding this statement:

                The 'Sparks of A.G.I.' is an example of some of these big companies co-opting the research paper format into P.R. pitches. They literally acknowledge in their paper's introduction that their approach is subjective and informal and may not satisfy the rigorous standards of scientific evaluation.