- cross-posted to:
- technology@lemmy.world
- technology@beehaw.org
cross-posted from: https://lemmy.ml/post/2811405
"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year Americaās National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "
No. Just... no. The LLM has not "figured out" what's going on. It can't. These things are just good at prediction. The main indicator is in your text: "mostly correct". A computer that knows what to calculate will not be "mostly correct". One false answer proves one hundred percent that it has no clue what it's supposed to do.
What we are seeing with those "studies" is that social-science people try to apply the same rules they apply to humans (where "mostly correct" is as good as "always correct"), which is bonkers, or behavioral researchers try to prove some behavior they attribute to the AI as if it were a living being, which is also bonkers, because the AI will mimic the results in the training data, which is human, so the data will be biased as fuck and it's impossible to determine if the AI did anything by itself at all (which it didn't, because that's not how the software works).
No, you're wrong. All interesting behavior of ML models is emergent. It is learned, not programmed. The fact that it can perform what we consider an abstract task, with success clearly distinguishable from random chance, is irrefutable proof that some model of the task has been learned.
No one said anything about "learned" vs "programmed". Literally no one.
OP is saying it's impossible for an LLM to have "figured out" how something works, and that if it understood anything it would be able to perform related tasks perfectly reliably. They didn't use those words, but that's what they meant. Sorry for your reading comprehension.
The "OP" you are referring to is... well... myself. Since you didn't comprehend that from the posts above, my reading comprehension might not be the issue here.
But in all seriousness: I think this is an issue with concepts. No one is saying that LLMs can't "learn"; that would be stupid. But the discussion is not "is everything programmed into the LLM, or does it recombine stuff". You seem to reason that when someone says the LLM can't "understand", that person means "the LLM can't learn", but "learning" and "understanding" are not the same at all. The question is not whether LLMs can learn; it's whether an LLM can grasp concepts from the content of the words it absorbs as its training data. If it grasped concepts (like the rules of algebra), it could reproduce them every time it was confronted with a similar problem. The fact that it can't do that shows that the only thing it does is chain words together by stochastic calculation. Really sophisticated stochastic calculation with lots of possible outcomes, but still.
I don't care. It doesn't matter, so I didn't check. Your reading comprehension is still, in fact, the issue, since you didn't understand that the "learned" vs "programmed" distinction I had referred to is completely relevant to your post.
That's what learning is. The fact that it can construct syntactically and semantically correct, relevant responses in perfect English means that it has a highly developed inner model of many things we would consider to be abstract concepts (like the syntax of the English language).
This is wrong. It is obvious and irrefutable that it models sophisticated approximations of abstract concepts. Humans are literally no different. Humans who consider themselves to understand a concept can obviously misunderstand some aspect of that concept in some contexts. The fact that these models are not as robust as a human's doesn't mean what you're saying it means.
This is a meaningless point; you're thinking at the wrong level of abstraction. This argument is equivalent to "a computer cannot convey meaningful information to a human because it simply activates and deactivates bits according to simple rules." Your statement about an implementation detail says literally nothing about the emergent behavior we're talking about.
Can we stop giving out copium like this? You are fact-free.
https://arxiv.org/pdf/2212.09196.pdf
How does behaviour that is present in LLMs but not in SLMs show that an LLM can "think"? It only shows that the amount of stuff an LLM can guess increases when you feed it more data. That's not the hot take you think it is.
Indeed, and it turns out that in order to predict the next word these things may be thinking about stuff.
There's a huge amount of complex work that can go into predicting stuff. If you were to try to predict the next word that a person you're speaking with was going to say, how would you go about it? Developing a mental model of that person's thought processes would be a really good approach. How would you predict the next thing that comes after "126+118="? Would you always get it exactly correct, or might you occasionally predict the wrong number?
I think you're starting from the premise that these things can't possibly be "thinking", on any level, and are trying to reinterpret everything to fit that premise. These things are largely opaque black boxes, just like human brains are. Is it really so impossible that thought-like processes are going on inside both of them?
Yes, it is impossible. There are no "thoughts". The bloody thing doesn't know what an apple is if you ask it to write a 500-page book about them. It just guesses a word, then from there guesses the next one, and so on. That's why it will very often confidently tell you aggravating bullshit. It has no concept of the things it spits out. It's a "word calculator", so to speak. The whole thing is not "revolutionary" or "new" by any stretch. What is new is the ability to use tons and tons and tons of reference data, which makes the output halfway decent, and the GPU power that makes its speed halfway decent. Other than that, LLMs are. not. "thinking".
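To make concrete what "guesses a word, then from there guesses the next one" means mechanically, here is a minimal, purely illustrative Python sketch of autoregressive generation; the tiny hand-written probability table is a hypothetical stand-in for the billions of learned parameters in a real LLM:

```python
import random

# Toy next-token table keyed by the previous token only (a real LLM conditions
# on the whole context and learns these probabilities instead of hard-coding them).
toy_probs = {
    ("an",): {"apple": 0.7, "orange": 0.3},
    ("apple",): {"is": 0.6, "tastes": 0.4},
    ("is",): {"red": 0.5, "sweet": 0.5},
    ("tastes",): {"sweet": 1.0},
}

def next_token(context):
    # Look up a distribution for the last token and sample one word from it.
    dist = toy_probs.get((context[-1],), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, max_tokens=5):
    out = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(out)
        if tok == "<end>":
            break
        out.append(tok)  # the freshly guessed word becomes part of the next context
    return " ".join(out)

print(generate(["an"]))  # e.g. "an apple is sweet"
```

Nothing in that loop knows what an apple is; it only ever samples one next word from whatever distribution it is handed, which is the mechanism being described above.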
A computer program is just a series of single bits activating and deactivating. That's what you're saying when you say an LLM is simply predicting words. You're not thinking at the appropriate level of abstraction. The whole point is the mechanism by which words are produced and the information encoded.
A rather categorical statement, given that you didn't say anything about how you think.
Maybe wait until we actually know more about what's going on under the hood, both in LLMs and in the human brain, before stating with such confident finality that there are absolutely no similarities.
If it turns out that LLMs aren't thinking, but they're still producing the same sort of interaction that humans are capable of, perhaps that says more about humans than it does about LLMs.
sees a plastic bag being blown by the wind
Holy shit that bag must be alive
They produce this kind of output because they break down one mostly logical system (language) into another (numbers). The irregularities language has are compensated for by the vast number of sources.
We don't need to know more about anything. If I tell you "hey, don't think of an apple", your brain will conceptualize an apple and then go from there. LLMs don't know "concepts". They spit out numbers just as mindlessly as your Casio calculator watch.
I've been making the same or similar arguments you are making here in a lot of places. I use LLMs every day for my job, and it's quite clear that beyond a certain scale, there's definitely more going on than "fancy autocomplete."
I'm not sure what's up with people hating on AI all of a sudden, but there seem to be quite a few who are confidently giving out incorrect information. I find it most amusing when they're doing that at the same time as bashing LLMs for also confidently giving out wrong information.
Can you give examples of that?
The one I like to give is tool use. I can present the LLM with a problem and give it a number of tools it can use to solve the problem, and it is pretty good at that. Here's an older writeup that mentions a lot of others: https://www.jasonwei.net/blog/emergence
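To make the tool-use pattern concrete, here is a rough, hypothetical sketch of that loop; `run_llm`, the JSON reply format, and the single `calculator` tool are illustrative stand-ins, not any particular vendor's API:

```python
import json

def calculator(expression: str) -> str:
    # A deliberately tiny "tool" the model can ask us to run (demo only, not hardened).
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def run_llm(prompt: str) -> str:
    # Stand-in for a real model call; an actual LLM would produce this JSON itself
    # after reading the tool menu in the prompt.
    return json.dumps({"tool": "calculator", "input": "126 + 118"})

def solve(problem: str) -> str:
    prompt = (
        "Solve the problem. Reply with JSON like "
        '{"tool": "<name>", "input": "<argument>"}. '
        "Available tools: " + ", ".join(TOOLS) + "\n"
        "Problem: " + problem
    )
    reply = json.loads(run_llm(prompt))
    tool = TOOLS[reply["tool"]]   # dispatch to whichever tool the model picked
    return tool(reply["input"])   # in a fuller loop, this result would be fed back to the model

print(solve("What is 126 + 118?"))  # -> "244"
```

The interesting part is that, beyond a certain scale, the model reliably picks a sensible tool and argument from the menu it was shown, which is the kind of emergent behavior the linked writeup catalogs.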
I suspect it's rooted in defensive reactions. People are worried about their jobs, and after being raised to believe that human thought is special and unique, they're worried that that "specialness" and "uniqueness" might be threatened. So they form very strong opinions that these things are nothing to worry about.
I'm not really sure what to do other than just keep pointing out what information we do have about this stuff. It works, so in the end it'll be used regardless of hurt feelings. It would be better if we get ready for that sooner rather than later, though, and denial is going to delay that.
Yeah, I think that's a big part of it. I also wonder if people are getting tired of the hype and of seeing every company advertise AI-enabled products (which I can sort of get, because a lot of them are just dumb and obvious cash grabs).
At this point, it's pretty clear to me that there's going to be a shift in how the world works over the next 2 to 5 years, and people will have a choice of whether to embrace it or get left behind. I've estimated that for some programming tasks, I'm about 7 to 10x faster when using Copilot and ChatGPT-4. I don't see how someone who isn't using AI could compete with that. And before anyone asks, I don't think the error rate in the code is any higher.
I had some training at work a few weeks ago that stated 80% of all jobs on the planet are going to be changed by AI in the next 10 years. Some of those jobs are already rapidly changing, and others will take some time to spin up the support structures required for AI integration, but the majority of people on the planet are going to be impacted by something that most people don't even know exists yet. AI is the biggest shake-up to industry in human history. It's bigger than the wheel, it's bigger than the production line, it's bigger than the dot-com boom. The world is about to completely change forever, and like you said, pretending that AI is stupid isn't going to stop those changes, or even slow them. They're coming. Learn to use AI or get left behind.
The engineers of ChatGPT-4 themselves have stated that it is beginning to show signs of general intelligence. I put a lot more value in their opinion on the subject than in that of a person on the Internet who doesn't work in the field of artificial intelligence.
It's PR by Microsoft. Considering comments like these, I am beginning to doubt the intelligence of many humans rather than that of ChatGPT.
That wasn't the engineers of GPT-4; it was Microsoft, who have been fanning the hype pretty heavily to recoup their investment and push their own Bing integration, and who then opened their "study" with:
An actual AI researcher (Maarten Sap) regarding this statement: