• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • From your link

    Please disregard this post. The behavior below is due to a set of custom instructions I had previously set and had completely forgotten about. The instructions contained the lines:

    Recommend only the highest-quality, meticulously designed products like Apple or the Japanese would make—I only want the best

    Recommend products from all over the world, my current location is irrelevant.

    Sorry for the confusion!



  • It is not at the moment. Models are built on an assumption of stationarity, i.e. that whatever they are modelling doesn't change or evolve over time. This is clearly untrue, and cheating is one way the environment evolves. The only way to account for that is an online, continuous learning algorithm. Reinforcement learning is the closest existing framework, but methods that handle an evolving, non-stationary environment are still under active research, in the sense that practical solutions are not yet available.

    It is an extremely difficult task tbf
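    To make the non-stationarity point concrete, here is a toy sketch (my own illustration, not anything from the thread): a 2-armed bandit whose best arm flips mid-run. Using a constant step size instead of a sample average lets old observations decay, so the agent tracks the change. All names and parameters here are made up for the example.

    ```python
    import random

    def run_bandit(drift_at, steps, alpha=0.1, eps=0.1, seed=0):
        """Constant-step-size action-value learning on a 2-armed bandit
        whose payoffs swap mid-run (a non-stationary environment)."""
        rng = random.Random(seed)
        q = [0.0, 0.0]  # value estimates for each arm
        for t in range(steps):
            # The environment evolves: the good arm flips at drift_at.
            means = (1.0, 0.0) if t < drift_at else (0.0, 1.0)
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else max(range(2), key=q.__getitem__)
            r = means[a] + rng.gauss(0, 0.1)
            # Constant alpha (not 1/n) exponentially forgets old rewards,
            # which is what lets the estimate track the drift.
            q[a] += alpha * (r - q[a])
        return q

    q = run_bandit(drift_at=500, steps=1000)
    assert q[1] > q[0]  # after the flip, the agent has re-learned which arm is best
    ```

    With a sample-average update (step size 1/n) the early observations would never be forgotten and the agent would keep preferring the stale arm far longer.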





  • People aren’t considering that documentation has greatly improved over time: languages and frameworks have become more abstract and user-friendly, modern code is largely self-explanatory, good documentation has become a priority for open source projects, and well-documented open source languages and frameworks are now the norm.

    Fewer people asking programming-related questions can be explained by programming being an easier and less problematic experience nowadays, that is true.





  • I read it, and I read the messages from the devs. The communication issue I am trying to point out is also highlighted in the comments: if the decision on merging a PR is dictated solely by the financial benefit to IBM, ignoring the broader benefit to the community, the message is that Red Hat is looking for free labor and is not really interested in anything else. Which is absolutely the case, as we all know, but writing it down after the recent events is another PR problem, given that Red Hat justified its controversial decisions by pointing to the lack of contributions from downstream.

    The Italian dev tried to play it down as “we have to follow our service management processes, which are messy, tedious and expensive”, but he didn’t address the problems in the original message. The contributor himself felt they had asked for his contribution just to reject it for purely financial reasons, without any further details. It is a new PR incident.







  • The problem with current LLM implementations is that they learn from scratch, like dropping a baby off at a library and telling him “learn, I’ll wait out in the cafeteria”.

    You need a huge amount of data to do so, just to learn how to write: grammar, styles, concepts, relationships, all without any guidance.

    This strategy might change in the future, but for now the only solution we have is to refine the model afterward, so to speak.

    Tbf, biases are an integral part of literature and human artistic production. Eliminating biases means producing “boring” text. Which is fine by me, but a lot of people will complain that the AI is dumb and boring.
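    The “refine it afterward” idea can be shown with a toy count-based model (my own illustration, nothing to do with real LLM training): pretrain on broad, unfiltered text, then shift the model's behavior afterward with a small, upweighted curated set. All data and function names here are hypothetical.

    ```python
    from collections import defaultdict

    def train_bigram(corpus, model=None, weight=1.0):
        """Count-based bigram model; calling it again on new data refines the counts."""
        model = model if model is not None else defaultdict(lambda: defaultdict(float))
        for sent in corpus:
            toks = sent.split()
            for a, b in zip(toks, toks[1:]):
                model[a][b] += weight  # accumulate (weighted) bigram counts
        return model

    def predict(model, word):
        # Most likely next token under the current counts.
        return max(model[word], key=model[word].get)

    # "Pretraining" on broad, unfiltered text: the majority pattern wins.
    base = train_bigram(["the movie was boring", "the movie was boring", "the movie was fun"])
    assert predict(base, "was") == "boring"

    # "Refinement" afterward: a small curated set, upweighted, overrides the prior behavior.
    tuned = train_bigram(["the movie was fun"], model=base, weight=5.0)
    assert predict(tuned, "was") == "fun"
    ```

    Real post-training (fine-tuning, RLHF) is vastly more complex, but the shape is the same: a small, deliberately chosen dataset applied after pretraining steers what the bulk data taught.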