• FauxPseudo
    74 points · 21 days ago

    But the only way to learn debugging is to have experience coding. So if we let AI do the coding, all the entry-level coding jobs go away and no one learns to debug.

    This isn’t just a code thing; it’s all kinds of professions. AI will kill the entry level, which will prevent new people from getting experience, which will have downstream effects throughout entire industries.

    • MigratingApe
      31 points · 21 days ago

      It already started happening before LLM AI. Have you heard the joke that we used to teach our parents how to use printers and PCs with a mouse and keyboard, and now we have to do the same with our children? It’s really not a joke. We are the last generation that has seen it all evolve before our eyes; we know the fundamentals of each layer of abstraction the current technology is built upon. It was a natural process for us to learn all of this, and now we suddenly expect “fresh people” to grasp 50 years or so of progress in 5 or so years?

      Interesting times ahead of us.

    • @AdamEatsAss@lemmy.world
      14 points · 20 days ago

      Have you used any AI for programming? There is zero chance entry-level jobs will be replaced. AI only works well if what it needs to do is well defined, and as a dev that is almost never the case. Also, companies understand that to create a senior dev they need a junior dev they can train. And corporations do not trust Google, OpenAI, Meta, etc. with their intellectual property. My company made it a fireable offense if they catch you uploading IP to an AI.

      • FauxPseudo
        17 points · 20 days ago

        Also, companies understand that to create a senior dev they need a junior dev they can train.

        We live in a world where every company wants people who can hit the ground running and requires 5 years of experience for an entry-level job in a language that’s only been out for three. On-the-job training died long ago.

    • @metaldream@sopuli.xyz
      7 points · 20 days ago

      The junior devs at my job are way better at debugging than AI, lol. Granted, they are top-talent hires, because no one else can break in these days.

    • @zenpocalypse@lemm.ee
      1 point · 20 days ago

      In my experience, LLMs are good for code snippets and input on best practices.

      I use it as a tool to speed up my work, but I don’t see it replacing even entry-level jobs any time soon.

  • @thefluffiest@feddit.nl
    37 points · 21 days ago

    So, AI gets to create problems, and actually capable people get to deal with the consequences. Yeah, that sounds about right.

    • @WanderingThoughts@europe.pub
      27 points · 21 days ago

      And it’ll be used to suppress wages, because “you’re not making new stuff, just fixing some problems in existing code.” That you have to rewrite most of it is conveniently not counted.

      That’s at least what was tried with movie writers.

      • @sach@lemmy.world
        16 points · 21 days ago

        Most programmers agree debugging can be harder than writing code, so basically the easy part is automated, while the more challenging and interesting parts, architecture and debugging, remain for programmers. Still, it’s possible they’ll try to sell it to programmers as less work.

        • @brsrklf@jlai.lu
          13 points · edited · 21 days ago

          but the more challenging and interesting parts, architecture and debugging, remain for programmers

          And it’s made harder for them, because it turns out the “easy” part is not that easy to do correctly, and when it’s done badly it makes maintaining the thing miserable.

          • @atrielienz@lemmy.world
            8 points · 20 days ago

            Additionally, as others have said in the thread, programmers learn the skills required for debugging at least partially from writing code. So there goes a big part of the learning curve, turning into a bell curve.

  • @NigelFrobisher@aussie.zone
    35 points · edited · 21 days ago

    I’m actually quite enjoying watching the LLM evangelists fall into the trough of despair after their initial inflated expectations of what they thought stochastic text generation would achieve for the business. After a while you get used to the waves of magic bullet solutions that promise to revolutionise the industry but introduce as many new problems as they solve.

  • @resipsaloquitur@lemm.ee
    24 points · 20 days ago

    So we “fixed” the easiest part of software development (writing code) and now humans have to clean up the AI slop.

    I’ll bet this lovely new career field comes with a pay cut.

    • @IllNess@infosec.pub
      11 points · 20 days ago

      I would charge more. Fixing my own code is easier than fixing someone else’s code.

      I think I might go insane if that was my career.

    • Bappity
      17 points · 21 days ago

      LLMs are so fundamentally different from AGI, it’s a wonder people believe that balderdash.

  • @hera@feddit.uk
    23 points · 21 days ago

    As a very experienced Python developer, I have tried using ChatGPT for debugging and vibe coding multiple times, and you just end up going in circles without ever getting to a working solution. It ends up being a lot faster to just do it yourself.

    • @gigachad@sh.itjust.works
      12 points · edited · 21 days ago

      Absolutely agree. I just use it for simple stuff like “for every nth row in a pandas dataframe, slice a string from x to y if column z is True,” or something like that. That kind of logic takes time to write, and GPT usually comes up with a correct solution, or one that doesn’t need much modification.
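
      For illustration, a minimal pandas sketch of that kind of request (the dataframe, column names, and slice bounds here are all made up):

      ```python
      import pandas as pd

      # Hypothetical data; the column names ("text", "z") are invented.
      df = pd.DataFrame({
          "text": ["alphabet", "binary", "compiler", "debugger", "editor", "fossil"],
          "z": [True, False, True, True, False, True],
      })

      n, x, y = 2, 1, 4  # every nth row, slice from x to y

      # For every nth row where column z is True, slice the string in place.
      mask = (df.index % n == 0) & df["z"]
      df.loc[mask, "text"] = df.loc[mask, "text"].str[x:y]
      print(df)
      ```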

      But debugging or analyzing an error? No thanks

      • @AdamEatsAss@lemmy.world
        9 points · 20 days ago

        I have on multiple occasions told it exactly what the error is and how to fix it. The AI agrees, apologizes, and gives me the same broken code again. It takes the same amount of time to describe the error as it would have for me to fix it.

        • @BreadstickNinja@lemmy.world
          8 points · 20 days ago

          This is my experience as well. Best case scenario it gives me a rough idea of what functions to use or how to set up the logic, but then it always screws up the actual implementation. I’ve never asked ChatGPT for coding help and gotten something I can use off the bat. I always have to rewrite it before it’s functional.

        • spirinolas
          3 points · 20 days ago

          My rule of thumb is: if it doesn’t give you the solution right off the bat, it won’t give you one. If that happens, either fix it yourself or start a new chat and reformulate the question completely.

    • @j0ester@lemmy.world
      2 points · 19 days ago

      Thank you! So many morons are saying you can just use generative AI to build whatever you need. That’s a no…

    • @Serinus@lemmy.world
      1 point · 20 days ago

      “Give me some good warning message CSS” was a pretty nice use case. It’s a nice tool, approaching the importance of Google search.

      But you have to know when its answers are good and when they’re useless or harmful. That requires a developer.

  • @kyub@discuss.tchncs.de
    14 points · edited · 20 days ago

    “AI” is good for pattern matching, generating boilerplate/template code and text, and generating images. Maybe also translation. That’s about it. And of course it’s often flawed or inaccurate, so it needs human oversight. Everything else is like a sales scam. A very profitable one.
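
    For a sense of what that boilerplate tier looks like, a hypothetical Python sketch (the class and its fields are invented) of the kind of template code it produces reliably:

    ```python
    from dataclasses import dataclass, asdict

    # Plain data container with (de)serialization helpers: mechanical,
    # pattern-shaped code with nothing clever in it.
    @dataclass
    class User:
        name: str
        email: str
        active: bool = True

        def to_dict(self) -> dict:
            return asdict(self)

        @classmethod
        def from_dict(cls, data: dict) -> "User":
            return cls(**data)

    print(User.from_dict({"name": "Ada", "email": "ada@example.com"}).to_dict())
    ```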

  • @Simulation6@sopuli.xyz
    11 points · 21 days ago

    Can AI fix itself so that it gets better at a task? I don’t see how that could be possible; it would just fall into a feedback loop where it gets stranger and stranger.
    Personally, I will always lie to AI when asked for feedback.

      • @jj4211@lemmy.world
        4 points · 20 days ago

        That’s been one of the things that has really stumped a team that wanted to go all in on an AI offering. They go to customer evaluations, and there’s really just nothing they can do about the problems reported. They can try more training and hope for the best, but that likely won’t work and could also make other things worse.

    • @LinyosT@sopuli.xyz
      2 points · 19 days ago

      It’s always the people that don’t have a clue.

      It’s also always the people that think they’ll get some benefit out of AI taking over, when they’re absolutely part of the group that’ll be replaced by AI.

      • @lennivelkant@discuss.tchncs.de
        1 point · 19 days ago

        It’s a cargo cult. They don’t understand, but they like what it promises, so they blindly worship. Sceptics become unbelievers, visionaries become prophets and collateral damages become sacrifices.

        They may use different terms, but if some job became obsolete, that’s just the price of a better future to them. And when the day of Revelation comes, they’ll surely be among the faithful delivered from the shackles of human labour to enjoy the paradise built on this technology. Any day now…

  • Bappity
    4 points · 21 days ago

    the tool can’t replace the person or whatever

  • @VagueAnodyneComments@lemmy.blahaj.zone
    4 points · 20 days ago

    Ars Technica would die of an aneurysm if it stopped posting about generative AI for even 30 seconds

    since they’re the authority on tech, and all they write about is shitty generative AI from 2017, that must mean shitty generative AI from 2017 is the only tech worth writing about