• Lemminary
    4
    5 months ago

    If they didn’t exist, it wouldn’t have returned them

    And yet I’ve had Bing’s Copilot/ChatGPT (with plugins like Consensus), Gemini, and Perplexity do exactly that, but worse. Sometimes they’ll cite sources that don’t mention anything related to the answer they’ve given, because the information actually comes from some other training data they can’t source. They were asked to provide a source, but they won’t necessarily give you one. Hell, sometimes they’ll answer an adjacent question just to spit out an answer, any answer, to fulfill the request.

    LLMs are simply not the appropriate tool for the job. This is most obvious when you need specificity and accuracy.

    • Riskable
      2
      5 months ago

      Yeah… The big commercial models have system prompts that fuck it all up. That’s my hypothesis, anyway.

      You have to try it with an open source model. You tell it to return the titles, URLs, and nothing else. That seems to work fantastic 👍

      I’m doing it with Open WebUI and Ollama cloud, which serves open source models that you could also run locally, if you have like $5,000 worth of hardware.
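
      The approach above can be sketched in a few lines. This is a minimal, hypothetical example, assuming a local Ollama server on its default port (11434) and a model named `llama3`; neither the model name nor the input text is from the thread. The system prompt pins the output format to titles and URLs only:

      ```python
      import json
      import urllib.request

      # Assumption: a local Ollama server is running at its default address.
      OLLAMA_URL = "http://localhost:11434/api/chat"

      def build_payload(results_text: str, model: str = "llama3") -> dict:
          """Build an Ollama /api/chat request whose system prompt restricts
          the model to returning titles and URLs, and nothing else."""
          return {
              "model": model,
              "stream": False,
              "messages": [
                  {"role": "system",
                   "content": "Return only the titles and URLs from the text, "
                              "one per line as 'Title - URL'. Output nothing else."},
                  {"role": "user", "content": results_text},
              ],
          }

      def extract_links(results_text: str) -> str:
          """Send the request and return the model's reply text."""
          req = urllib.request.Request(
              OLLAMA_URL,
              data=json.dumps(build_payload(results_text)).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)["message"]["content"]
      ```

      The same payload works against Open WebUI’s OpenAI-compatible endpoint with minor changes; the key idea is just the strict system prompt.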

      • Lemminary
        1
        5 months ago

        Interesting. I have a couple of automations in mind; I’ll have to try it out, then.