• @over_clox@lemmy.world
      link
      fedilink
      English
      215 months ago

      Prolly like if you look up something that doesn’t actually exist, it’ll find you results anyways.

      This ought to be fun for Rule 34…

    • @Meron35@lemmy.world
      link
      fedilink
      English
      175 months ago

      There is an AI browser war going on right now, see Comet (Perplexity), Atlas (ChatGPT), Claude AI Agent from Chrome etc.

      They work by letting the AI continuously see everything your browser can see, such as your emails, banking details, financial habits, online shopping accounts, etc.

      By doing so, they promise to be better digital assistants, so that you ask just ask the browser to do tasks such as online shopping, booking holidays, etc.

      Even ignoring the severe privacy concerns, AI browsers are significantly prone to prompt injection. That is, any random webpage with hidden text can override the instructions you give it to carry out malicious attacks.

    • Riskable
      link
      fedilink
      English
      -5
      edit-2
      5 months ago

      Today, when you search for something you get a handful of ads, SEO-optimized bullshit, and maybe the 6th link will be what you were actually looking for.

      When you tell the AI agent (in your browser) to search for something, not only do you get the most relevant results (because you can make your prompt vastly more specific and detailed), you completely skip all that other stuff that you didn’t want.

      I’ve been saying for some time now that AI is going to kill free search engines because it’s such a better way to search for stuff. Free search engines like Google and SEO-optimizing companies are hindrances to efficient browsing and drowning the web in bullshit. Poisoning search results.

      An AI agent will skip past all that stuff and give you just what you want; you never see any ads!

      • @seathru@lemmy.sdf.org
        link
        fedilink
        English
        125 months ago

        An AI agent will skip past all that stuff and give you just what you want; you never see any ads!

        You’re wondering, “Why is my cat sneezing?”

        While you ponder that, why not treat yourself to something delicious? At Carl’s Jr., our Charbroiled Burgers are made with 100% Angus beef, grilled to perfection, and packed with bold flavors. Treat yourself today—you deserve a meal that satisfies!

        Now, back to your question: Sneezing in cats can be caused by allergies, respiratory infections, or irritants. It’s always a good idea to consult with your veterinarian to ensure your feline friend is healthy.

        Let me know if you have more questions!

        • Riskable
          link
          fedilink
          English
          -35 months ago

          You’ve obviously never used an open source AI model (running locally on your PC) if you think that’s how it’d go.

          • @seathru@lemmy.sdf.org
            link
            fedilink
            English
            35 months ago

            Likewise, a properly set up and locally hosted searx instance should not be returning ads and “SEO-optimized bullshit”. You’re comparing “free” corporate owned services to privately owned ones. OFC corporate owned is going to feed you ads.

            • Riskable
              link
              fedilink
              English
              15 months ago

              The takeaway here is: Open source doesn’t suffer from enshittification.

              Learn and contribute to FOSS or stop bitching 🤣

      • Lemminary
        link
        fedilink
        English
        75 months ago

        That’s assuming the AI won’t look at the results and still make shit up. I’ve used AI-assisted search and i know that it’s not reliable.

        • Riskable
          link
          fedilink
          English
          -35 months ago

          Ok how would that work:

          find me some good recipes for hibachi style ginger butter

          AI model returns 10 links, 4 of which don’t actually exist (because it hallucinated them)? No. If they didn’t exist, it wouldn’t have returned them because it wouldn’t have been able to load those URLs.

          It’s possible that it could get it wrong because of some new kind of LLM scamming method but that’s not “making shit up” it’s malicious URLs.

          • Lemminary
            link
            fedilink
            English
            45 months ago

            If they didn’t exist, it wouldn’t have returned them

            And yet I’ve had Bing’s Copilot/ChatGPT (with plugins like Consensus), Gemini, and Perplexity do exactly that, but worse. Sometimes they’ll cite sources that don’t mention anything related to the answer they’ve provided because the information they’re giving is based on some other training data they can’t source. They were asked to provide a source, but won’t necessarily give you the source. Hell, sometimes they’ll answer an adjacent just to spit out an answer–any answer–to fulfill the request.

            LLMs are simply not the appropriate tool for the job. This is most obvious when you need the specificity and accuracy.

            • Riskable
              link
              fedilink
              English
              25 months ago

              Yeah… The big commercial models have system prompts that fuck it all up. That’s my hypothesis, anyway.

              You have to try it with an open source model. You tell it to turn the titles, URLs, and nothing else. That seems to work fantastic 👍

              I’m doing it with Open WebUI and ollama cloud which is open source models that you could run locally—if you have like $5,000 worth of hardware.

              • Lemminary
                link
                fedilink
                English
                15 months ago

                Interesting. I have a couple automations in mind, I’ll have to try it out, then.

        • Riskable
          link
          fedilink
          English
          15 months ago

          If you don’t like your current open source AI, just use a different one or an embedding that works around whatever bias you don’t like. Maybe open a ticket?