• @iopq@lemmy.world · 63 points · 5 months ago

    I have some suggestions: let’s not make people translate to English unless they are learning English. I don’t want to be thinking about whether “I’m coming Friday” is correct grammar in English. I want to be thinking about my target language!

    • Cr4yfishOP · 33 points · 5 months ago · edited

      Thanks for the suggestion, I’ll definitely try to make the app as language-inclusive as possible!

      Also, sorry if the post title was too vague. The app is similar to Duolingo in structure and concept, but it isn’t specific to language learning; it’s meant to cater to any subject, really.

      For example, I personally use it to study for my university subjects.

    • @OsrsNeedsF2P@lemmy.ml · 13 points · 5 months ago

      This app seems to be about generic courses of any kind, not just language learning, so someone could make a language course in the way you’ve described.

    • 𝒎𝒂𝒏𝒊𝒆𝒍 · 11 points · 5 months ago · edited

      Yeah, that’s my minor pet peeve with Duolingo too: my language doesn’t have or need articles like “the” or “a”, so I often forget them. It’s so annoying to fail because of such a minor thing, especially when their suggested English often looks terrible.

  • 𝕸𝖔𝖘𝖘 · 17 points · 5 months ago · edited

    This is a really great use of LLMs! Seriously, great job! Once it’s fully self-hostable (including the LLM model), I will absolutely find space for it on the home server. Maybe Rupeshs fastdcpu could work as the model and generation backend. I don’t remember what its license is, though.

    Edit: added link.

  • @AliasAKA@lemmy.world · 13 points · 5 months ago

    Is there any interest in getting local models to run with this? I’d rather not use Gemini; that way all the data can reside locally (and no login is required).

    I’d be happy to work on this, though I’m a Python developer, not a TypeScript one.

    • Cr4yfishOP · 6 points · 5 months ago

      Yeah, good idea. It’s possible to do that with WebLLM & Langchain. Once Langchain is integrated, it’s fairly similar to the Python version, so it should be doable, I think.
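      To make that concrete, here’s a rough, dependency-free sketch of how an in-browser path could look. Everything here is hypothetical: `ChatModel` stands in for a real WebLLM wrapper (e.g. LangChain’s ChatWebLLM), and the prompt helper is made up for illustration.

      ```typescript
      // Hypothetical sketch only: `ChatModel` stands in for a real in-browser
      // wrapper such as LangChain's ChatWebLLM; the prompt helper is made up.
      interface ChatModel {
        invoke(prompt: string): Promise<string>;
      }

      // Build the same kind of grounded, JSON-only instruction the server path uses.
      export function buildCoursePrompt(numLevels: number, sourceText: string): string {
        return [
          `Create ${numLevels} course levels as strict JSON.`,
          "Use ONLY the material below; do not invent facts.",
          "--- SOURCE ---",
          sourceText,
        ].join("\n");
      }

      // With a WebLLM-backed model, this would run entirely client-side.
      export async function generateLevels(model: ChatModel, numLevels: number, sourceText: string): Promise<unknown> {
        const raw = await model.invoke(buildCoursePrompt(numLevels, sourceText));
        return JSON.parse(raw); // same structured-JSON contract as the server path
      }
      ```

      Since the model is injected behind an interface, the same generation code could serve both the Gemini path and a local one.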

      • @AliasAKA@lemmy.world · 2 points · 5 months ago

        Ah, interesting. Again, happy to help out if there’s anything I can contribute. I can make a feature request on GitHub if there’s interest.

  • Silmathoron ⁂ · 9 points · 5 months ago · edited

    @Cr4yfish nice project 🙂
    I’m a bit worried about the AI part, though, as you’d want an app whose main purpose is “learning” to guarantee, if not the reliability of the material (since anyone can contribute), at least the reliability of the course generation process that it proposes.
    As far as I know, this is not possible with current generative AI tools, so what’s your plan to make sure hallucinations do not creep in?

    • Cr4yfishOP · 4 points · 5 months ago

      Thanks. My general strategy for reducing hallucinations with GenAI is to not ask the model to make things up, but to have it work only on existing text. That’s why I’m not allowing users to create content without source material.

      However, LLMs will be LLMs; I’ve been testing it a lot and have already found multiple hallucinations. I built in a reporting system, although only submitting reports works right now; viewing reported questions doesn’t yet.

      That’s my short-term plan for getting decent content quality, at least. I also want to move away from Vercel AI & Gemini to a Langchain agent system, or maybe a graph, which should improve output quality.
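      One lightweight way to flag hallucination candidates automatically (purely a sketch, not something the app does) is to score how much of a generated question’s vocabulary actually appears in the source text:

      ```typescript
      // Purely illustrative (not part of the app): score how much of a generated
      // question's vocabulary actually appears in the source material.
      // A low score marks the question as a hallucination candidate for review.
      export function groundingScore(generated: string, source: string): number {
        // Content words = runs of 4+ letters; \p{L} also covers German umlauts.
        const tokens = (s: string) => new Set(s.toLowerCase().match(/\p{L}{4,}/gu) ?? []);
        const gen = tokens(generated);
        const src = tokens(source);
        if (gen.size === 0) return 0;
        let hits = 0;
        for (const t of gen) if (src.has(t)) hits++;
        return hits / gen.size; // 1.0 = every content word is grounded in the source
      }
      ```

      Exact token overlap is crude (it misses inflected forms), but it’s cheap enough to run on every generated question before it reaches a learner.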

      Maybe in some parallel Universe this really takes off and many people work on high quality Courses together…

    • Cr4yfishOP · 6 points · 5 months ago

      The UI mostly works offline once it’s loaded, thanks to aggressive caching. Downloading course content was on the initial roadmap, but I removed it since I wasn’t sure anyone would want the feature.

      Syncing is a real pain in the ass, but I’ll implement it if at least a couple of people want it.

      • Auster · 4 points · 5 months ago

        I don’t know how much of a niche I’m in, but I still use dictionary software from the Windows 95–2000 era, and Android apps on a completely offline, vanilla VM, partly because my internet randomly goes bad, and partly because I’m neurotic about digital content vanishing once support ends.

        • Cr4yfishOP · 3 points · 5 months ago

          Understandable. I added a proper offline mode back to the roadmap on GitHub.

    • Cr4yfishOP · 8 points · 5 months ago

      I use Gemini, which supports PDF file uploads, combined with structured outputs to generate course sections, levels, and question JSON.

      When you upload a PDF, the browser uploads it directly to S3 storage, then sends the filename and other metadata to the server. The server downloads the document from S3 and passes it to Gemini, which streams JSON back to the browser. After that, the PDF is permanently deleted from S3.
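      The server-side sequencing looks roughly like this (a hedged sketch with made-up names; `Storage` and `Model` stand in for the real S3 and Gemini clients, and the try/finally cleanup is my reading of “the PDF is permanently deleted” afterwards):

      ```typescript
      // Hypothetical sketch of the flow above; Storage and Model are stand-ins
      // for the real S3 and Gemini clients (names are made up).
      type Storage = {
        download(name: string): Promise<ArrayBuffer>;
        remove(name: string): Promise<void>;
      };
      type Model = { generateJSON(doc: ArrayBuffer): Promise<unknown> };

      export async function processUpload(docName: string, storage: Storage, model: Model): Promise<unknown> {
        const doc = await storage.download(docName); // server pulls the uploaded PDF from S3
        try {
          return await model.generateJSON(doc);     // model streams structured JSON back
        } finally {
          await storage.remove(docName);            // delete the PDF afterwards, even on failure
        }
      }
      ```

      Injecting the clients like this also makes the delete-after-use guarantee easy to test with in-memory stubs.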

      Privacy-wise, I wouldn’t upload anything sensitive, since I don’t know what Google does with PDFs uploaded to Gemini.

      The prompts are in English, so the output language is English as well. That said, I’ve actually only tested it with German lecture PDFs myself.

      So yes, it should probably work with any language that Gemini supports.

      Here is the source code for the core function behind this feature:

      export async function createLevelFromDocument({
          docName, apiKey, numLevels, courseSectionTitle, courseSectionDescription
      }: {
          docName: string, apiKey: string, numLevels: number, courseSectionTitle: string, courseSectionDescription: string
      }) {
          const hasCourseSection = courseSectionTitle.length > 0 && courseSectionDescription.length > 0;

          // Step 1: Download the PDF and get a buffer from it
          const blob = await downloadObject({ filename: docName, path: "/", bucketName: "documents" });
          const arrayBuffer = await blob.arrayBuffer();

          // Step 2: call the model and pass the PDF
          const google = createGoogleGenerativeAI({ apiKey: apiKey });

          const courseSectionsPrompt = createLevelPrompt({ hasCourseSection, title: courseSectionTitle, description: courseSectionDescription });

          const isPDF = docName.endsWith(".pdf");

          const content: UserContent = [];

          if (isPDF) {
              content.push(pdfUserMessage(numLevels, courseSectionsPrompt) as any);
              content.push(pdfAttatchment(arrayBuffer) as any);
          } else {
              const html = await blob.text();
              content.push(htmlUserMessage(numLevels, courseSectionsPrompt, html) as any);
          }

          const result = await streamObject({
              model: google("gemini-1.5-flash"),
              schema: multipleLevelSchema,
              messages: [
                  {
                      role: "user",
                      content: content
                  }
              ]
          });

          return result;
      }
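      (`multipleLevelSchema` itself isn’t shown above. As a purely hypothetical illustration of the kind of shape such a structured output might enforce, written as a plain type guard rather than the real zod schema:)

      ```typescript
      // Hypothetical shape of the generated output; the real multipleLevelSchema
      // is a zod schema passed to streamObject. Field names here are guesses.
      type Question = { question: string; answers: string[]; correctIndex: number };
      type Level = { title: string; questions: Question[] };
      type MultipleLevels = { levels: Level[] };

      // Runtime check that a parsed value matches the assumed shape.
      export function isMultipleLevels(v: unknown): v is MultipleLevels {
        const o = v as MultipleLevels;
        return Array.isArray(o?.levels) && o.levels.every(l =>
          typeof l.title === "string" &&
          Array.isArray(l.questions) &&
          l.questions.every(q =>
            typeof q.question === "string" &&
            Array.isArray(q.answers) &&
            Number.isInteger(q.correctIndex)
          )
        );
      }
      ```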
      
  • @grapemix@lemmy.ml · 4 points · 5 months ago

    Is it for self-host ppl too?

    For all projects/apps, I look for OIDC, S3, and PgSQL support. It’s easier to implement these features early, and they make any project more popular in the self-hosting community.

    • Cr4yfishOP · 2 points · 5 months ago

      Is it for self-host ppl too?

      In theory, not an issue. I use Supabase, which you can self-host as well.

      You can also self-host the Mistral client, but not Gemini. However, I’m planning to move away from Gemini toward a more open solution that would also support self-hosting, or in-browser AI.

      I am looking for OIDC, S3 and PgSQL

      Since I use Supabase, it runs on PgSQL and Supabase Storage, which is just an adapter for AWS S3 (or any S3, really). For auth, I use Supabase Auth, which uses OAuth 2.0; that’s the same as OIDC, right?

      • @grapemix@lemmy.ml · 2 points · 5 months ago

        Very cool. You can check out Ollama for hosting local AI models.

        OIDC is an extension of OAuth2 that focuses on user authentication rather than user authorization. Once OIDC authenticates a user, it uses OAuth2 specifications to perform authorization.

        The easiest way to support OIDC is through a library for your framework/language; all major languages should already have one. Take a look at Authelia, which has pretty nice docs. We host lots of apps and don’t want to log in a hundred times, once per app. It’s nice to log in once and have all the apps play nicely with each other ;)
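        For a rough idea of what that involves (hypothetical values throughout; key names follow recent Authelia docs, so double-check against your version), registering an app as an OIDC client in Authelia looks something like:

        ```yaml
        identity_providers:
          oidc:
            clients:
              - client_id: 'learning-app'              # hypothetical client id
                client_name: 'Self-hosted learning app'
                client_secret: '$pbkdf2-sha512$...'    # hashed secret, generated with the Authelia CLI
                authorization_policy: 'two_factor'
                redirect_uris:
                  - 'https://learn.example.com/auth/callback'
                scopes:
                  - 'openid'
                  - 'profile'
                  - 'email'
        ```

        The app then only needs a standard OIDC client library pointed at Authelia’s discovery endpoint.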

    • Cr4yfishOP · 6 points · 5 months ago

      Haha. Well, we can’t all actually be Duolingo and employ people to create the courses :D

        • Robust Mirror · 3 points · 5 months ago

          I’ve made custom flashcards for Anki to study, and I tested this app on some similar material; it was a lot faster and easier. Making Anki sets feels like it takes forever, so the investment in a custom set is only worth it for things you’ll study for a long time.

          If all you want is to generate a bunch of flashcards quickly, and you have a PDF with the information presented clearly, this is an easy method.

        • Cr4yfishOP · 1 point · 5 months ago

          Well, yes, in a way at least. I’m not pretending to have invented something that’s never been done before. That said, it already has multiple features that Anki doesn’t.

  • @bloubz@lemmygrad.ml · 1 point · 5 months ago · edited

    Cool concept! Good luck with it.

    Hope you get around to letting people switch models, and maybe allow open-source/open-data models?

    I’ve also heard about the Vercel AI SDK, which lets you use different models through a common SDK, so you don’t depend on one implementation.