List:       kde-devel
Subject:    Interest in building an LLM frontend for KDE
From:       Loren Burkholder <computersemiexpert () outlook ! com>
Date:       2023-12-01 2:53:22
Message-ID: SA3PR20MB5936B8C9D0ACA1C3B05DAC15C781A () SA3PR20MB5936 ! namprd20 ! prod ! outlook ! com

Howdy, everyone!

You are all undoubtedly aware of the buzz around LLMs over the past year. Of course, there are many opinions on LLMs, ranging from "AI is the future/endgame for web search, programming, or even running your OS" to "AI should be avoided like the plague because it hallucinates and isn't fundamentally intelligent" to "AI is evil because it was trained on massive datasets that were scraped without permission and regurgitates that data without a license". Personally, I'm of the opinion that while output from LLMs should be taken with a grain of salt and cross-checked against trustworthy sources, they can be quite useful for tasks like programming.

KDE is obviously not in the business of selling cloud services; that's why going to https://kde.org doesn't greet you with a banner reading "Special offer! Get 1 TB of cloud storage for $25 per month!" Therefore, I'm *not* here to talk about hosting a (paywalled) cloud LLM. However, I do think it's worth opening a discussion about a KDE-built LLM frontend app for local, self-hosted, or third-party-hosted models.

From a technical standpoint, such an app would be fairly easy to implement. It could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't focused on a server mode) to host the actual LLM; either of those backends supports a wide variety of hardware (including running on the CPU; no fancy GPU required) as well as many open-source models like Llama 2. Additionally, using Ollama would let users easily interact with remote Ollama instances, making this an appealing path for anyone who wants to offload LLM work to a home server, or from a laptop to a more powerful desktop.
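
To make that concrete, here's a rough sketch (an illustration, not a design proposal) of what querying an Ollama instance from Qt could look like. It assumes Ollama's default REST endpoint on localhost:11434 and an already-pulled "llama2" model, and it uses the simple non-streaming mode; a real chat client would want "stream": true and would read the reply incrementally.

// Minimal sketch: ask a local Ollama instance one question and print the answer.
// Assumes Ollama is running on its default port (11434) and that the "llama2"
// model has been pulled; both are illustrative choices. Needs QT += network.
#include <QCoreApplication>
#include <QJsonDocument>
#include <QJsonObject>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QTextStream>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QNetworkAccessManager manager;
    QNetworkRequest request(QUrl(QStringLiteral("http://localhost:11434/api/generate")));
    request.setHeader(QNetworkRequest::ContentTypeHeader, QStringLiteral("application/json"));

    // Non-streaming request for simplicity.
    QJsonObject body;
    body["model"] = QStringLiteral("llama2");
    body["prompt"] = QStringLiteral("Explain what KDE Plasma is in one sentence.");
    body["stream"] = false;

    QNetworkReply *reply = manager.post(request, QJsonDocument(body).toJson());
    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        // Ollama returns a JSON object whose "response" field holds the generated text.
        const QJsonObject response = QJsonDocument::fromJson(reply->readAll()).object();
        QTextStream(stdout) << response.value(QStringLiteral("response")).toString() << '\n';
        reply->deleteLater();
        app.quit();
    });

    return app.exec();
}

Pointing the same request at another machine is just a matter of changing the URL, which is all the home-server/offloading scenario above would require.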

From an ideological standpoint, things get a little more nuanced. Does KDE condone or condemn the abstract concept of an LLM? What about the actual models available today (i.e., are there any models that were trained in a way we consider morally acceptable)? Should we limit support to open models like Llama 2, or would we be OK with adding API support for proprietary models like GPT-4? Should we join the mainstream push to put AI into everything, or should we stand apart and let Microsoft have its fun focusing on AI instead of potentially more useful features? I don't recall seeing any discussion about this before (at least not here), so I think these are all questions that deserve fair consideration before development of a KDE LLM frontend begins.

I think it's also worth pointing out that while we can sit behind our screens and spout our ideals about AI, many users aren't really concerned about any of that; they just like having a chatbot that responds in what at least appears to be an intelligent manner to whatever they ask it. I have personally used AI while programming to help me understand APIs, and I'm sure other people here have also had positive experiences with AI and plan to keep using it.

I fully understand that by sending this email I will likely be setting off a firestorm of arguments about the morality of AI, but I'd like to remind everyone to (obviously) keep it civil. And for the record, if public opinion comes down in favor of building a client, I will happily assume the responsibility of kicking off and potentially maintaining development of said client.

Cheers,
Loren Burkholder

P.S. If development of such an app goes through, you can get internet points by adding support for Stable Diffusion and/or DALL-E :)

[0]: https://github.com/jmorganca/ollama
[1]: https://github.com/ggerganov/llama.cpp


["signature.asc" (application/pgp-signature)]

[prev in list] [next in list] [prev in thread] [next in thread] 

Configure | About | News | Add a list | Sponsored by KoreLogic