From kde-devel Fri Dec 01 03:07:04 2023
From: Ethan Barry
Date: Fri, 01 Dec 2023 03:07:04 +0000
To: kde-devel
Subject: Re: Interest in building an LLM frontend for KDE
Message-Id:
X-MARC-Message: https://marc.info/?l=kde-devel&m=170140614122528

On Thursday, November 30th, 2023 at 8:53 PM, Loren Burkholder wrote:

> Howdy, everyone!
>
> You are all undoubtedly aware of the buzz around LLMs for the past year. Of course, there are many opinions on LLMs, ranging from "AI is the future/endgame for web search or programming or even running your OS" to "AI should be avoided like the plague because it hallucinates and isn't fundamentally intelligent" to "AI is evil because it was trained on massive datasets that were scraped without permission and regurgitates that data without a license". I personally am of the opinion that while output from LLMs should be taken with a grain of salt and cross-examined against trustworthy sources, they can be quite useful for tasks like programming.
>
> KDE obviously is not out to sell cloud services; that's why going to https://kde.org doesn't show you a banner "Special offer! Get 1 TB of cloud storage for $25 per month!" Therefore, I'm not here to talk about hosting a (paywalled) cloud LLM. However, I do think it is worthwhile to open a discussion about a KDE-built LLM frontend app for local, self-hosted, or third-party-hosted models.
>
> From a technical standpoint, such an app would be fairly easy to implement. It could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't focused on a server mode) to host the actual LLM; either of those backends supports a wide variety of hardware (including running on CPU; no fancy GPU required), as well as many open-source LLM models like Llama 2.
> Additionally, using Ollama could allow users to easily interact with remote Ollama instances, making this an appealing path for users who wish to offload LLM work to a home server, or even from a laptop to a more powerful desktop.
>
> From an ideological standpoint, things get a little more nuanced. Does KDE condone or condemn the abstract concept of an LLM? What about the actual models we have available (i.e., are there no models today that were trained in a way we view as morally OK?)? Should we limit support to open models like Llama 2, or would we be OK with adding API support for proprietary models like GPT-4? Should we join the mainstream push to put AI into everything, or should we stand apart and let Microsoft have its fun focusing on AI instead of potentially more useful features? I don't recall seeing any discussion about this before (at least not here), so I think those are all questions that should be fairly considered before development of a KDE LLM frontend begins.
>
> It's also worth pointing out that while we can sit behind our screens and spout our ideals about AI, there are many users who aren't really concerned about that and just like having a chatbot that responds, in what at least appears to be an intelligent manner, to whatever they ask it. I have personally made use of AI while programming to help me understand APIs, and I'm sure that other people here have also had positive experiences with AI and plan to continue using it.
>
> I fully understand that by sending this email I will likely set off a firestorm of arguments about the morality of AI, but I'd like to remind everyone to (obviously) keep it civil. And for the record, if public opinion comes down in favor of building a client, I will happily assume the responsibility of kicking off, and potentially maintaining, development of said client.
>
> Cheers,
> Loren Burkholder
>
> P.S.
> If development of such an app goes through, you can get internet points by adding support for Stable Diffusion and/or DALL-E :)
>
> [0]: https://github.com/jmorganca/ollama
> [1]: https://github.com/ggerganov/llama.cpp

I am anti-LLM on the grounds that the training sets were created without the original authors' consent. I see no issue with a libre/ethical LLM, if there is one, though. If a developer or team of developers wants to implement a Qt- and KDE-integrated LLM app, I have no problem with that, but I believe KDE as an organization should probably steer clear of such a thorny subject; it's sure to upset a lot of users no matter what position is taken. On the other hand, for those people who do make use of AI tools, a native interface would be nice, especially one as feature-ful as you're describing...

Regards,
Ethan B.
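P.S. For anyone curious what the client-to-backend interaction you describe might look like in practice, here is a rough sketch in Python against Ollama's HTTP API. This is only an illustration, not a proposal for the actual implementation: it assumes Ollama's default port 11434, its /api/generate endpoint with non-streaming responses, and a model such as llama2 already pulled on the server.

```python
import json
import urllib.request

# Default local Ollama endpoint; point this at a home server or desktop
# to get the remote-offloading setup described above.
OLLAMA_URL = "http://localhost:11434"


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming POST request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )


def ask(model: str, prompt: str) -> str:
    """Send the prompt to the Ollama server and return the model's reply text."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.load(resp)["response"]


# Example (requires a running Ollama instance with the model pulled):
#   print(ask("llama2", "Why is the sky blue?"))
```

A hypothetical KDE client would of course wrap this in a proper Qt networking layer and stream tokens as they arrive, but the protocol itself really is that small.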