Between wanting to do more with local LLMs, WSL annoyances, and the direction tech companies have been going lately, I think it’s time I start exploring a full Linux migration

I’m a software dev, I’m comfortable in the command line, and I used to write the node configuration piece of something similar to Chef (flavor/version-agnostic setup of cloud environments)

So for me, Linux has always been a “modify the script and rebuild fresh” kind of deal… Even my dev VMs involved a lot of scripts and snapshots. I don’t enjoy configuration and I really hate debugging it, but I can muddle through when I have to

Web searches have pushed me towards Ubuntu for LLM work, but I’ve never been a big fan of its window managers. I like little flourishes like animations and lots of options I can set graphically, and I use multiple desktops across multiple monitors

I’ve tried the one it comes standard with (GNOME) and KDE (although it’s been about 5 years since I last gave them a real shot).

I’m mostly looking for the most reasonable footprint that is “good enough”, something that feels polished to at least the Windows XP level - subtle animations instead of instant popups, rounded borders, maybe a bit of transparency here and there.

I’m looking at Ubuntu w/

  • KDE Plasma (I understand it’s very configurable, but I don’t love the look and it seems to have a bigger footprint)

  • Budgie (looks nice, never heard of it before today)

  • Kylin (looks very Windows 10, which is nice; a bit skeptical about the Chinese focus)

  • MATE (I like the look, but it seems a bit dubiously centralized)

  • Unity (looks like the standard Ubuntu taken to its natural conclusion)

  • Rhino Linux (something new, which makes me skeptical, but it’s pretty and seems more like existing tools packaged together, which makes me think any issues might not impact my actual workflow)

  • anything the community is big on for this; personally I’d pick openSUSE, but I need to maximize compatibility with bleeding-edge LLM projects

My hardware and hard requirements are:

  • NVIDIA 1060 Ti
  • Ryzen 5500U
  • 16 GB RAM
  • 4 drives, nearly full, because it’s a computer of Theseus running the same (upgraded) Vista license that came with the case like 15 years ago
  • multi-desktop, multi-monitor
  • can handle a lot of browser windows/tabs
  • ideally the setup is just a package manager install script with all my dependencies
  • gaming support would be nice, but I’ll be dual booting for VR anyway

I’ve been out of the game for a while, so I’d love to hear what the feeling is in the community these days

(Side note: is Pine64 as cool a company as it seems?)

  • @theneverfoxOP · 19 months ago

    I had a contract come up and had to shelve this for a bit, and your comment immediately annoyed me, because it really wasn’t what I wanted to hear

    But it also stuck with me, because it sounded like the advice I throw at new devs starting a project: it’s a PITA up front, but it pays dividends pretty quickly.
    So I looked it up, and despite my bad experiences with Docker and Kubernetes (I was tasked with doing weird, off-label things with them, and it sucked), I’ve decided to take your advice and stop looking for Docker workarounds

    And since it seems like it comes from a place of experience, I figured I’d share a bit more about what I want to do and see if you had any more advice

    Basically, I want to link together basic models trained to do different things, with the end goal being something between a conversation partner and an assistant. The idea is to use very specific prompts to bypass the limitations of smaller models. The first goal is to take one LLM plus a conventional management program and summarize key information, then use very specific structured prompts to generate both a response to be vocalized and metadata that changes the state of the management system.

    My thought is to take something like Alpaca or Falcon 7B to track and summarize relevant information, feed it into another such model trained as a conversation partner with this input and output format, then throw together a web interface and do speech-to-text/text-to-speech on my phone or dev computer.
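    Roughly, the chain I have in mind looks like this. The model calls are stubbed out as plain functions, and every prompt and field name here is hypothetical, just to show the shape of it, not any particular library’s API:

```python
# Sketch of the two-model chain: a summarizer LLM condenses the current
# state + user input, then a conversation LLM turns that into a spoken
# reply plus metadata updates for the management system.
# Both "models" below are stubs; real calls would hit a local endpoint.
import json

def summarizer_llm(prompt: str) -> str:
    # Stub standing in for a small local summarizer model.
    return "User asked about tonight; calendar shows a 7pm booking."

def conversation_llm(prompt: str) -> str:
    # Stub standing in for the conversation model; it must reply in the
    # rigid JSON format the rest of the pipeline expects.
    return json.dumps({
        "speech": "You have a 7pm booking tonight, want a reminder?",
        "state_changes": [{"action": "add_reminder", "time": "18:30"}],
    })

def run_turn(user_input: str, management_state: dict) -> dict:
    # Step 1: boil the relevant context down to a compact summary.
    summary = summarizer_llm(
        "Summarize the key facts for this request.\n"
        f"State: {json.dumps(management_state)}\nInput: {user_input}"
    )
    # Step 2: very specific structured prompt so a small model stays
    # on-format; the output carries both speech and state metadata.
    raw = conversation_llm(
        f"Context: {summary}\n"
        'Respond as JSON with keys "speech" and "state_changes".'
    )
    return json.loads(raw)

result = run_turn("What's happening tonight?", {"calendar": ["7pm booking"]})
print(result["speech"])
```

    The web interface and speech layer would just sit in front of `run_turn`.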

    When it comes to neural networks and LLMs, I have a good understanding of their theory and a great one of how brains work, but I’m mostly looking to use these systems as a black box initially. My initial goals are to generate dialogue trees for games and maybe practice my Spanish with a chatbot - accuracy and capabilities don’t matter too much; I’ve played with projects that could do this by just sending prompts to an endpoint

    Down the road, the goal is to have something extremely modular. This tech is moving fast and I envision linking a bunch of modules together to perform different tasks, and as better modules come out or I add/upgrade hardware, I want to be able to write something to act like autopilot in my IDE or pilot a model in a game engine
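    To keep it swappable, I’m picturing every stage (summarizer, chat model, TTS, game-engine pilot, whatever) behind one tiny common interface. A purely illustrative sketch, with toy functions standing in for real models:

```python
# Illustrative module interface: every stage takes a dict in and
# returns a dict out, so stages can be rewired or upgraded without
# touching the rest of the pipeline.
from typing import Callable, Dict, List

Module = Callable[[Dict], Dict]

def chain(modules: List[Module]) -> Module:
    # Compose modules left to right; each stage's output feeds the next.
    def run(payload: Dict) -> Dict:
        for m in modules:
            payload = m(payload)
        return payload
    return run

# Two toy stages standing in for real models.
def summarize(p: Dict) -> Dict:
    return {**p, "summary": p["text"][:19]}  # crude truncation "summary"

def respond(p: Dict) -> Dict:
    return {**p, "reply": f"Re: {p['summary']}"}

pipeline = chain([summarize, respond])
out = pipeline({"text": "practice my Spanish with a chatbot"})
print(out["reply"])  # Re: practice my Spanish
```

    Upgrading a module would then just mean swapping one function out of the list.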

    The main objective is to learn and to run agents on my own hardware. I’m looking for a side project that will be useful enough to keep up my interest, but also give me a starting point to modify from, so I’m not sitting at a Python terminal forcing myself through a TensorFlow course before I get to the good stuff

    Any thoughts, advice, or projects you think I should know about when starting this journey?