APU board? They are going EOL soon, but these devices are built like tanks. Full Linux x86_64 support, coreboot BIOS. https://www.pcengines.ch/apu.htm A few sellers in the EU still have them.
stay a while and dwell in the fediverse or are you afraid you might enjoy it?
In case you get stuck again and need more games:
Notable mentions: World of Goo, Human Resource Machine
I use a mixture of systemd-nspawn and different user logins. This is sufficient for experimentation; for actual use I try to package (makepkg) those tools so my package manager keeps them organized.
Also, LVM thin pools with snapshots are a great tool. You can mount a dedicated LV to each user's home to keep everything separated.
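The per-user thin-LV setup can be scripted. This is only a sketch: the volume group name ("vg0"), pool name ("thinpool"), and size are made-up placeholders for your own setup, and it prints the commands instead of running them unless you flip dry_run.

```python
# Sketch: provision a thin LV per user home on an existing LVM thinpool
# and mount it under /home/<user>. "vg0", "thinpool", and the size are
# hypothetical placeholders. Dry-run by default; the real run needs root.
import subprocess

def thin_home_commands(user, size="20G", vg="vg0", pool="thinpool"):
    """Return the shell commands that would set up the user's home LV."""
    lv = f"home_{user}"
    return [
        ["lvcreate", "--thin", f"{vg}/{pool}", "--virtualsize", size, "--name", lv],
        ["mkfs.ext4", f"/dev/{vg}/{lv}"],
        ["mount", f"/dev/{vg}/{lv}", f"/home/{user}"],
    ]

def provision(user, dry_run=True, **kwargs):
    for cmd in thin_home_commands(user, **kwargs):
        if dry_run:
            print(" ".join(cmd))       # just show what would happen
        else:
            subprocess.run(cmd, check=True)

provision("alice")  # prints the three commands without touching the system
```

From there, `lvcreate --snapshot` on the same pool gives you cheap per-home snapshots before risky experiments.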
Reminds me of the beginning from the novel “The Swarm” by Frank Schätzing…
Just install ComfyUI and start it with the --cpu flag. Make sure you have enough system RAM and a swap partition (preferably on an NVMe/SSD).
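To see whether a machine has enough headroom for a CPU-only run, you can read RAM and swap from /proc/meminfo. A small sketch (the sample text below is made up; on a real Linux box feed it the actual file):

```python
# Quick sanity check before a CPU-only ComfyUI run: parse
# /proc/meminfo-style text and report RAM and swap in GiB.
def mem_gib(meminfo_text):
    """Return (ram_gib, swap_gib) parsed from /proc/meminfo-style text."""
    vals = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "SwapTotal"):
            vals[key] = int(rest.split()[0]) / (1024 * 1024)  # kB -> GiB
    return vals.get("MemTotal", 0.0), vals.get("SwapTotal", 0.0)

sample = "MemTotal:       16384000 kB\nSwapTotal:      33554432 kB\n"
ram, swap = mem_gib(sample)
print(f"RAM {ram:.1f} GiB, swap {swap:.1f} GiB")
# On a real system: ram, swap = mem_gib(open("/proc/meminfo").read())
```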
SSDs are not really good for long-lasting backups. They hold data as electric charge; if you unplug an SSD and store it, it might lose its data after just a couple of years. HDD “spinning rust” still has its merits for long-term data storage: it holds its magnetic data far longer without fresh power.
While it is an environmental issue, the plastic wrapping has a practical purpose: it protects food from roaches. In many Japanese cities you cannot leave food open without attracting gokiburi within a few hours. This is also why the Japanese keep everything as clean as possible. Even in the shadiest places there is someone with a vacuum and a sticky-tape floor roller(!) making sure not even the smallest crumb stays on the floor too long. Eating on the move in the streets is frowned upon, because fallen crumbs attract roaches. Public trash cans are rare, because - you guessed it - roaches. You are expected to carry any trash back home and put it in a sealed bag in your trash bin. Typical Japanese houses and flats do not offer much space for storing large food containers, so you buy your food in small portions.
Of course a more environmentally friendly wrapping would be better, but it has to withstand a roach nibbling on it, which is not the case for various organic-based polymers.
After many years of running Arch Linux on multiple devices and even servers, it has never broken because of an update (spoiler: I use systemd-boot instead of GRUB). If a system is meant for a less experienced user, just install the linux-lts kernel instead.
Unstable does not mean it crashes or breaks often; it just means there is no guarantee against bumping to the newest upstream version, and no backports. This can be a problem with unmaintained software that does not like running on a recent Python/PHP.
This is also great because if you find a bug in a piece of software you can report it upstream directly. Debian maintainers only backport fixes for severe bugs, not every one of them. It can take over a year for new features to arrive - especially painful with applications like GIMP, Krita, Blender, etc. You can use Debian unstable of course, which is close to upstream as well.
locally, always.
I even got it to run without a GPU on an old i5 CPU with 8 GB system RAM (not VRAM) paired with 32 GB swap. SD 1.5 takes 4-10 minutes per image, SDXL about 2 hours. But it works. With a GPU it's between 7 and 90 seconds per image, depending on model and settings.
I first tried it a few days ago and I'm still a bit lost. Inpainting, the major part of my workflow, does not feel as swift as in automatic1111, and I'm still searching for only-masked-area inpainting in ComfyUI.
But I can confirm it is much faster and uses less VRAM. And I somehow love the ability to save the entire workflow as a JSON file. What I miss most is my prompt-autocomplete plugin.
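Since the saved workflow is plain JSON, it is easy to inspect or batch-edit with a script. A sketch, assuming ComfyUI's API-format export (a dict of node-id to {"class_type", "inputs"}); the tiny workflow below is made up for illustration, not a real export:

```python
# Inspect a ComfyUI-style workflow JSON and tweak a node parameter.
# The workflow content here is a fabricated example.
import json

workflow_json = """
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd15.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a lighthouse at dusk", "clip": ["1", 1]}},
  "3": {"class_type": "KSampler",
        "inputs": {"seed": 42, "steps": 20, "model": ["1", 0]}}
}
"""

workflow = json.loads(workflow_json)
for node_id, node in sorted(workflow.items()):
    print(node_id, node["class_type"])      # list the node graph

# Batch-edit example: change the sampler seed without opening the UI.
workflow["3"]["inputs"]["seed"] = 1234
print(json.dumps(workflow["3"], indent=2))
```

Handy for sweeping seeds or swapping checkpoints across many saved workflows.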
Do not expect to offer this service at a price competitive with cloud offerings. Maintaining a company's IT system is a big challenge and requires more work the more users there are.
For a company this size: make a clear contract. Consider how much time you need for setup/installation, plus monthly hours for maintenance, monitoring, and at least daily(!) backups. Let them choose whether they want failover, and charge for the required hours and material. Also put in the contract when they can expect support from you, including a clause for a substitute admin during your holidays (if needed). Then put a price tag on support hours for holding people's hands when they “can't find that file they uploaded a week ago and it is surely a server issue”, and a price tag on engineering hours for any modifications they might want, like installing whatever plugins they deem useful. Hardware prices, traffic, rack space and power should be included as well. Have a good plan for updates, choose your distro wisely, and do not rely on autoupdates.
Play all of this through in your head, add up the hours, choose a fair rate, and then you have your price tag.
Cloud will always be cheaper, because they have their infrastructure already deployed. Building from the ground up is more expensive, but I think it is worth it. Will they?
Yes, I tested it, and although it works in its current state, it takes 2-3 hours per picture on a Pi and 20 minutes per picture on my desktop CPU.
But… isn't unsupervised backfeeding the same as simply overtraining on the same dataset? We already know overtraining produces broken models.
Besides, the next AI models will be fed with interactions between humans and AI, not just the AI's own content. ChatGPT already works like this: it learns from every interaction, every chat.
And the generative image models will be fed with AI-assisted images where humans have fixed flaws like anatomy (the famous hands) or other glitches.
So as interesting as this is, as long as humans interact with AI, the hybrid output used for training will contain enough new “input” to keep the models on track. There are already refined image generators trained on their own human-assisted output that are better than their predecessors.
Google Talk once federated with XMPP/Jabber - good times, until their user base was big enough to defederate again, crippling the Jabber network. It will happen again if we let it.
Meta's plan is to draw users into their network and use the fediverse as an initial catalyst (“look! so much content already there!”). Once their user base is large enough, they will defederate again, claiming protocol difficulties or something equally vague - but really they will just want to start rolling out advertising, which would not be displayed to users from other instances. Most users will not keep two accounts; they will just stay with the big corp and leave the original fediverse again.
Yes, that should work. Check out stable-diffusion-webui (automatic1111) and text-generation-webui (oobabooga). Grab the models from Civitai (Stable Diffusion) and Hugging Face (LLMs like LLaMA, Vicuna, GPT-J, Wizard, etc.).
Well, NASA trusts Linux enough to send it to Mars. They build rockets, so it should be good enough for flying buses. Unless you don't trust your software engineers - but then having them build a custom microkernel OS instead does not sound much better.
Check out Stable Diffusion and the LLaMA model family. You can run those offline on your local hardware and won't have to worry about sharing private details with some cloud service that openly says it will look at your conversations and data and use them for training.
Short tinnitus that lasts just a few minutes is relatively common; the most common causes are stress and circulation issues. There seems to be no separate name for the short form to differentiate it from the permanent ringing.
I found that when my ear starts ringing due to stress or just spacing out while overthinking, briefly hyperventilating (to increase blood oxygen) and massaging my ear canal from the outside (to increase circulation) helps get rid of it more quickly. Maybe this helps somebody someday.
still no word on how to convert/train other finetuned models into their format :(