• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: October 31st, 2024

  • As nice as this would be, it’s not very likely… Licenses are usually limp suggestions from the perspective of companies with billions of dollars. AI companies already train on millions of copyrighted works, both literature and art, without any express permission from the authors or artists, and with essentially no recourse or compensation for them. You could append a ‘no AI training’ clause to an existing license like the MIT license, but the main effect would be brief personal satisfaction; it won’t change what the AI companies do. It’s genuinely more useful to keep code proprietary if you want to prevent it from being used to train AI models.

  • Yeah, I see what you mean. That makes sense. After reading through some common OSS licenses, I can see the difference between licenses that require you not to modify the license notice, versus ones that explicitly forbid certain changes. But given how little funding OSS projects get, I’m not bothered by the idea that they want to make sure people financially contribute to the original creators. After all, if someone does fork it and does a better job, they can easily put their own donate button above the original one.

  • Mobile games are designed like junk food: take it out, eat some junk, then put it away and go do something else; throw away the bag or seal it for a quick snack later. Normal games are designed like a full meal: sit down somewhere with good atmosphere, eat something nutritious, have good conversation, get full, and go home with plenty of leftovers and good memories.

  • TinyLLM on a separate computer with 64GB of RAM and a 12-thread AMD Ryzen 5 5500GT, using the rocket-3b.Q5_K_M.gguf model, runs very quickly. Most of the RAM is taken up by other programs I run on that machine; the LLM doesn’t need the lion’s share. I used to self-host on just my laptop (a 5+ year old ThinkPad with upgraded RAM) and it ran OK with a few models, but after a few months I saved up to build a rig just for this kind of thing to improve performance. It’s all CPU, no GPU, even though a GPU would be faster, since I was curious whether CPU-only would be usable, and it is. I also use the Llama-2 7b and 13b models; the 7b model ran slowly on my laptop but runs at a decent speed on the bigger rig. The fewer billions of parameters, the goofier the models get. Rocket-3b is great for quickly getting an idea of things, but not great for copy-paste-ready answers. Llama 7b or 13b is a little better at handing you almost-exactly-correct answers. I think those models are meant for programming, but sometimes I ask them general life questions or vent to them, and they receive it well and offer OK advice. I hope this info is helpful :) There’s a rough sketch of what the CPU-only setup looks like below.
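
    In case it helps anyone trying the same thing, here’s a minimal sketch of CPU-only inference on a GGUF model using llama-cpp-python. This isn’t necessarily what TinyLLM does under the hood, just one common way to load these files; the model path, thread count, and prompt are placeholders for my setup.

    ```python
    # Minimal CPU-only sketch with llama-cpp-python (pip install llama-cpp-python).
    # The model path and thread count are placeholders; adjust for your machine.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/rocket-3b.Q5_K_M.gguf",  # any GGUF file works here
        n_ctx=2048,      # context window
        n_threads=12,    # roughly match your CPU thread count
        n_gpu_layers=0,  # 0 = pure CPU inference
    )

    out = llm(
        "Q: How do I list files recursively in bash?\nA:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"].strip())
    ```

    The 3b model stays responsive even on older hardware; the 7b and 13b models use the same call, just slower per token on CPU.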