And keep in mind there weren’t just large beasts, but small ones as well. People tend to focus on the big ones, because they are the most impressive, but they came in the full range of sizes. Just imagine how alien our planet looked compared to what we are used to now.
Thorry@feddit.org to Programmer Humor@programming.dev • Don't pay for AI, frame your questions like you want Maccas.
4 · 11 days ago
I think McDonald’s UK still has a support bot. But it’s like one of those pre-LLM bots that does very basic stuff. Basically a glorified search function for the website.
While it’s true that the big players don’t use consumer grade hardware, there is an actual shortage of consumer grade stuff. This is pushing up the prices of whole computers, laptops and phones, as well as components like GPUs and RAM sticks.
The reason is there are only a few companies making high end memory, storage and processing chips like CPUs and GPUs. Over the past 2 years these companies have shifted their manufacturing capacity to just the high margin enterprise stuff, at the cost of producing the lower end, low margin consumer stuff. For a while we were coasting on existing stock, but once that began to run out, prices started going up real fast.
Big players like Dell, Lenovo and HP for example bought up a whole lot of hardware so they could have a 2026 lineup of products. This further pushed up prices for consumer hardware, especially memory. And even these companies are struggling to get enough supplies for a 2027 range. So expect prices to go up further over the next couple of months. Although in the consumer space we are seeing prices stabilize, because demand has fallen off hard; not many people can afford, or are willing to pay, the current asking prices.
I totally agree it’s fucking bullshit, and all of the AI speculation hitting the rest of the market this hard is a real issue. But the shortages are real and not just market speculation.
I use Arch BTW.
Like just huge arches instead of windows or even doors, Arch is all you need.
With stories like this I always wonder how much guidance the bot was given. They mention a whole bunch of tokens and someone actively guiding the thing “to get it unstuck from dead ends”. What did the bot actually do? Did it do anything? Or was it guided to write an already known script for an already known exploit?
A lot of times it’s presented as telling the bot: “Hack this software” and it just does the thing. This one is at least a bit more honest, saying it also took a lot of hours and prompting. I’m not sure the bot actually did anything; it might just be a really, really inefficient way of typing.
Very good! Your spidey senses are working perfectly. “Hey, I want to comment this calculation. Why don’t I move it into a function so the name can explain what it does?” Good call!
Sometimes the algorithm is inlined for performance, sometimes it’s a class with a bunch of functions that as a whole is primarily based on an algorithm, so comments might make sense in those cases. Most of the time it’s a library, so the name of the library kinda gives it away, and hopefully it has good documentation as well.
Asking an LLM to add comments is actually pretty much the worst thing you can do. Comments aren’t meant to be documentation, and LLMs have a habit of writing documentation in the comments. Documentation is supposed to be in the documentation, not in the code. LLMs are also often trained on things like tutorials, where super obvious statements are commented so people can learn and follow along. In actual code you absolutely do not do this; obvious statements should be obvious by themselves. At best it’s extra work to read and maintain the comments on obvious statements, at worst they are incorrect and misleading. I’ve worked on systems where the comments and the code weren’t in line with each other, and it was a continual guess whether the comment described the intended behavior, or the code was correct and the comment wrong.
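The tutorial-style noise looks something like this (a made-up fragment, but you’ve probably seen the real thing):

```cpp
// Create a counter and set it to zero.
int counter = 0;
// Increment the counter by one.
counter++;
```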
So when do you actually add comments? That’s actually very hard, something people argue about all the time, and a bit of an art form to get right. For example, if I have some sort of complex calculation that’s based on a well known algorithm, I might comment the name of that algorithm. That way I can recognize it myself right away, and someone who doesn’t know it can look it up. Another good candidate for comments is magic numbers. It’s often smart to put these in constants so you can at least name them, but a small comment indicating why the number is there, and its source, can be nice. Or when a calculation has a +1 in there somewhere, one might ask why the +1; a little comment explaining it is nice.
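A minimal sketch of what those kinds of comments can look like (the protocol, the constant and the checksum choice are all hypothetical, purely to show the shape):

```cpp
#include <cstddef>
#include <cstdint>

// Receiver drops frames shorter than this (hypothetical protocol limit).
constexpr std::size_t kMinFrameSize = 64;

// Fletcher-16 checksum: naming the algorithm lets anyone look it up.
uint16_t Checksum(const uint8_t* data, std::size_t len) {
    uint16_t sum1 = 0, sum2 = 0;
    for (std::size_t i = 0; i < len; ++i) {
        sum1 = (sum1 + data[i]) % 255;
        sum2 = (sum2 + sum1) % 255;
    }
    return (sum2 << 8) | sum1;
}

std::size_t FrameLength(std::size_t payload) {
    // +1 for the trailing terminator byte the receiver strips off again.
    std::size_t len = payload + 1;
    return len < kMinFrameSize ? kMinFrameSize : len;
}
```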
Comments should also serve as a kind of spidey sense for developers. Whenever you are writing comments, or have the urge to add some somewhere, it might be an indicator the code is messy and needs to be refactored. Comments should be short and to the point; whenever you start writing whole sentences, either start writing documentation or look at why the code requires so much explanation and how to fix that.
Another good use for comments is to warn off future devs’ instincts. For example, in a system I worked on there is a large amount of code that looks duplicated. A new dev might look at it and see a good place to start refactoring and remove the duplicated code. However the duplication was intentional for performance reasons, so a little comment saying the dupe is intentional is a good idea.
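Something as small as this does the job (the function names are made up for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// NOTE: Intentional near-duplicate of decode_fast() below. Merging the
// two hurt hot-path performance when we tried it, so please measure
// before "deduplicating" this.
void decode_safe(const uint8_t* buf, std::size_t len);
void decode_fast(const uint8_t* buf, std::size_t len);
```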
I’ve also seen comments used to describe function signatures, although most modern languages have official ways of doing that these days. These also might border on documentation, so I’d be careful with that.
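In C++ land that usually means Doxygen-style comments, which tooling can extract into proper documentation (hypothetical function, real syntax):

```cpp
#include <cstddef>

/// Computes the total frame length for a payload.
/// @param payload  Number of payload bytes, excluding the terminator.
/// @return         Total length on the wire, including minimum padding.
std::size_t FrameLength(std::size_t payload);
```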
LLMs also have a habit of writing down responses to prompts in the comments. For example the LLM might have written some code, and you say: “Hey, that’s wrong, we shouldn’t set x to y, we should set it to z.” And the LLM writes a comment like // x now set to z as requested. These kinds of comments make no sense to people reading the code in the future.
Keep in mind comments are there to make it easier for the next guy to work on the code, and often that next guy is you. So getting it right is important and hard, but very much worthwhile. What I like to do is write code one day and then go back and read it the next day or a few days later. And not the commit with the diff and the description, but the actual files, beginning to end. When something seems weird or stands out, I’ll go back and edit the code and perhaps add comments.
IMHO LLMs are terrible at writing code. It’s often full of mistakes and oversights, but one of the worst parts is the comments. I can tell code was AI generated right away by the comments, and their presence is a good indicator the “dev” didn’t bother to actually read and correct the code.
Thorry@feddit.org to Science Memes@mander.xyz • today's massive sunspot looks like a dancing gorilla cmv
20 · 23 days ago
To estimate when to blast out the CME to wipe us out as revenge
The thing about a games store like Steam or Epic isn’t the software itself. Steam has shown having a good client is a large part of it, but it isn’t the most important part. The most important part is the negotiations with the publishers and game developers to have them publish the games on that store. There is a whole lot of legal and pricing stuff involved. Another important part is a large CDN around the world to deliver the data to customers at speed.
Many large companies have tried pouring millions into this and haven’t had a lot of success. There is so much involved and large market forces to contend with.
As for just having something to manage the games on your system, there is Lutris. It allows you to easily manage different game libraries and individual games, plus tools like emulators to run older games. It’s fully open source and an initiative well worth sponsoring.
Thorry@feddit.org to KDE@lemmy.kde.social • This October... KDE is turning 30 🎂! Join us for six months of celebrations, fun and activities🎉!
6 · 25 days ago
I started out on Linux with SuSE 5, and when version 6 rolled around I bought it. For several of the 5 and 6 releases I had the physical box set with a thick book and a bunch of CDs. Plus of course the boot floppy, because booting from CDs was only a thing in dreams. I didn’t have much internet at the time, so physical media ruled supreme.
When version 6 was released it included KDE 1.0 and I was very much interested. Unfortunately hardware support was lacking and software was incomplete at best. So I spent months trying to get X Windows running at all and then running without crashing. When I finally got that black and white checkerboard pattern with the familiar X mouse cursor some tears were shed. I could move my serial grey ball mouse and the cursor would move, and it didn’t even crash.
I started with some small utilities and the basics were working. But without a desktop environment like KDE it’s not much fun. Unfortunately that also didn’t work. But I knew C, C++, multiple forms of assembly and a bunch of other programming languages. My sneakernet copy of Ralf Brown’s Interrupt List was my holy bible, and I hacked all sorts of stuff together on my machine. The SuSE distro came with everything needed to develop software, and the sources were included on the CDs. So I started hacking and sunk so much time into it.
We didn’t have internet beyond expensive and slow dial-up. But I did have a treasure trove of books, a lot from my grandpa, some from my dad, and my own collection was coming along nicely as well. With a lot of hacks and persistence I got KDE somewhat working. Not long after that I switched over to Debian and used Gnome on that. I got into Qt software development for a while. These days I use Arch btw and use KDE, absolutely love it!
I often think back fondly of all of those nights and weekends I spent on my little machine. Getting stuff running just for the fun of it. These days the world moves so fast (or I’m just old). I wouldn’t mind slowing everything down and spending some years getting KDE 1.0 to work on my old machine.
The interesting thing about this is that it could be a double whammy. The collision that formed the Moon not only blasted material off Earth, it also ejected a lot of it clean out of orbit. That left Earth even smaller than it would have been had the two bodies simply merged. And the Moon formed in the process. The Moon causes the tides, which are theorized to have had a significant beneficial effect on the evolution of more complex forms of life.
So just being small might not be enough, and having a big moon might also not be enough, but Earth was lucky enough to have both. And those are just two entries on the long list of things that have to go right to get complex life on a planet.
My feeling is that life is pretty rare, but given there are so many star systems in our galaxy there might still be a lot of it. Most of it is probably very simple stuff though. Getting to where Earth is might be a once every couple of million years event within our entire galaxy. So there really might be nothing intelligent out there at this moment in time; there might have been earlier and there might be in the future, but for right now we are it.
Thorry@feddit.org to Technology@beehaw.org • The blue light from your phone isn't ruining your sleep
4 · 29 days ago
Bruh I feel like shit when I get up at 7 AM after I’ve been doom scrolling till 4 in the morning, it must be the blue light man, must be
Thorry@feddit.org to Science Memes@mander.xyz • Houston, we have a Microslop Outlook problem
23 · 29 days ago
On the other hand, you’re doing the first ever Moon landing, trying to manually find a good landing spot, running out of fuel and trying not to die. And all of a sudden your navigation computer starts throwing 1202 errors. That has to be one of the most butt-clenching moments of all time.
Computer issues are basically tradition at this point.
Thorry@feddit.org to Hacker News@lemmy.bestiver.se • Sam Altman's Coworkers Say He Can Barely Code and Misunderstands Basic Concepts
19 · 29 days ago
Because these tech bros like to put forward a fairy tale where they themselves are geniuses who basically carry the company on their shoulders. They maintain this carefully crafted image of being super duper smart, where all of the setbacks are just humanity not having caught up with their vision yet. They use this image to dupe investors into pouring huge amounts of money into ideas that are in principle good, but in reality very much not feasible. A lot of the money is invested purely on the story and image of one person, and they build each funding round on the previous one, snowballing into something huge.
In reality these people are scam artists. They know how to project their image, how to sweet talk investors, and how to lie and commit fraud on an ongoing basis. Sometimes they are caught, like Elizabeth Holmes was; most of the time they are not.
People who actually know their stuff can clearly see that tech bros like Altman and Musk are full of shit and talking out of their arse. But the general public doesn’t know that and believes the lies, and more importantly the investors believe the lies as well.
Now you’re asking the right questions. That’s not just simple editing, that’s AI editing. I’m going to give you the honest brutal truth, AI editing is everywhere.
Thorry@feddit.org to Spaceflight@sh.itjust.works • Artemis II will use laser beams to live-stream 4K moon footage at 260 Mbps — one giant step beyond the S-band radio comms of the Apollo era
5 · 1 month ago
For a deep dive into the old Apollo communications, check out the YouTube channel CuriousMarc. He and his team have been working on restoring (and playing around with) old Apollo hardware. They go through a lot of the features and functionality, as well as teardowns, repairs and testing of all of it.
Can we PLEASE stop calling everything …gate? There wasn’t even a gate involved in the first place. This naming convention fucking sucks.
Suuuuure, sloperators are going to be big.
Maybe let the tech speak for itself instead of forcing all this slop down our throats. The harder people shout “THIS IS THE NEXT BIG THING!” the less I believe it. All of these comparisons are to old tech that became big, but they ignore the thousands of ideas that never became big. Just because some tech got big does not mean any given tech will get big, and the odds are very much against any single technology making it.
And these comparisons to the dotcom bubble: sure, it was all a bubble, but the tech was valid! That’s true, but we didn’t know it at the time. And the bubble popping still sucked: the economy was bad, people lost their jobs, regular folks suffered because VCs went ham on some new tech. It also turns cause and effect on its head. It wasn’t like dotcom tech needed the bubble to become successful, or that tech can’t become successful without a bubble. Just because AI is now in a big bubble doesn’t mean anything useful will come out the other end.
Personally I think the future of AI is more in small, dedicated expert systems that handle specific roles and do them well, not these know-all, do-all, everything-chat LLM systems being pushed now. And I believe we will have specific chips that run these systems efficiently and locally, not the subscription-based future the tech industry is pushing now. I also believe there is a hard ceiling to this tech, where it can be pushed so far and no further, and I feel we are close to that ceiling now. Going beyond requires exponentially more time and effort to set up and run, and in the end there just isn’t a use-case where it’s cost effective to do so.
But hey, I’m just an idiot on the internet, what do I know.