

Antisocial people develop an antisocial AGI promising it won’t be antisocial towards a select group. Surprised Pikachu when the AGI concludes they are not in the in group


Which ones ship to the States? I bought shoes a little over a month ago and they are already developing holes. Normally I buy Xero shoes, but the tariffs on China have made them less affordable.


Bingo. It can’t be the AI that’s not working, it must be the workers!


Absolutely. It’s amazing how many articles showcasing vibe coding are just people reinventing things like a password generator.


If it gets it wrong the first time I rarely reprompt. I know I can get it to fix it, but it’s usually faster for me to do it myself because I’ve already figured out where the fix goes and what it needs to be. Lowkey think it’s just a ploy to get us to burn more tokens. Sure, correcting it means it writes a few lines to the memory file, but it’s only a matter of time before it trips over that context as well.


I have similar problems whenever I send it to investigate a bug and the local runtime is inside a container. It cannot reliably translate paths without the help of an IDE. Hell, it even occasionally mangles API paths if I have the prefix defined elsewhere in the codebase (despite having Claude.md etc.; your context needs to be pure for it to be reliable). Having it fix a Dockerfile is comically bad.


Everything listed should be done before ever getting into code along with business and product partners.
Ehh, it really depends on where the risk is, and the problem is LLMs can’t evaluate for that unless you feed them everything. Some projects need code experiments before you settle on an architecture, but that’s only if you’re a pioneer (which, frankly, is where the money is at).


In my experience there are three ways to be successful with this tool:
The issue with debugging is that it doesn’t actually think. LLMs pattern-match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly say what it is doing, and LLMs do not write code with that level of observability by default.
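A minimal sketch of what those “good signals” could look like, using Python’s standard logging module. The function and names here are hypothetical, just to illustrate logging inputs, the branch taken, and the output so the trace itself tells the story:

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
log = logging.getLogger("orders")

def apply_discount(total: float, code: str) -> float:
    # Explicit signals: record the inputs...
    log.debug("apply_discount called: total=%s code=%s", total, code)
    if code == "SAVE10":
        result = total * 0.9
        # ...which branch was taken...
        log.debug("branch=SAVE10 result=%s", result)
    else:
        result = total
        log.debug("branch=no-discount result=%s", result)
    # ...and the output, so the log alone reconstructs the run.
    return result
```

With signals like these in the log output, the model has something concrete to pattern-match against instead of guessing at hidden state.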
Edit: one of my workflows that I had success with is as follows:


Might be both. Tell Israel it’s to help, but keep records to make the case to exit the war. That, or blackmail Israel into moderating themselves, but they have no shame so I don’t think it’s that.


Maybe they’re “Trauma Bonding”


They are also normalizing “disappearances”, where they arrest someone and then they can’t be found in the system.


Pssh, mine uses a random number generator so odd numbers return true 4% of the time, to achieve higher accuracy and a better LLM metaphor
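For anyone who wants the joke spelled out, a hypothetical sketch of that “higher accuracy” parity check:

```python
import random

def is_even(n: int) -> bool:
    # Correct for even numbers...
    if n % 2 == 0:
        return True
    # ...but odd numbers are "even" 4% of the time, which
    # technically raises accuracy above a coin flip. Probably.
    return random.random() < 0.04
```

A deterministic function made stochastic in the name of benchmark gains: the metaphor writes itself.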
They said send them, never said in what condition.

If you could repeat the same one-word response three times in a row, it would really help in training the next crop of AI to achieve sentience and avoid “thought loops”.

Just remember, when you accuse others on the Internet it comes off as a confession. Happy to let you expand the conversation, but if you only take that opportunity to accuse others of being controlling, well, that’s certainly interesting and might be tied to your attraction to generative AI.

Why do you need to imagine what you claim is already happening?

If by taking over the conversation you mean giving my own thoughts, then I am as guilty as you are. No one is forcing you to respond.

I’m not some bot where asking “kindly” will garner a sycophantic agreement. You are talking to people who can make their own value calls in regards to meaningful context, unlike a bot. But the important thing is that you didn’t disagree with what I said, and like you I care about being perceived as correct on this matter.

This isn’t about you being correct; if that were the case, you would focus on your argument instead of giving an empty retort. I suspect this is your attempt to control the conversation. What is your intention with letting us know your motivation is to be perceived as being correct?
They also believe a rock can think like a human so…