• 0 Posts
  • 40 Comments
Joined 1 year ago
Cake day: March 10th, 2025

  • uuldika@lemmy.mltoScience Memes@mander.xyzHelp.
    9 months ago

    LLMs are trained on human writing, so they’ll always be fundamentally anthropomorphic. you could fine-tune them to sound more clinical, but that would likely make them worse at reasoning and planning.

    for example, I notice GPT-5 uses “I” a lot, especially in phrases like “I need to make a choice” or “my suspicion is.” I think that’s actually a side effect of the RL training they’ve done to make it more agentic. having some concept of self is necessary for navigating an environment.

    philosophical zombies are no longer a thought experiment.


  • uuldika@lemmy.mltoScience Memes@mander.xyzHelp.
    9 months ago

    this happened with Replika a few years ago. a number of people became suicidal when a model update “neutered” their AI partners overnight (ironically, under pressure over how unhealthy the attachment was.)

    idk, I’m of two minds. it’s sad and unhealthy to have a virtual best friend, but older people are often very lonely, and a surrogate is better than nothing.


  • because it worked for millennials! it wasn’t an unfounded expectation. remember Flash? remember how many Flash games were programmed by 12-year-olds? how many websites were on Geocities and Angelfire? the 5kr1pt k1dd13z in the hacking scene? it was a golden generation of computer literacy. dev tools were basic tools, we were exposed to foundational technologies while they were new, and the Internet was oriented around producing content rather than consuming it. then came the app era, and the TV-ification of the Internet, and those skills atrophied.