• 0 Posts
  • 42 Comments
Joined 3 years ago
Cake day: June 17, 2023



  • So, the following is a genuine question and not a snide remark.

Does that matter? Is the military going to respect that? I’d heard prior to this that the military had forbidden parliament from gathering. What’s to say they don’t just side with Yoon? It certainly wouldn’t be the first time in history that a nation’s military has dictated the course of the nation’s civil future. I really hate asking questions like this, but I’m just not familiar enough with the politics of South Korea to know if this is a done and dusted thing or if the military is likely to go for a coup if Yoon pitches it.


  • I can’t remember when I came to the realization, but for years now I thought that if (and I would love to hold on to the naive hope that it is an “if”) WW3 breaks out then the battle lines would be drawn between the forces of autocracy and democracy. Those would be our sides.

    Now, I’m not even sure democracy is gonna make it out the gate… America’s elected a dictator who’s aligned with Russia who is itself a major factor of this unholy autocratic alliance with China, North Korea, and Iran… Now this?

There were no “good guys” in World War 1. It was the result of squabbling European powers not realizing the destructive potential of modern military technology and how much that changed the game. It needed to happen in the sense that countries couldn’t continue to act the way they had prior to the Great War, but that doesn’t mean anyone was in the right.

It’s hard to imagine “good guys” in World War 3 either. Increasingly, it kinda just seems like a choice of “what shit flavor of authoritarianism do you hate less?” Assuming that question even matters, considering all the nuclear weapons that could fly in a third world war.

    I dunno man, shit’s just looking pretty fucking bleak.





  • Is it the tech? Or is it media literacy?

    I’ve messed around with AI on a lark, but would never dream of using it on anything important. I feel like it’s pretty common knowledge that AI will just make shit up if it wants to, so even when I’m just playing around with it I take everything it says with a heavy grain of salt.

I think ease of use is definitely a component of it, but in reading your message I can’t help but wonder if the problem instead lies in critical engagement. Can they read something and actively discern whether the source is to be trusted? Or are they simply reading what is put in front of them, then turning around to you and saying, “Well, this is what the magic box says. I don’t know what to tell you.”