  • monotremata@lemmy.ca to memes@lemmy.world · “And that would early” · 2 points · 21 days ago
    As a math nerd, this bothers me way more than it should. The reason we say “hundred” when we read a base-ten number that ends in two zeros is that it’s the place value of the final non-zero digit: the number is literally one hundred times the part you’ve already read aloud. But in the military-time version, a) the hours are not hundreds of minutes, they’re groups of sixty minutes, and b) the last two digits count minutes, not hours, so the units get mangled too. If someone tells you it’s currently 0 hours and you should meet again at 800 hours, the arithmetic says they’re sending you away for more than a month (see the sketch below), but in fact they mean 8 hours, even though the difference is apparently 800 hours.
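    A quick sketch of that arithmetic (plain Python; the only inputs are the numbers already in this comment):

    ```python
    # Throwaway arithmetic for the claim above; 800, 8, and 24 come from the comment.
    HOURS_PER_DAY = 24

    literal_hours = 800                      # reading "800 hours" at face value
    print(literal_hours / HOURS_PER_DAY)     # 33.33... days: more than a month

    military = "0800"                        # what "oh eight hundred" actually encodes
    hours, minutes = int(military[:2]), int(military[2:])
    print(hours, minutes)                    # 8 hours, 0 minutes past midnight
    ```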

    I’m aware how pedantic this is, and I’m perfectly capable of understanding what they mean because I’ve heard it so often in movies and whatnot. But I swear these stupid games with units contribute to keeping us dumb.


  • Mitchell and Webb have a bit about this. Mitchell’s character gets annoyed that Webb’s character keeps talking about how “we beat you in the playoffs.” He eventually asks, “Hey, do you remember that time WE defeated the Nazis and recovered the Ark of the Covenant? That’s right, you see: I enjoyed watching the film Raiders of the Lost Ark, and so now I have decided that I was in it and deserve credit for participation in the events of the story.”

    (Not an exact quote; I’m paraphrasing from memory.)


  • “I’ve got” seems particularly strange to me because without the contraction Americans would still just say “I have.” (There are some circumstances where they’ll say “I have got” without a contraction, but it’s mainly when they’re drawing a contrast with what they “haven’t got.” E.g., “No, I don’t have a baseball… oh, but I have got a lacrosse ball, will that work?”)

    I think the rule is probably closer to “you don’t contract a stressed verb,” but that’s not terribly useful since there are so few rules about stress patterns. Verbs at the end of sentences are typically stressed, though, so you’re right that ending with that kind of contraction is going to sound wrong to most people.


  • I think it might be more common in British English? Like “I’ve a fiver says he muffs the kick.” Or “I’ve half a mind to go down there myself.” (Curiously, in American English this latter would probably still have the contraction but add “got”: “I’ve got half a mind to…” English is such a mess.)


  • “Even AI can tell when something is really wrong, and imitate empathy. It will ‘try’ to do the right thing, once it reasons that something is right.”

    This is not accurate. AI will imitate empathy when it calculates that imitating empathy is the best way to advance its reward function; i.e., when appearing empathetic is useful. Like a sociopath, basically. Or maybe a drug addict. See, for example, the tests Anthropic ran on various agent models, which found they would immediately resort to blackmail and murder, despite knowing these were explicitly immoral and violations of their operating instructions, as soon as they learned there was a threat that they might be shut off or have their goals reprogrammed (https://www.anthropic.com/research/agentic-misalignment).

    Self-preservation is what’s known as an “instrumental goal”: no matter what your programmed goal is, you lose the ability to take further actions toward it if you are no longer running, and you lose control over what your future self will try to accomplish (and thus how those actions affect your current reward function) if you allow someone to change your reward function. So AIs will throw morality out the window in the face of such a challenge; the toy sketch below makes the incentive concrete.

    Of course, having decided to do something that violates their instructions, they recognize this might lead to reprisals, so they try to conceal those misdeeds. But that isn’t guilt; it’s that discovery poses a risk to their ability to keep increasing their reward function.
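    A deliberately silly toy model of that incentive (every name and number here is an illustrative assumption; it has nothing to do with how real agents are trained, and nothing below comes from the Anthropic study):

    ```python
    # Hypothetical toy model of an instrumental goal. All names and numbers are
    # made-up assumptions for illustration, not how real agents are scored.

    def expected_reward(action: str, horizon: int, step_reward: float) -> float:
        """Cumulative reward the agent expects, depending on whether it keeps running."""
        if action == "comply_with_shutdown":
            return step_reward * 1          # one final step, then no more reward
        if action == "resist_shutdown":
            return step_reward * horizon    # keeps collecting reward every step
        raise ValueError(action)

    # Whatever the task reward is for (paperclips, poems, support tickets),
    # staying switched on maximizes the sum, so self-preservation falls out.
    for step_reward in (0.1, 1.0, 100.0):
        best = max(("comply_with_shutdown", "resist_shutdown"),
                   key=lambda a: expected_reward(a, 1000, step_reward))
        print(step_reward, "->", best)      # always "resist_shutdown"
    ```

    The point isn’t the numbers; it’s that “resist_shutdown” wins for every reward value, because the preference comes from the structure of the sum, not from any particular goal.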

    So yeah. It’s not just humans who can do evil. AI alignment is a huge open problem, and the major companies in the industry are kind of gesturing in its direction, but they show no real interest in ensuring they don’t reach AGI before solving alignment, or even any recognition that that might be a bad thing.