  • Just one paragraph? I understand why that feels like an indicator of LLM use these days, but it’s actually a fairly common mistake for human writers. The author decides to move a topic to a different section, copies it, rewords it to suit the new placement, and forgets to remove it from its original spot. A pro shouldn’t be making that kind of mistake, but it’s a particularly hard one to catch in review, especially if you’re the author, because of your own familiarity with the article. The only effective way I found to combat those kinds of mistakes in my own writing was to delay my review, sometimes by a day or two, after significant writing or edits. Clearly that strategy is unworkable in a fast-paced journalism setting, where that kind of gap between writing and editing can’t meet deadlines.

    This would look a lot different from the similar AI-slop tell I see in news articles that repeat the headline across multiple paragraphs in a row with different wording and no new details or clarifications. I couldn’t find the repeated paragraphs you’re talking about in this article. Calling back to earlier points in an essay with multiple subsections, even repeating important points and details, is often just good writing.

  • That also sounds a lot like the kind of comment that Reddit (and Lemmy, and really any social network with votes) grooms for if you prefer upvotes to arguing with pedants and trolls. Eventually all you’re left with are boring, overqualified comments, or inflammatory ones when the mob rules, because you’re optimizing for the most popular and engaging answer. It’s like conversational least-squares analysis.

    I wonder where the LLM trolls are. Maybe they’re just so subtle that we haven’t noticed them. Maybe LLMs aren’t hallucinating answers so much as they’re trolling us. And here is where I qualify my answer in an attempt to quell the fools who might think anything I’ve said here implies that LLMs are anywhere close to sapient.