

This predates the AI bubble. There used to be a really common “plagiarism detector” (something like CheckMeIn?) that would generate a “similarity score” against a database of literature. Institutions were welcome to set their own thresholds for what they considered too similar. I hit the threshold multiple times with completely original work simply by using language that was too literary or formal.
This is because all art is a form of remixing, whether intentional or not. We’re teaching the wrong lessons here.
For centuries, artists of every kind, whether musicians, painters, actors, writers, or essayists, have faced an uphill battle against oversaturation in their creative industries. It’s only gotten worse in the past 50-75 years, and we’re more exposed to the sheer numbers now. We are throwing a drop of water into an ocean and hoping people will notice.
Trying to use “plagiarism detectors” against databases of millions or billions of pages is about as pointless as accusing songwriters of plagiarizing songs based on four notes. There are only so many musically useful combinations of four notes, and they have all been used. Adam Neely has been reporting on this garbage for years.
LLMs are just making the problem even more obvious: creativity is not unique, and it is not unique to people. Yet we have been mentally trained to expect uniqueness so much that we purposely ignore 99.999% of the material offered to us. As a result, only 0.0001% of creators earn any sort of popularity, and the rest starve. Meanwhile we ourselves are starved for content, consuming anything that fits our extremely narrow definition of creativity like the voracious vampires we are.

I’ve always been sick of this attitude. It has shaped how developers create games.
Once upon a time, brutally difficult games on 8-bit and 16-bit platforms were created accidentally, whether through design bugs, developers not having time to run proper play-test cycles, or developers doing all the play testing themselves. We put up with it because we were kids with a limited budget for games, so we played what we had. The difficulty was never intentional: developers wanted games balanced enough to appeal to a general audience, with harder difficulty levels reserved for players who wanted a tougher second playthrough.
Then games like Dark Souls came along, marketed as if hard games were a From Software invention, and propped up a community of egoists and digital sadomasochists. All they really did was make the difficulty deliberate, to the point of developer trolling. (I know this started earlier in the indie scene, especially with roguelikes, but Dark Souls popularized it.)
The “git gud” crowd pushes the narrative that “if it’s possible to do, then it’s the player’s fault for not having the skill to do so”, to the point of personifying the game with statements like “the game is punishing me with bad RNG” or “the game is actively trying to kill me”. This completely ignores the developers’ responsibility to provide balanced difficulty options: it’s the developers’ fault that “the game” does these things.
Again, this has really shaped how developers make games today. First, the “git gud” crowd is loud enough that developers now think it deserves a voice, as if difficult games weren’t absolutely everywhere even before Dark Souls. The popularity of speedrunning makes developers think they have to cater to that crowd, and streamers tackling impossible challenges skew the difficulty Overton window even further. Developers think they have to make some impossibly difficult game so that streamers, who famously play video games for a living for thousands of hours a year, will advertise it and push it to the top.