I code and do art things. Check https://private.horse64.org/u/ell1e for the person behind this content. For my projects, https://codeberg.org/ell1e has many of them.
- 6 Posts
- 142 Comments
ell1e@leminal.space to
Linux Gaming@lemmy.world • Lutris dev says he's cool with AI generated bugs because his code is already full of bugs • English
3 · 2 days ago
How would such limited use fix the plagiarism? Here’s a lawyer demo’ing the issue: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567
This isn’t legal advice. Check out the link and form your own opinion.
ell1e@leminal.space to
Programming@programming.dev • LLM Code and FOSS licenses are in conflict. • English
5 · 2 days ago
Some highlights from this talk: https://github.com/LemmyNet/lemmy-docs/issues/413#issuecomment-4105667974 Quote: “Obvious, this is a copyright infringement.”
ell1e@leminal.space to
Linux Gaming@lemmy.world • Lutris dev says he's cool with AI generated bugs because his code is already full of bugs • English
354 · 3 days ago
Sadly, it seems to be fairly common to have at least some AI slop code now. E.g. Lemmy itself appears to be planning to allow it too.
It’s as if having slop earned you some kind of prize.
ell1e@leminal.space to
Programming@programming.dev • We Overhauled Our Terms of Service and Privacy Policy - Another VC funded bait and switch • English
1 · 3 days ago
Kate is a great minimal VS Code alternative. Sure, it has fewer features, but it covers the basics.
ell1e@leminal.space to
Linux@programming.dev • Systemd’s New Feature Brings Age Verification Option to Linux • English
3 · 3 days ago
Relevant article: https://www.gnu.org/philosophy/you-the-problem-tpm2-solves.en.html
And if anybody thought TPM provides security:
- https://www.elevenforum.com/t/tpm-2-0-is-a-must-they-said-it-will-improve-windows-security-they-said.13222/
- https://gist.github.com/osy/45e612345376a65c56d0678834535166
- https://www.sophos.com/en-us/blog/serious-security-tpm-2-0-vulns-is-your-super-secure-data-at-risk
- https://www.covertswarm.com/post/how-secure-are-tpm-chips
Reader, you know what’s likely most secure? FOSS code, peer-reviewed and regularly patched.
I don’t get why one would trust security theater, a.k.a. TPM and Secure Boot.
That doesn’t take into account the extensively researched plagiarism concerns. It’s not just that LLMs produce low-quality slop; some of us think the GPL won’t work if you can train LLMs on GPL code and then have them spit out GPL snippets un-GPL’ed.
Some people literally un-GPL projects via AI in one go. While that’s the egregious version, any LLM use seems to risk having a similar effect at a smaller scale.
This isn’t only a legal question. At least if you think the GPL has societal and moral value.
Problem is, LLM code prediction will likely plagiarize too. Some argue “it’s too short to get sued over”, but even if that were universally true (I don’t know, IANAL), that still leaves the ethics and morals of seemingly lifting lines, with every punctuation mark and intricacy, hook, line, and sinker from GPL code bases without attribution.
Some simply think that’s bad for FOSS, notwithstanding the other ways LLMs seem to harm FOSS.
(And oldschool “IntelliSense” is semantics based and doesn’t do that.)
There is a growing list of projects to collaborate with that reject LLM code: Asahi Linux, elementaryOS, Gentoo, GIMP, GoToSocial, Löve2D, Loupe, NetBSD, postmarketOS, Qemu, RedoxOS, Servo, stb libraries, Zig.
My opinion is that the data disagrees with you:
1. https://www.psu.edu/news/research/story/beyond-memorization-text-generators-may-plagiarize-beyond-copy-and-paste
2. https://dl.acm.org/doi/10.1145/3543507.3583199
3. https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7
4. https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
5. Related high-profile incident that is very telling: https://www.pcgamer.com/software/ai/microsoft-uses-plagiarized-ai-slop-flowchart-to-explain-how-github-works-removes-it-after-original-creator-calls-it-out-careless-blatantly-amateuristic-and-lacking-any-ambition-to-put-it-gently/
In the US at least, there’s clear legal precedent that LLM fabrications are not copyrightable.
I see many people doubt this says anything about the copyright of the training data, as opposed to the copyright of the AI user’s output.
This isn’t legal advice, I’m not a lawyer.
Then the PR can be evaluated, and rejected if it’s nonfree or just poor quality.
I don’t get the difficulty of rejecting “if it’s nonfree or just poor quality or known LLM code”. I don’t think it’s a vague criterion.
And for many projects, if you admit it’s from a StackOverflow post, they will reject it as well unless you can show it’s not a direct copy. This isn’t commonly taken as incentivizing people to lie.
Now whether you think LLMs are worth the trouble to use is a different discussion, but the enforcement point doesn’t convince me.
There is also a responsibility and liability question here. If something turns out to be a copyright issue and the contributor skirted a known rule, the moral judgement may look different than if you knew and included it anyway. (I can’t comment on the legal outcomes since I’m not a lawyer.)
I was asking for good uses of LLMs since we were talking about those. Sorry for being unclear.
deleted by creator
ell1e@leminal.space to
Privacy@lemmy.world • Break privacy to make privacy? Age verification isn’t the answer • English
1 · 4 days ago
Reminder that the EU seemingly wants to do a UK Online Safety Act equivalent, supposedly coming in July 2026: https://leminal.space/post/31858818/21120139 For some reason, I still haven’t seen any press about it.
We were talking about lemmy and LLMs. They’re not part of any use case you’re listing.
But my apologies if I missed something here.
“far too many ‘fuck AI’ people are literally advocating for the equivalent of ‘fuck computers’ and ‘more tedious labor please!’”
Not what I’m advocating for.
“We need to be pointing to good applications of AI”
Feel free to do so, but the studies are not on your side. Edit: as a reminder, we’re talking about LLMs for code and documentation.
The only somewhat clearly useful use case appears to be code reviews, but then you don’t need to allow submitting any LLM-rewritten code or text, since code reviews can be done in natural language. And if you use server-side LLMs, you’ll probably have to agree to ToS that let them take your data.
And LLMs seem to be amazing at plagiarism.
In my opinion, this argument is exactly the same as saying “we can’t enforce people not stealing GPL-licensed code and copy&pasting it into our project, so we might as well allow it and ask them to disclose it.”
You can try to argue AI may actually be useful, which seems to be what they did, and that would inform a policy more fairly, in my opinion. I don’t think your argument does.
It’s sad. I’m hoping perhaps some well-reasoned comments might still have some impact, but I admit that it might be a long shot.
You’ll see further below; sadly, the Lemmy team seems to have reversed their opinion immediately after. See also here: https://github.com/LemmyNet/lemmy-docs/pull/414/changes
So is `function isEven()` a prompt with exact wording from an example, too?
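For context, the snippet in question is about as generic as code gets. A minimal sketch (the parameter name and body here are my own illustration, not quoted from any particular source):

```javascript
// Hypothetical illustration: this near-identical function appears in
// countless tutorials and code bases, which is exactly why such short
// fragments blur the line between "common knowledge" and copied code.
function isEven(n) {
  return n % 2 === 0;
}
```

With fragments this trivial, it’s hard to argue any single source was plagiarized, which is presumably the point of the rhetorical question.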