I code and do art things. See https://private.horse64.org/u/ell1e for the person behind this content. Many of my projects are at https://codeberg.org/ell1e.

  • 6 Posts
  • 142 Comments
Joined 8 months ago
Cake day: July 16, 2025

  • ell1e@leminal.space (OP) to Fuck AI@lemmy.world: Lemmy may be heading down the path of LLMs
    edited 4 days ago

    > Then the PR can be evaluated, rejected if it’s nonfree or just poor quality

    I don’t see what’s difficult about rejecting “if it’s nonfree or just poor quality or known LLM code”. That doesn’t seem like a vague criterion to me.

    And for many projects, if you admit code is from a StackOverflow post, they will reject it as well unless you can show it’s not a direct copy. That isn’t commonly taken as incentivizing people to lie.

    Now whether you think LLMs are worth the trouble to use is a different discussion, but the enforcement point doesn’t convince me.

    There is also a responsibility and liability question here. If something turns out to be a copyright issue and the contributor skirted a known rule, the moral judgement may look different than if you knew and included it anyway. (I can’t comment on the legal outcomes since I’m not a lawyer.)






  • > far too many “fuck AI” people are literally advocating for the equivalent of “fuck computers” and “more tedious labor please!”

    Not what I’m advocating for.

    > We need to be pointing to good applications of AI

    Feel free to do so, but the studies are not on your side. Edit: this is a reminder that we’re talking about LLMs for code and documentation.

    The only somewhat clearly useful use case appears to be code review, but then you don’t need to allow submitting any LLM-rewritten code or text, since code reviews can be done in natural language. And if you use server-side LLMs, you’ll probably have to agree to ToS that let them steal your data.

    And LLMs seem to be amazing at plagiarism.


  • In my opinion, this argument is exactly the same as saying “we can’t enforce a ban on people copying GPL-licensed code into our project, so we might as well allow it and ask them to disclose it.”

    You can try to argue that AI may actually be useful, which seems to be what they did, and in my opinion that would more fairly inform a policy. I don’t think your argument does.