Policy on LLM code?

Hi, what’s the policy on LLM-generated code when making MRs?

I ask because I thought I would ask ChatGPT to implement Mixer Brush in Krita, and, having spent a few days going back and forth with it, to my surprise I seem to have something workable. But I don’t know if there’s any chance of getting it integrated into Krita because, frankly, it’s fully vibe-coded and I don’t understand the code very well myself. All I can say is that it’s a fairly limited addition to the color smudge brush engine, and it shouldn’t have much of an effect outside that context. But I don’t know if it’s worth bothering anyone about it further, because being coded by an LLM might be a non-starter.

Thoughts?

The problem is that you don’t know what ChatGPT did.

Did it do too much? Did it introduce a bug or a vulnerability?

It works for you, but a developer has to code-review it, and that takes precious time. And if the developer has questions or finds problems, you have to fix them. So yes, you can make an MR, but whether it gets merged is another story.

@halla What is your opinion?

1 Like

Yes. It was a little more involved than just one-shotting the spec and telling ChatGPT to implement. I did go through the logic and corrected course a number of times as the thing took shape, but the specific implementation is not something I could have done myself and I don’t entirely understand the details of it. So there is that problem.

My thinking was that if there is, in principle, interest in this kind of contribution, I would next see about splitting the code into separate commits, understand to the best of my ability what’s being done, and try to make it as clear as possible to read what’s going on. But if using an LLM is a no-no to begin with then of course I won’t do that.

It’s kind of a shame. On the one hand, this is a feature that’s been requested time and again; if I recall correctly, there’s even a feature request for it. Not to mention all the time and effort you’ve put into it.
But on the other hand, there are the arguments raised by @BeARToys and the uncertainty surrounding the legal background and status of the code used by the LLM. In particular, regarding whether copyrighted code is embedded in it, since Krita’s code must be verifiably compatible with the GNU General Public License (version 3 or any later version).

Michelist

1 Like

When you submit code for review, you should:

  • Understand what the code does
  • Make sure it’s compatible with the license

If either of these is in doubt, then submitting the code should be in doubt.

My advice would be to start over. Reimplement it from scratch so that you know what the code’s doing and it was written by you. But don’t be afraid to ask for help (from humans). I think developers would rather help you figure out how to write proper code than have to review made-up code.

5 Likes

hiya, just joined here because I feel pretty strongly about this, even though I don’t do art digitally (and even then I don’t usually share it)

just out of solidarity to artists alone, we probably shouldn’t use AI to code for…an art program.[1]

Feels wrong.


  1. disregarding all the issues with code quality that were best exemplified by the Claude Code source tree leak earlier this week, which revealed horrendously disorganized, messy code with bad practices that doesn’t follow any code style. In other words, it perfectly exemplifies many of the issues with AI coding ↩︎

2 Likes

I second this, I feel this technology has no place anywhere, but especially not in a program for artists.

We should not be supporting anything taking the creative output of all humans and soullessly replicating it. Not for code, not for art, not for writing, not for anything else.

1 Like

The problem here is that you cannot distinguish AI-written code from human-written code if the AI is used as a tool rather than as a replacement. And code is among the material LLMs are most heavily trained on, since it has a clear structure and syntax.

So the programmer can submit vibe-coded content without us knowing it was AI, if it’s used carefully.

You also do not have to be afraid that the code is under someone else’s license. Try to prove the code is from an LLM. Go on…

Scenario:

You: Hey AI generate code!

Ai: … Here you go…

You: ok, but I just need that part. copy, paste, run… works as expected.

Someone else: but that comes from AI!

You: don’t know what you mean :person_shrugging:

I am exaggerating here to paint a picture, so no offence.

So comparing the use of LLMs for programming with the use of AI to generate images is, in my opinion, like comparing apples with oranges.

I think that CoMaps actually has a really solid policy around that: https://codeberg.org/comaps/Governance/commit/01b50a60a74bb09ec8901386b441350caf0028da
(in turn based on Forgejo’s): https://codeberg.org/forgejo/governance/pulls/325/files

So just to apply it to OP’s case: since they say it’s purely vibe-coded and they don’t understand the actual code, it would instantly break rule 6 alone.

I think the same could be said about art, and in some cases it’s difficult (at least for me) to tell whether something is “ai”-generated, even when it isn’t used solely to assist.

My feeling on this is that it is important to be against it on principle. That we do not accept anything from these slop machines.

It’s not always possible to determine whether something is the output of a generative model, and we should be pragmatic there. It is also important to assume someone is innocent until proven guilty.

This is the same with code licensing, it is very difficult to determine whether someone actually owns the code they are submitting. That none of it is copy-pasted from anywhere and that they’re not doing it on company time. (in which case, the company could own it!)

Free software already relies on a whole bunch of goodwill, and all we can do is have a public stance against generative models in all of their uses.

If code (or assets, writing, etc) is eventually determined to be “ai”-generated, what to do with it should be, in my opinion, determined on a case-by-case basis. If it is simply not practical to revert it, it’s not necessary to. Do what we can and stand by our beliefs.

1 Like

Hello!

Throwing in my two cents. I think there are practical and ethical reasons not to use LLMs, but other people have made that point, so I’ll address the practicality of an anti-AI policy, because this is a common concern.

First, even if it’s just the core devs promising not to use it themselves and not to accept known AI contributions, that’s already an improvement in my opinion.

Second, a fair number of AI users are loud and proud of it and may be annoyed by the existence of such a policy, so even just excluding that group is fine with me. Perfection needn’t be the enemy of better, and we don’t necessarily need to go on “witch hunts” on every PR.

Third, anyone can submit code that violates copyright, but that’s not used as a reason not to have a license.

And last, the argument that code is so different from art in the possibility of someone passing off AI output as their own is unconvincing to me. People do the same with art, and they may or may not be found out, but we can reject it when we know.

Speaking as the Krita maintainer, I don’t want us to accept any LLM-generated code into Krita.

Some may say, as long as the contributor understands the code it’s fine, but that’s a fallacy.

No programmer really fully understands the code they are working with (otherwise bugzilla would be empty); but with LLM-generated code they haven’t even put any thought into it. That code will be unmaintainable.

And that’s even completely apart from the fact that using LLMs to generate code lowers your cognitive capacity, burns up the planet, destroys water sources all over the world and takes away from people what fits them best: being creative together with other humans.

The right thing to do, after you’ve experimented in this environmentally disastrous way, could be to describe the feature you reached and wanted and file a wish item in bugzilla, or discuss it here, and then let people who can code implement it in a responsible way.

16 Likes

If the code breaks, will you even be able to fix it?

1 Like

I’m an artist who uses Krita, and also a software engineer. I absolutely think Krita should have a policy that disallows ANY LLM-generated code in the project. There are legal, ethical, environmental, and existential reasons not to support the use of generative AI in any field, including engineering.

Honestly, I don’t think the “quality” of the addition even matters - whether it’s comprehensible, maintainable, etc. is secondary to this:

If you think implementing a long-desired feature will make some people happy or attract some users who have otherwise avoided Krita, you need to factor in the reputational cost of allowing LLMs anywhere near a program used by artists. Personally, I’d immediately be looking for a fork of the project if Krita knowingly accepted LLM-generated additions, because willingness to accept such features is a failure to understand the community you’re building for and the world you’re living in. The features might make the program “better” in a technical sense, but at the cost of exposing a severe misunderstanding of your users.

6 Likes

I have dealt with that in my own projects by making contributors sign an agreement or be banned from contributing altogether.

Someone who does not aim to respect the contributing guidelines, whether that is “detectable or not”, should not be allowed to contribute.

Besides, again, the Claude Code source tree that leaked this week definitely shows how easy it is to identify low-quality, unmaintainable code spat out by generative AI tools.

Halla has spoken. Case closed at least for me.

4 Likes

Hoping other maintainers will share the same views. I really don’t want to see Krita embrace this very unethical technology. I want to see it reject all of it outright and be a beacon of hope in a world full of slop.

1 Like

Thank you for making me love Krita even more. Love to see an art software actually being on the side of artists, unlike some other companies…

For now… I fear for the future of open source once AI is good enough at writing code that people can’t detect it… Using AI without disclosure to generate code has to be criminalised as soon as possible, before it’s too late.

Hi!! I’m also a software dev by day who has been loving using Krita for personal projects in my free time!!!

I agree with a lot of the points above about the environmental and sociological detriments of using gen-AI code for a feature here. I would love to continue to use Krita without worrying about whether any of the codebase is gen AI. I also think a great course of action would be to describe the issue that needs to be solved, make a plan from there, and/or ask for assistance. :heart:

2 Likes