Plugins and Tools with integrated AI or AI dependencies no longer tolerated on the forum

Until now there was a kind of loophole in the forum rules (or rather in how they were interpreted) regarding plugins and tools with AI integration. While we have a strict rule against AI content, there was always the opinion that anyone can make any plugin they want. There are no limits on what you can do with Python in Krita other than those imposed by the lack of API functions. Krita still does not impose any rules on what plugins you can create or use; that wouldn’t make sense anyway. However, that doesn’t mean they must be allowed on this forum.

Since Krita’s core is AI-free, and plans to stay that way even to the point of not allowing AI-generated code, it only feels natural to close this loophole and make clear that plugins and tools that integrate AI, connect to AI services, or are AI themselves will no longer be tolerated. No distinction will be made between generative and other AI tools.

This rule is specifically for plugins that contain or connect to AI in any way, either as an integral part of their functionality or as an optional one. It is not about plugins that are “vibed” or otherwise coded with AI assistance. This is because AI-assisted code is hard to detect, and for some people it is hard to know where smart autocomplete or classic code generation ends and AI begins, especially since the lines get blurred in some IDEs. We also cannot review every plugin and check for signs of AI-generated code. The forum rules don’t change in that regard: it is still encouraged to disclose the use of AI for coding help (like Copilot or the like), so users who want to use the plugin can make an informed decision. It’s good to see that users already did this on their own even before we made it an explicit part of the forum rules.

Generally, one should always be aware that a plugin is not automatically more trustworthy just because it was not created with the help of AI tools.

We are also aware that not every AI is created equal and that AI has become an umbrella term for a lot of already existing and useful tech that was around long before LLMs and image generators. There are tools out there that are called AI for (probably) marketing reasons despite barely qualifying. Exceptions may be granted for such tools if they are ethically sourced, local, small-scale, community-driven, and open source. The Krita Fast Line-Art tool can be seen as an example of this.

Currently only one public topic is affected by this rule, which is the Smart Segments plugin. A few hidden topics are currently under review while we have this discussion among the moderators and staff; those will be deleted too.

The rule is not limited to the Resources > Plugins and Scripts category but also applies to feature requests asking for AI to be implemented, and to similar questions about how to do it. Users have always been quick to flag these topics in the past.

44 Likes

That feels good.
Thank you!
:person_bowing:

Michelist

5 Likes

I’m thinking that this announcement should be pinned for a while.

10 Likes

Good point.

7 Likes

This is good news, and a good rule!

5 Likes

I think this is a great step forward, but I also feel it is not enough.

GenAI as a whole is unethical, no matter what it is used for, so I think we should be condemning all GenAI code (whether completely vibed, partially vibed or just autocompleted with LLMs) the same way as we condemn all other uses of the technology.

We should all stand together against it in solidarity: artists, programmers, writers, reporters, etc. It is not great to allow one but not the other. We must have a stance against it all. Let’s not throw any profession under the bus.

Yes, I know we cannot always tell whether code was produced by a GenAI model, and I do feel we need to make sure we do not get any false positives, but I do think we can do more than nothing.

A policy could be that we condemn all uses of GenAI, no matter what it is used for, and if something is very obviously created using GenAI (partially or fully), it should be banned. The important part is that the Krita project condemns all GenAI.

For code, this would mean checking for:

  • Assisted-By / Generated-By / Co-Authored-By
  • The author(s) explicitly stating they used GenAI
  • The project README very obviously stating endorsement or usage of GenAI

Also, the term “AI” is so vague now that it has little to no meaning, so I use the term GenAI to mean LLMs and diffusion models trained on lots of scraped data and used for the purpose of regurgitating human creativity or knowledge.

Also, as a bit of an aside, I don’t think we should be suggesting that code generated using GenAI might be trustworthy. It is not.

7 Likes

We simply don’t have enough people to review every plugin and check whether it was made with the help of AI. If it is not disclosed, it is sometimes very hard to tell whether it is AI. Many plugins are made by beginners and amateurs, and even their legitimate code can be mistaken for AI-generated code simply because of the way they write it.

It also fosters a community of mistrust when people accuse each other and demand proof of innocence.

Just recently someone suggested that all artworks need to prove that they are not AI-generated or have not used AI assistance. However, this is impossible, and it is also something that I don’t (personally) want on this forum, for the same reasons as mentioned above. I don’t want to accidentally persecute an innocent person just to eradicate AI. I don’t want this to become a hostile community.
We still have the report button for AI generated content, if someone suspects something to be AI, then we will check it as best as we can but we won’t check every post as it’s made.

I understand your point, but I fear that when people get shamed for using AI in their coding (not saying they shouldn’t be), they will simply lie about it, and then plugin users will never know that the plugin in question might be slop.

No one did.

7 Likes

I 100% agree with you that we really do not want to foster a community of mistrust; we do need to assume the best of people. Witch hunts and such would also be very bad.

That’s why I would like a policy where obvious endorsement of GenAI is banned. So, for example, if the author explicitly says it was vibe-coded or that they used GenAI assistance, or the project README says it. This could be reviewed quickly.

People may hide it, and we can’t prevent that. But at least there will be an obvious stance against GenAI, and I think this is the most important part.

If banning these plugins is too much, maybe have a tag stating it’s problematic? Like F-Droid anti-features.

If we applied the same policy to submitted art, it would pretty much mean that if the author says it was created with GenAI or GenAI assistance, then it is seen as problematic. Without that, no action is taken. People will be able to smuggle in GenAI art, but at least the Krita project would have a clear stance against it.

No one did.

Sorry, I misread this part. :’)

1 Like

We already have a policy against AI generated images and posts on this forum.

7 Likes

It was more meant as an illustration for how it could be applied to code.

Doing whatever is easily possible to go against GenAI code and having a stance against it, while leaving the rest out of scope.

lumi be awake and actually read things properly challenge.

We still have the report button for AI generated content, if someone suspects something to be AI, then we will check it as best as we can but we won’t check every post as it’s made.

Couldn’t the same approach be applied to code? Do a very superficial review and go deeper on reports. It can be difficult to spot, as with everything else, but that doesn’t mean it’s okay. This is fundamentally very unethical tech.

The post flagging process/tool could easily be used for this:

The wording of the flag report description could be modified to add “or AI generated code”, or whatever wording is agreed to be suitable.

At the moment, you could flag a post using the AI-generated content classification and explain why you flagged it, i.e. for AI code, in the provided text box that opens up.

5 Likes

I feel this would be a good way to go about it. Don’t put undue burden on reviewers to check everything, and let the community make reports about obvious usage of GenAI.

So we acknowledge that it’s sometimes difficult to find out whether something was created fully or partially by GenAI, but still forbid it in its entirety. (Though, GenAI boosters tend to be very loud about this, so those cases will be obvious.)

Enforcement would then just require proof beyond any reasonable doubt that it was created by GenAI. This high standard would need to be there because we really would not want false positives, witch hunts, or people using the rules against others who are innocent.

Will things slip through? Probably. But that does not mean we need to compromise on our ethical standards.

Enforcement would also need to be considered on a case-by-case basis: if someone is clearly a beginner and not aware of the harms, it is best to educate them rather than hand out a harsh punishment. If it is a GenAI booster who really should know better, the punishment can be much harsher. If it is explicitly done to sneak GenAI in, it could be harshest.

This way we don’t need to compromise on our morals, it won’t place any undue burden on reviewers, and we won’t get a toxic community or witch hunts.

3 Likes

Here’s a reason I think you should reconsider. I’m a paediatrician. I look after children and young people with mental health conditions, trauma such as child abuse, and severe disability.

I look after one boy who, at 12 years old, was the state champion in Counter-Strike. He interacts with the game using an eye controller because he cannot use any other bodily function to do so, and mind-computer interfaces aren’t an advanced enough alternative. The controller isn’t good enough for drawing, but he wants to draw. He can write via an adaptive controller (the sort of thing that Stephen Hawking brought into the public domain).

So a natural language interface would allow him not to draw using generative AI, but to actually control the position of drawing tools on a screen. The drawing is based on him telling the controller what to do and where to go, as well as what tools and settings to use. His use of the model is based on an LLM trained through self-reinforcement learning to control cross-platform UIs.

To be fair, the underlying content, like that of a lot of LLMs, is almost certainly scraped from the internet, including copyrighted materials. However, the use case is based on transfer learning from legitimate UI interaction recordings.

If we’re considering the ethics of this, I guess the question is whether Mohist/Epicurean/Machiavellian consequentialism or Bentham’s utilitarianism would be reasonable, or whether the universalisability of Kant’s deontological framework suggests that the moral harm to the copyright owners would outweigh the benefits to those children and young people.

My 2 cents.

As someone who codes very often, I do think it is, to some extent, questionable to simply use Claude/ChatGPT to generate code for you, especially if you haven’t bothered to check it. I will admit that I have used Claude as a very last resort for code, but I always make sure I know what I’m doing, verify the content, and give it the direction I need. I only use it as an assistant in case I get stuck on code; most of the time, I write the code myself. Most people who use LLMs for coding generally don’t really check their damn code: they try to do everything with the LLM instead of keeping it small and limited to what they’re stuck on, and they don’t even bother thinking about optimization. At the end of the day, people should really avoid LLMs for coding. I’m not against using LLMs as a last resort if one is truly stuck on something, but they should still be responsible by testing and verifying, and by trying to write the code themselves.

That being said, I have nothing more to add here, as I already agree not to support GenAI. G’MIC does have machine learning capability, but the scope is extremely small: it is just for something like Krita’s fast line-art or, in the case of G’MIC, denoising. That very much falls in line with Darktable’s policy of using AI to edit photographs while forbidding the generation of new content with AI.

2 Likes

I believe a lot of these assistive technologies do not at all rely on LLMs or other GenAI, and if they do, they could be developed in a more ethical and sustainable way without it. Supporting GenAI for accessibility is putting our eggs in the wrong basket. In my opinion, we should be pushing for better assistive technologies, not for the assistive technologies to be an accidental side effect of big tech doing awful things.

Also, what I am against is specifically GenAI, not machine learning in general. Machine learning can be used for many good purposes and we don’t need to burn down the planet or soullessly regurgitate human creative outputs to do so.

I don’t tend to argue from a standpoint of copyright, as I feel copyright is a proxy for the deeper issue of soullessly attempting to replicate human creativity and knowledge.

I feel like a good trade-off is to concern ourselves with the actual outputs. If GenAI outputs are part of the code, from LLM autocomplete to copy-pasting from the LLM chat, to full vibe-coding, then it is not okay. The whole spectrum here should, in my opinion, not be allowed.

For the rest… I really would prefer people use search engines rather than ask questions to LLMs, but as expanding rules to things like this would become very iffy and counterproductive, I’m not opening that can of worms.

Would be happy just with a ban on LLM outputs being used in the code, partially or fully.

1 Like

Thank you for the considered response. You make some really good points, and I must admit that I jumped to assuming this was a good solution.

I’m a physician-executive and academic, and within that role I tend to be involved in transformation. That’s how I became interested in this. It was actually a discussion in the context of the Australian social media ban, pointing out that although there are harms from social media, banning it for young people causes harm to disabled young people who rely on it to interact. Within my cohorts, I began to think about how modern tooling could help the kids I look after, and their parents.

Connecting the dots, Microsoft is probably the main large technology company that focuses on accessibility, including their adaptive controllers (designed for Xbox but used for a range of other things). I have limited experience using tablets and only really know of Wacom, but I will look to see whether any digital art vendors provide accessible solutions.

The reason for this lead-in is that, in assuming that connecting AI with Krita would make sense and be a good idea, I had thought it would be helpful for the Krita community; I hadn’t considered the factors you’ve brought into play. As with the social media ban, I think bans on AI (and many other things) are fraught, because they tend to drive behaviours underground rather than actually stopping them (see Escaping the Interpreter’s Trap: Authorship, Trust, and Sociotechnical Governance in AI-Assisted Qualitative Research [I seem to only be able to put two links in the text]). I think that has relevance to this community, but I appreciate it might annoy some people for me, as an outsider, to waltz in and say that. My intent isn’t to annoy, and hopefully those of you reading this response will see the contrition and attempt at transparency.

I didn’t realise this community existed until after I had done some work related to this; open source communities are commonly fragmented. I had seen Krita’s web page, but not the forum, until I went to share what I’d done.

There was already some Krita tooling, which I adapted, but it was a bit clunky with limited API coverage. I adapted it to use a CLI as well as MCP, because that is often more efficient and because many people use a range of tools. Naively, I also thought I was helping/contributing. To make matters worse, I used an AI coding agent to do most of the work, and it is now also published on PyPI and conda. I apologise: if I had known how this community feels, I would have respected your views.

If I haven’t irreversibly pissed off too many people, I’m happy to take your guidance. I respect that this is your project, and I’ve learned a lesson here.

This is that link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6063174. For what it’s worth, I admire the time and effort people have put into developing the software. Your community is impressive and there is some amazing art work here. I get that this is a topic many of you likely feel strongly about, and might even be triggered by this, so please accept my sincere apologies.