Work in Progress: Stable Diffusion Plugin

I am working on a Stable Diffusion Plugin for Krita:


I know all the artists are on the fence right now about all the AI stuff, but when it all settles down this is going to be a must-have tool for every digital artist. The img2img feature alone will revolutionize how we do digital art, by turning processes like this and this into reality.
I'm running Stable Diffusion locally and it's mind-blowing what it can do. With its help I've done a painting in an hour and a half that would normally take me 6+ hours. It's amazing.

Now, about the plugin: which fork are you using? There's an optimized version that allows it to run on GPUs with low VRAM (the one I'm using) that would be nice to have. I'd also like to ask you to implement a precision_full option, because some GPUs, like the 1660 series (mine), output green images when not using full precision.
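To illustrate why such a toggle matters, here is a minimal, purely illustrative sketch (not the plugin's actual code, and `store_activation` is a hypothetical name): fp16 can only represent values up to about 65504, and GTX 16xx-series cards additionally mishandle fp16 math, which shows up as green or black output. Forcing fp32 sidesteps both problems.

```python
import struct

# Sketch of the half- vs full-precision trade-off behind a precision_full
# toggle. "<e" is IEEE 754 half precision (fp16), "<f" is single (fp32).
def store_activation(value, full_precision=True):
    fmt = "<f" if full_precision else "<e"
    try:
        # Round-trip the value through the chosen storage format.
        return struct.unpack(fmt, struct.pack(fmt, value))[0]
    except OverflowError:
        # fp16 cannot hold values above ~65504: the kind of numeric
        # breakdown that surfaces as corrupted (e.g. green) images.
        return float("inf")

print(store_activation(70000.0, full_precision=True))   # 70000.0
print(store_activation(70000.0, full_precision=False))  # inf
```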
A CFG Scale slider for the txt2img option would be nice to have (for img2img it's a must-have, in my opinion).
What does the HQ button you implemented do exactly?
Keep up the great work!


nice bro

Amazing bro, keep it up

Weeks ago, I was thinking: what if we used an AI to generate 3D models like CSP models (human figures, chairs, tables, desks, keys, bicycles, etc.)? It seems easier to me because the models aren't rendered; they're just plain gray models, without any shaders, rendering, color, or anything like that.

Idk, just a silly idea that came to me weeks ago.


That's a lot more complex than you think ;).
You can make a 3D model, give it some constraints for wheel radius and so on, and build various parts to generate different kinds of bicycles, but it's all basically prebuilt.

For an AI algorithm to take a prompt and make a model, you would need to train it on a gigantic database of 3D models, just like these image-generating algorithms are trained on huge, sorted image libraries.

There are currently algorithms that create 3D models from images: you take pictures of something from multiple angles, feed them in, and a 3D scene comes out.
It's rapidly improving with every new research paper, but it's still not at the point of being useful for this (though NVIDIA has pushed the boundaries quite a bit ;-)).

Getting that large database of models, which needs to be sorted (various tags describing what's there, lots of manual work), is quite something. I don't know if one is publicly available at the moment or if anyone is working on it. (Understand that to train an AI for a bicycle you need tons of bicycle models; if you want to train a general one that generates various models, you need tons of sources for every object, or some other way to figure this out.)

With improvements to image generation like this, you might eventually be able to generate the images for that 3D library, but that's still quite far away.

There are other options: for example, general AI improving to the point where it can generate models, or using engineering blueprints to teach it while borrowing the stylization techniques image-generation AIs use. There are probably more options.
All of it is quite complex, but we will probably get there decently soon :slight_smile:

Don't forget that in 3D, edge flow matters too; you need to teach your AI a lot of things.

@imperator Keep up the great work, I'm looking forward to trying this one day ;).


I am using the WebUI fork. So if you already have it running, plugin installation is trivial. I don't know if WebUI supports a precision_full option; if it does, it might be possible. But I will publish the source, so somebody else might find a solution.
I am planning to implement CFG and other UI improvements plus img2img today, and move on to inpainting tomorrow. I will change HQ to a reload button and add a steps parameter there. It basically makes a new generation with a higher steps value, so in this dialog it will run with more steps, which usually gives better quality. In the future there will be proper HQ anyway through text2imghq, which will hopefully be supported by WebUI.
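The reload-with-higher-steps idea can be sketched like this (hypothetical names, not the plugin's real API): re-run the exact same generation, changing only the sampling step count so the composition stays fixed while quality improves.

```python
# Minimal sketch of a "reload as HQ" pass: keep prompt, seed, and CFG
# scale unchanged so the image keeps its composition; only raise steps.
def hq_reload(params, hq_steps=50):
    refined = dict(params)
    # Never lower the step count if the user already set it higher.
    refined["steps"] = max(hq_steps, params["steps"])
    return refined

base = {"prompt": "a castle at dawn", "seed": 42, "steps": 20, "cfg_scale": 7.5}
print(hq_reload(base))
# {'prompt': 'a castle at dawn', 'seed': 42, 'steps': 50, 'cfg_scale': 7.5}
```

Fixing the seed is the key design point: with the same seed and prompt, the diffusion process revisits the same image, so extra steps refine rather than replace the result.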


Now with img2img and an improved dialog:


I have now implemented inpainting:


Art theft and a consumerist no-work, fast-results lust implemented into an art program? Amazing. I hate tech bros so much.


Without the "tech bros" there would be neither Krita nor your PC nor your graphics tablet; think about it.
Besides, you can also test this AI on photos you've taken yourself; you don't have to use sources such as museums or the internet, and even there, freely usable sources exist that you can work with.



Thanks so much for the work you are doing. I’m looking forward to trying it out :clap:

Earlier today I saw a post on Twitter about the same subject. There was a tutorial showing how to use Stable Diffusion in Krita.
So I spent all morning trying to install files at a command prompt I had never used before. It's now night and I still can't get it to work. It's not something for a noob; I'm persistent, but I must admit I gave up.
I had to work double in the afternoon since my morning was wasted (laughing nervously).


Are you talking about this?

Because, just like you, I'm not code-savvy, and I threw away my entire morning/afternoon recording each of the steps (so I could post an easy-to-follow video for the community later), but it turns out the author didn't know how to install the prerequisites for some of the things needed in the Python environment.
I ended up uninstalling everything at 11:50 pm, after giving up on a completely wasted day.

Hi @imperator, I guess once you have everything ready, you'll share this as a community plugin? Or is it just for your own experimentation and exploration?

Yes, I'm talking about that tutorial and my frustration/incompetence. Initially some things went wrong, but I kept trying until it worked. So the time wasn't completely wasted, as I learned some new things :slight_smile:


Thanks for the feedback!

  1. Installation of Stable Diffusion was very(!) difficult a week ago. But I tried it again (webui) and it was quite easy. You need an NVIDIA card (6 GB of memory needed). I cannot provide help for that part.
  2. I will try to offer support for Stable Diffusion on Google Colab. I have the Pro plan and tried Colab SD there, and it was not very fast. I have a 3600TI, which is a mid-sized card, and it is much faster. So Pro+ will give you better performance.
  3. My plugin is more advanced than KOI Krita, but I hope we can join forces, and I might give him maintainer status. I am not in contact with him yet; I just wrote him on Twitter a few minutes ago.
  4. Sharing: I am working hard to finish the first version this weekend, maybe by Monday. I will put it on GitHub.

omg, this may finally cover my need for Photoshop-style image fill

Good points. Hopefully the copyright issues won't be a problem. I still love drawing so much that I want tools that let me do it with my own hands, but some people use things like photobashing as a creative process, and that is fine. They still have to be careful about plagiarism, copyrighted images, trademark and IP infringement, etc., though, if they want to use it in a commercial context.


I don't think I'll use things like this in the near future, but I'm curious: my AMD card has 8 GB. Should that be enough, or does it depend on the architecture?


I don't like AI or things like photobashing for creating my works, because my brain is full of pictures and I use those. I only look at reference pictures when I'm drawing things I've never seen before and want them to look realistic to some extent, for example the inside of an engine. But it would never occur to me to then copy that engine. I'm all about realism as a principle in such situations: it should be able to look like this, but please, no copying.
Yes, I have clicked flyers together using clip art, but I don't consider that art and certainly not creative; that is dullness.



Hey! Thanks for working on this plugin. :smiling_face_with_three_hearts:

I'm super curious and excited about how I can blend this tool into my workflow, and about the possibilities of this AI image-generation technology for artists. This powerful new tool will certainly open up new creative shortcuts and processes for making art, and empower a wide new range of users. Exciting times ahead!

Btw, I feel puzzled reading the negative critiques here. I'm from a generation that already read the same type of critiques when digital art started to become a thing alongside traditional art 20 years ago. A similar wave of criticism happened when some started to experiment with painting over 3D models or scenes, and then it was the same tune when concept artists started using photobashing techniques.