You should always take one step at a time to avoid stumbling. Therefore, we will probably have to wait a little longer for one or two optimizations of a function, as long as they do not make Krita unusably slow.
On the other hand, I also have the feeling that users with increasingly less powerful hardware expect more and more performance, but that may just be my impression. With reasonable desktop hardware, Krita runs very efficiently, but today everything has to be even smaller and consume less energy, and then it's supposed to move mountains - I think that's the wrong approach, or squaring the circle.
Yes and no, in my opinion. I saw a post where a user was unhappy that Krita was unusably slow when editing a lot of 32-bit floating point layers. Well, yes - since Krita does not have GPU acceleration, this is to be expected even on decent hardware. On the other hand, I can kill Krita's performance just by adding a blur filter or an outer shadow layer style - and that happens on a 13th gen i7 with 64 GB RAM and an RTX 4xxx GPU with 16 GB video RAM. Or, using a brush with animated brush tips like the Memileo impasto ones, even on a decent notebook, Krita is not the fastest in that case.
But that's what I meant. It works quite well in most cases of painting, but as soon as one uses more complex editing, more complex brushes, or 16-bit or 32-bit depth, it can become challenging. The high bit depth part (e.g. 32-bit EXR) might be out of reach for Krita, because that is in the domain of GPU acceleration, but the rest might be optimizable. It's just a question of whether there is a demand for it.
Personally I'm tired of bloated applications with a massive resource footprint for no reason (looking at you, every Electron app ever made), and sometimes I feel like software gets terribly inefficient when it doesn't need to because devs get "lazy". But I think the Krita team does a good job with Krita.
Large painting projects will consume more computer resources; there is not really a way around it. Personally I'm not really a fan of the infinite canvas as a concept (my personal taste). I worked with some apps in the past that had it, but the final format of the artwork is just as important for the composition of a painting as the elements of the painting itself, so I find it can mess everything up if it is not thought about beforehand. But I love infinite canvases for note taking and doing concepts and stuff, like having a giant whiteboard you can sort your ideas on - just not so much for painting, though.
I am not even sure if there are "the devs" at all. As I understand it, over the decades people joined as devs and left later. Some found an open source lib and used it within Krita. At the time the lib might have been up to date, but maybe not later anymore. New devs joined and used the Krita architecture as they found it, added new stuff, new libs etc. From time to time there was a "tidy up" here and there, but all in all Krita grew like software often grows - without a constant overhaul of its internal workings. I think in the dev world it's called "technical debt". The longer you don't pay this debt by not revisiting your code and architecture, the higher the interest rate you have to pay. In the end it leads to not optimizing at all anymore, because the risk of damaging the whole software gets too high and fixing it would just take too much time and effort (not true for Krita, as I see they are touching a lot of areas in it and trying to make them better - just check the thread in the feedback category about getting better canvas rendering speed).
Maybe one challenge of Krita is that it tries to do everything: painting (good), vector (weak), animation (weak), image editing (ok-ish, but slow in some cases), high bit depth (slow), text (weak in 5.2.x, but good with 5.3 if the new text tool does not add slowness). But there are only a few full-time devs and a few volunteers - with respect to this, Krita is quite amazing.
EDIT: Typo correction: it's not "technical depth" but "technical debt".
I think it's true that loads of developers have contributed over the years, but having the same maintainer for the last 20 years has probably lent a good bit of consistency. I'm not a developer, just a lay user, so maybe my viewpoint is not entirely correct.
By the way, thanks for once again teaching me with your experience of this software.
I also noticed that Krita renders a 1000 px soft brush quite fast with Instant Preview Mode ON.
Unfortunately, the transitions between dabs are very laggy with this setting.
Hm, based on your feedback I made two more quick tests (Krita 5.3, no plugins, Windows 11):
Notebook 1:
10th gen i7 notebook (on battery, not plugged in)
default airbrush at 1000px
canvas 3400 x 2550 at 300 ppi
Brush stroke is very smooth with Instant Preview ON and a little bit less smooth with it OFF. Both are very usable.
CPU use is around 13% (with web browser in background etc).
Notebook 2 (10 years old):
6th gen i7 notebook (on battery, not plugged in)
default airbrush brush at 1000px
canvas 3400 x 2550 at 300 ppi
Brush stroke is stuttering with Instant Preview ON but usable. With Instant Preview OFF it is very laggy - I would not like to use it like that.
CPU use is around 13%.
_____________
So, I need to use the slowest notebook on battery, not plugged in, to get the worst performance and test Leonardo again.
In this case the Leonardo airbrush is still fast (I see the "processing pixels" message on the canvas, but it does not disturb further strokes).
CPU use jumps between 9% and 22%.
Summary: on weak hardware, Leonardo's airbrush is faster than Krita's - so much so that I personally would not use Krita for such painting tasks. Leonardo, on the other hand, is ok.
I have written it in other threads: when doing optimization work in the code, test it on the slowest hardware you can find.
UPDATE:
I used both apps with a pen tablet. Krita gets a bit better without it - just using the touchpad of the notebook gives a little better performance.
Using judiciously chosen old hardware for tests is actually fundamental to maximizing the outcome of optimization and performance work. Glad someone other than me shares the same opinion.
Hopefully, @Takiro also acknowledges that Krita has lots of room for optimization.
Only if your goal is to optimize for old hardware, which in turn can make things run worse, or at least not better, on modern hardware. This phenomenon can often be seen in old video games. As hardware changes, software can do things differently to get better performance out of the new capabilities (like a different instruction set for chips, for example), which wouldn't be possible on the old hardware. Optimizing for old hardware will not automatically make it better on new hardware too. It depends on the software, of course. I don't know the depths of Krita well enough, but there is normally a reason why old hardware gets left behind at some point, and that is because it holds the software back.
That's exactly why I said "judiciously". "Judiciously" old.
I know that PC CPUs or chips aren't the same as, for instance, Android or PlayStation chips.
The PS2 could run a game like Kingdom Hearts 2 on 32 MB of RAM.
A PC itself cannot do that and will need at least 4 GB of RAM to handle the same game.
If the software is using e.g. DX12, OpenGL 4.5 or AVX2 instructions, the "old" hardware should of course support those. It does not make sense to test AVX-optimized code on a CPU that does not offer it.
Another factor is storage. If the code is optimized for SSD I/O handling, it would be useless to put a traditional hard disk in the test box.
But in the case of Krita on Windows, most decent 10-year-old PCs fit that bill.
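As a small aside on matching test hardware to code paths: a best-effort check for a CPU feature flag could look like this sketch. It is Linux-only (reading /proc/cpuinfo), the helper name is made up, and this is not Krita code:

```python
def cpu_supports(flag: str) -> bool:
    """Best-effort check for a CPU feature flag, e.g. 'avx2'.

    Linux/x86 only: scans the 'flags' lines of /proc/cpuinfo.
    Returns False when the file is unavailable (other platforms).
    """
    try:
        with open("/proc/cpuinfo") as f:
            info = f.read()
    except OSError:
        return False
    return any(flag in line.split()
               for line in info.splitlines()
               if line.startswith("flags"))

# e.g. a test harness could skip AVX2-specific benchmarks
# on hardware that does not offer the instruction set
if not cpu_supports("avx2"):
    print("AVX2 not available - skipping AVX2 code path tests")
```

Real projects would usually use a proper CPU-feature library or compiler intrinsics instead, but the idea is the same: verify the capability before benchmarking code that depends on it.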
Another question is how the tests are done. If the devs implemented unit tests that do performance measurements automatically, you would get useful data on any hardware - new or old.
But I guess that in the Krita world the testing is done manually, "by hand", and performance is more a perception of the user. In that case a slow PC might reveal performance gains more obviously than a modern high-end system.
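A minimal sketch of what such a performance-aware unit test could look like, assuming a made-up blend_pixels workload and a deliberately generous time budget (none of this is Krita's actual test suite):

```python
import time
import unittest


def blend_pixels(n):
    # stand-in for an image operation whose speed we care about
    return [min(255, (a + a) // 2) for a in range(n)]


class BlendPerformanceTest(unittest.TestCase):
    def test_result_is_correct(self):
        # the usual functional check
        self.assertEqual(blend_pixels(3), [0, 1, 2])

    def test_stays_fast_enough(self):
        # crude performance gate: fail the build if the
        # operation regresses far beyond its time budget
        start = time.perf_counter()
        blend_pixels(100_000)
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 1.0)  # generous budget, tune per machine
```

Run with `python -m unittest` like any other test module. Absolute thresholds like this are noisy across machines, which is exactly the "useful data on any hardware" caveat: such gates work best on a fixed CI box or when comparing relative numbers between builds.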
6th gen Intel notebook, Windows 10, Krita 5.3 vs. Leonardo.
A4, 300 ppi canvas in Krita (300 ppi in Leonardo as well, but no fixed size because Leonardo has an endless canvas).
Basic-1 brush in Krita @ 40 px.
Round Brush in Leonardo @ 40 px.
Making a single stroke for about 10 seconds
CPU load:
Krita around 7%
Leonardo around 4%
But:
Disabling "hardware acceleration" in Krita:
CPU load around 4%
Even this super amateurish manual test gives me a hint about where to look if I were to think about optimization work. It might be a wrong hint, but it is maybe worth using as a starting point.
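For what it's worth, the same kind of rough CPU-load comparison can be scripted with Python's standard library alone; the cpu_share helper and the toy workloads below are purely illustrative, not what Krita or Leonardo actually do:

```python
import time


def cpu_share(work, duration=0.5):
    """Rough CPU-utilisation estimate for a callable: process CPU
    time divided by wall-clock time while `work` runs repeatedly."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    while time.perf_counter() - wall_start < duration:
        work()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall


# a pure busy loop should sit near 100% of one core,
# while a mostly-sleeping task should sit near 0%
busy = cpu_share(lambda: sum(range(10_000)))
idle = cpu_share(lambda: time.sleep(0.01))
print(f"busy: {busy:.0%}, idle: {idle:.0%}")
```

This only measures the Python process itself, so it cannot replace Task Manager readings for a foreign app, but the ratio idea (CPU time vs. wall time) is the same thing the OS monitor reports.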
Edit:
Idle load of the notebook is around 1%.
So we have:
Leonardo 3% load
Krita 6% load with acceleration
Krita 3% load without acceleration
In the forum everybody says that acceleration is only used for panning and zooming of the canvas, but I was not doing those operations during the test. So why does acceleration double the CPU load?
Edit 2:
I am not sure if those 6% of CPU load are considered "a ton of energy". Maybe @novames00 has a much higher load on his system. EDIT: sorry, this remark is related to another thread of the OP.
I never heard about unit tests being able to do this. Unit tests check whether single units of code (like a class or a member) do what they're supposed to do (whether given input values produce the desired output values); they don't test the system as a whole. Furthermore, they run when building the project, so the only hardware you test is the dev machine or the CI environment. They don't test the final running software, and especially not on user hardware.
Agreed - it maybe depends on how one defines the term unit test. A company I know, for example, develops a low-code platform for data-flow-driven process communication. One part of the testing is performance optimization. Those tests are implemented into the unit tests; they do not only evaluate the functionality per se but also the time needed for the operations. But setting up such a pipeline is a dev project of its own, and it took a long time to eliminate the guesswork within the performance measurements. One might call these "extended unit tests". It was resource-intensive work, but well worth it in the end: they were able to identify a lot of bottlenecks and increase the overall performance significantly.
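One common way to take the guesswork out of such timing measurements is to repeat each run and use the median instead of a single number; here is a sketch with Python's standard timeit module (the measure helper and the workload are hypothetical):

```python
import statistics
import timeit


def measure(func, repeats=7, number=1000):
    """Time `func` several times and report the median seconds
    per call; the median is far less noisy than one measurement."""
    runs = timeit.repeat(func, repeat=repeats, number=number)
    return statistics.median(runs) / number


# made-up workload standing in for the operation under test
per_call = measure(lambda: sorted(range(500), reverse=True))
print(f"median: {per_call * 1e6:.1f} microseconds per call")
```

Comparing medians (or minimums) between builds, on the same machine, is what turns such numbers into an actual regression signal instead of guesswork.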
But that's how a commercial company handles this type of stuff, because on a little, weak IoT device like a machine sensor every millisecond counts. In our Krita world it's not that important. On a halfway decent PC Krita is fast enough with respect to painting. So the demand might not be there, but Leonardo shows there is potential for optimization, in case it is needed.
Performance tests can be automated to some extent, but you also need to know the target machine and rebuild it entirely. This is pretty hard for software with no specific target machine, which could run on every machine configuration imaginable. This is why developing games for a specific video game console is much easier than for PC: if it runs on your copy of a PS5, it will most likely run on every PS5, and checking and optimizing performance metrics is much easier than on PCs, where it's basically a moving target.
But we're getting too much into the difficulties of software development with this topic.