How good is an AMD Ryzen CPU for Krita?

I’m planning to buy a new PC and I’m hesitating between Intel CPUs (which I’ve been using for a long time) and AMD ones.

I’m leaning toward AMD, but I’ve read that the developers are working with Intel on development. So I’m not really sure… because Krita really is the main program I have to use.

It really shouldn’t matter. David Revoy has just gotten a new workstation with an AMD CPU, for instance: https://www.davidrevoy.com/article805/my-new-pc-arrived .


I have an AMD PC at home and an Intel one at work, and I can’t really tell a difference. They’re about the same in clock speed and core count. AMD vs. Intel doesn’t matter for CPUs in 2020; AMD is cheaper, so I would probably just go with them.


Hm, maybe I can get David to run the FreehandStrokeBenchmark; I’m still curious how it performs on Zen2 (and soon enough Zen3) based CPUs…

On paper, they should be competitive. Zen (1/1+) still had only a 128-bit internal data path, so 256-bit AVX(2) instructions ran much slower than on Intel, but Zen2 caught up with that.

There are a few AVX instructions that are still much slower on AMD; however, those are rather expensive in general and should be avoided for good performance anyway… and it should be mentioned that only a small part of Krita uses vector code (through an abstraction library, sketched below) in the first place.
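For illustration, here’s a minimal sketch of the abstraction idea. Krita uses the Vc library: the loop is written once against a portable vector type, and the build target decides the width (4 floats with SSE, 8 with AVX2). The function name is made up and the exact load/store API is from my memory, so treat this as approximate, not as Krita’s actual code:

```cpp
#include <Vc/Vc>      // SIMD abstraction library used by Krita
#include <cstddef>

// Hypothetical example: apply a global opacity to a row of float pixels.
// The same source compiles to whatever vector width the target supports.
void applyOpacity(float *pixels, size_t n, float opacity)
{
    const Vc::float_v factor(opacity);       // broadcast to all lanes
    const size_t step = Vc::float_v::size(); // 4 on SSE, 8 on AVX2, ...
    size_t i = 0;
    for (; i + step <= n; i += step) {
        Vc::float_v v;
        v.load(pixels + i, Vc::Unaligned);
        v *= factor;
        v.store(pixels + i, Vc::Unaligned);
    }
    for (; i < n; ++i)                       // scalar tail
        pixels[i] *= opacity;
}
```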

In any case, there is no Intel-specific tuned code anywhere in Krita to my knowledge.

For a desktop, I personally would totally go AMD right now; the peak power consumption of Intel’s pushed-to-the-limit-and-beyond 14nm+++ desktop CPUs is just crazy… and this won’t change until around 2022.


You may also want to check this thread: [Processor] Intel vs AMD, Raw Power x More cores?


I switched from Intel and Nvidia to AMD, mostly because of driver support on Linux, and cost of course.

Have AMD gotten their act together with graphics drivers for Linux? I recently bought an RTX 2070 because horror stories of bad support from AMD lingered at the back of my mind.

Nvidia felt like a safer bet. Yes, it’s a closed binary driver, but it usually just works.

It works like a charm on my machine. I’m very happy with my AMD graphics card so far. Nvidia gave me a lot of trouble in the past and often left my machine unusable after a driver update, among other things. Performance is also great, though that’s not unexpected after switching from a GTX 970 to the most powerful AMD card I could find a few months ago.


I know it’s a bit off topic, but does AMD perform well with Blender too? Thanks!

Oh, I just went through AnandTech’s Ryzen 5xxx review, and holy crap.
AMD seems to have worked on every weak spot of Zen2: the extremely slow PDEP/PEXT instructions I was referring to above are now very fast on Zen3, and there are many other AVX improvements that should also benefit Krita.
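In case anyone wonders what PDEP/PEXT actually do, here’s a tiny self-contained example (needs a BMI2-capable CPU, compile with `-mbmi2`); the RGB565-style mask is just an arbitrary illustration:

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

int main()
{
    // PEXT gathers the source bits selected by the mask into the low
    // bits of the result; PDEP scatters low bits back out under the
    // mask. Handy for packing/unpacking bitfields like RGB565 channels.
    uint32_t pixel = 0xCAFE;
    uint32_t red5  = _pext_u32(pixel, 0xF800); // pull out the top 5 bits
    uint32_t back  = _pdep_u32(red5,  0xF800); // scatter them back in place
    std::printf("%04X -> %02X -> %04X\n", pixel, red5, back);
    // Prints: CAFE -> 19 -> C800 (only the masked bits survive).
    // On Zen1/Zen2 these are microcoded and dramatically slower; on
    // Zen3 and Intel they take single-digit cycles.
}
```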

But AMD knows they’re now the performance leader, and that reflects in the prices…

@oliver yes, AMD performs well in Blender; you can find plenty of benchmarks on Phoronix, for example. But if you’re serious about rendering, I guess you want a strong Nvidia card to use all that compute power with the CUDA renderer…

Too bad OpenCL is in such a sad state; that’s really the one remaining reason to consider Nvidia for Linux.

Thanks @Lynx3d! And do AMD drivers come with the distribution, or do I have to install additional drivers (AMDGPU-PRO / ROCm)?

I was going to buy the new Ryzen 5 5600X, but I think I want 8 cores and less power consumption (for 8 cores)… :confused: I’m probably gonna settle on the 3700X. The 5800X is too expensive.

Anyway thanks for the info. <3

Yes, I also really want 8 cores for my next CPU, preferably at 65W TDP, but for the Ryzen 3xxx series the price jump from 6 to 8 cores is just too big, so I held off.

And the Ryzen 5600X is clearly priced against the 3800X(T), because it performs similarly despite having two fewer cores (and has no competing Intel part yet)… the only chance would be that AMD sells off the 3800 for a short while.
I don’t really expect they’ll keep it around like the 2xxx line, since those are produced not at TSMC but at GlobalFoundries, which has no immediate plans to offer a more modern fab process, so they just keep production running as long as it’s profitable.

OpenGL and Vulkan drivers are part of Mesa, so yes, they are included in all major distributions.

I don’t have any first-hand experience with AMD GPUs, but the proprietary AMDGPU-PRO as well as AMDVLK (an open-source “light” version of the PRO Vulkan driver) seem pretty much obsolete by now; the Mesa drivers get the most attention.

ROCm I don’t know; I guess it’s the best (if not the only really working) option for OpenCL.

Thanks again @Lynx3d! OpenCL, OpenGL, Vulkan… why aren’t these things less complicated :slight_smile:

If it were me, I’d probably be more worried about getting my hands on a laptop with CUDA than anything else, to begin with.

I’m curious how the AVX optimization affects actual usage. It indicates that it’s related to vector operations, but I don’t see any noticeable difference when I disable it on my current system.

Well, to be honest, I expected the difference to be more noticeable too. There are definitely a couple of bottlenecks that need to be addressed to make a “wow” kind of difference, especially for the more expensive brushes (like those using texture patterns), but it’s not so insignificant under the right conditions.

As far as I’ve discovered, there are two main aspects of painting that are vectorized:

  1. The creation of the masks for “auto” brush tips (default, soft and Gaussian…), which of course are cached for static brush sizes, but dynamic brush sizes need to recalculate them a lot.
  2. The most heavily used blending modes: the simple “over” blender that is called Normal in Krita, and the special blending used to combine dabs in Wash painting mode, called “AlphaDarken” (see the sketch right after this list).
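To make the second item concrete: “over” is just the textbook Porter-Duff compositing operator. A scalar sketch for one pixel, assuming premultiplied alpha (Krita’s actual composite ops also deal with opacity, flow and channel flags, and AlphaDarken has its own dab-accumulation formula that I won’t try to reproduce from memory):

```cpp
// One premultiplied RGBA float pixel.
struct Pixel { float r, g, b, a; };

// Porter-Duff "over": out = src + dst * (1 - src.alpha)
Pixel over(const Pixel &src, const Pixel &dst)
{
    const float k = 1.0f - src.a;   // how much of the destination survives
    return { src.r + dst.r * k,
             src.g + dst.g * k,
             src.b + dst.b * k,
             src.a + dst.a * k };
}
```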

I can say that the blending functions themselves are ~6-8 times faster with AVX2+FMA on my aging Haswell CPU, but there’s just a bunch more to be done before the brush stroke is on your screen.
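For the curious, this is roughly the shape such a vectorized blend takes with raw AVX2+FMA intrinsics: 8 float pixels per iteration, with the multiply-and-subtract folded into a single FMA. It’s a hypothetical illustration of where the speedup comes from, assuming planar float channels; Krita’s real code goes through the abstraction library and is organized differently. Build with e.g. `-mavx2 -mfma`:

```cpp
#include <immintrin.h>
#include <cstddef>

// Hypothetical AVX2+FMA inner loop for the premultiplied "over" blend.
void overAvx2(float *dst, const float *src, const float *srcAlpha, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 s = _mm256_loadu_ps(src + i);
        __m256 d = _mm256_loadu_ps(dst + i);
        __m256 a = _mm256_loadu_ps(srcAlpha + i);
        // out = s + d * (1 - a) == s + (d - d*a): one FNMADD + one ADD
        __m256 out = _mm256_add_ps(s, _mm256_fnmadd_ps(d, a, d));
        _mm256_storeu_ps(dst + i, out);
    }
    for (; i < n; ++i)               // scalar tail
        dst[i] = src[i] + dst[i] * (1.0f - srcAlpha[i]);
}
```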

Blending currently has some further limitations: it is only implemented for 8-bit integer and 32-bit float channels, and for the latter AlphaDarken is disabled due to a bug (that I just found, btw.); alpha-locking, as well as disabling some channels, makes it fall back to scalar code.
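So the dispatch effectively boils down to something like this (a purely hypothetical sketch of the fallback logic just described, not Krita’s real code):

```cpp
enum class ChannelType { U8, U16, F16, F32 };

// Only some channel layouts have a vectorized blend path so far;
// everything else takes the scalar fallback.
bool canUseVectorPath(ChannelType type, bool alphaLocked, bool channelsDisabled)
{
    if (alphaLocked || channelsDisabled)   // forces the scalar fallback
        return false;
    return type == ChannelType::U8 || type == ChannelType::F32;
}
```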

Unfortunately, fixing the float bug in AlphaDarken didn’t help much; it turns out converting the float image to your display color space with LittleCMS is REALLY expensive, and that’s eating all the performance :frowning:

I’m currently working on 16-bit integer support. In the FreehandStrokeBenchmark, which simulates the whole painting process (with 4 layers, IIRC), I’m seeing 30-60% (correction: now ~45-90%) better performance, yet that’s rather hard to actually feel in Krita.
Though as you keep stacking layers, the effect becomes more noticeable.
I’ve also managed to include alpha locking (also used for inherit alpha).

I’m also trying to figure out why Krita rarely actually reaches 100% CPU usage when painting… memory bandwidth may be a limiting factor too.

Actually, I never see any painting app reach 100% CPU per core (I use an R5 1600 and 16 GB / 2700 MHz RAM), except Corel Painter, which always uses all the cores/threads you allow it; it’s never satisfied, and you never know whether it helps, because it lags in any case.

I eventually ended up buying the 3700X, but later I found out the Intel 10700 might’ve been a better choice for me. The core count is the same, the TDP is the same, and the benchmark scores are higher; additionally, it should have better support for the AVX optimizations you mentioned. And above all, even the price is about the same (at least here).

tbh I somewhat regret buying AMD, and I’m planning to move to an Intel system in the near future and sell my AMD components. Looking back, it feels kind of odd that everyone I asked suggested AMD over Intel for CPUs.

Maybe there’s something I’m missing here. But either way, I should’ve been more careful. :neutral_face:

Hm, are you sure you compared the right CPUs?

If I look, e.g., at >this review<, the multi-threaded performance of the 10700 at 65W is quite a bit lower than the 3700X’s, and if you do what they did for the “max turbo”, the actual power usage is a good 70W higher in multi-threaded workloads, just to get a tiny lead.

Admittedly, the Intel CPU is still more efficient in light workloads; AMD could work on its idle and single-threaded power consumption some more, although 10th-gen Intel CPUs have regressed compared to the previous generation too.
Though that single-threaded performance advantage is already history with Ryzen 5xxx… but they’re still basically unavailable here; only 1-2 retailers list them, at ridiculous prices, obviously :confused: