One mask for several layers

This idea is not that original but it has been on my mind a lot lately.

[image: 001]

Imagine I place 3 colors on screen and then apply masks to them; it would come out something like this. But then imagine I need to change the masks, and not their interiors, for some reason. It would mean trying to align the masks even though none of them is visible to the others.

However, if all the masks were connected and each affected its corresponding layer, it would be much more manageable to edit.

[image: 002]

Imagine placing a color tag on each layer, using the primary and secondary colors (RED, GREEN, BLUE, CYAN, MAGENTA, YELLOW); these tags would have a good dispersion of values to be used in the mask. Then you would make a single “Mask” layer that affects the whole group: painting red would mask the layer with the red tag, painting green would mask the layer with the green tag, and so on.
Yes, this method would limit the number of masks in a group, but it would give clean gradients between the masks whenever they are needed, instead of a fuzzy limit on how much of each color is affected.
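To make the idea a bit more concrete, here is a minimal sketch of how the tag-to-mask mapping could behave, written as plain Python/numpy; the tag colors, the falloff and the function names are just assumptions for illustration, not anything that exists in Krita:

```python
# Hypothetical sketch: derive one grayscale mask per color-tagged layer
# from a single painted RGB "mask" layer (not an existing Krita feature).
import numpy as np

# The six tag colors from the proposal (exact values are assumptions).
TAG_COLORS = {
    "RED":     (1.0, 0.0, 0.0),
    "GREEN":   (0.0, 1.0, 0.0),
    "BLUE":    (0.0, 0.0, 1.0),
    "CYAN":    (0.0, 1.0, 1.0),
    "MAGENTA": (1.0, 0.0, 1.0),
    "YELLOW":  (1.0, 1.0, 0.0),
}

def masks_from_color_layer(rgb, softness=0.75):
    """rgb: HxWx3 float array in [0, 1] (the painted mask layer).
    Returns a dict of HxW grayscale masks, one per tag, where each
    pixel's opacity falls off with its distance to the tag color."""
    masks = {}
    for name, color in TAG_COLORS.items():
        dist = np.linalg.norm(rgb - np.array(color), axis=-1)
        masks[name] = np.clip(1.0 - dist / softness, 0.0, 1.0)
    return masks

# Toy example: a 1x3 "mask layer" painted red, yellow, and black.
painted = np.array([[[1, 0, 0], [1, 1, 0], [0, 0, 0]]], dtype=float)
for tag, mask in masks_from_color_layer(painted).items():
    print(tag, mask.round(2))
```

Black maps to nothing, so it acts as the universal blocker, and the falloff is what would give the clean gradients between neighbouring masks.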

Why would this be good?

  1. Imagine this: I made 3 layers, but with masks I had 6 layers instead. So if you have a lot of layers you might end up having double that amount until you merge them down or do something else, but you might want to stick with the masks for some reason, so they will keep lingering there.
  2. The cool thing about this would be that you could animate the mask like a normal layer and it would keep the edges clean for the layers above, so actually shading the upper layers would be made A LOT easier. And since it uses colors, you could even mix the masks to create blended transitions when needed.
  3. If the limit of colors applied to masks were just 6, you would separate things into groups, like a group for character 1, then character 2, or prop 1 and prop 2.

Another cool thing to implement, if this appeared, would be the ability to change the display color of each mask; pure colors are very aggressive on the eyes, but on the back end it would operate on the same principle as if they were pure, something like mask1=color1, mask2=color2. If somehow more colors were possible it would be even better, I think.

hope you guys like the idea, cheers

Interesting idea to link a layer with a type of color label. May I know the use case for this? From what I understand you want to transform or edit the masks together. If it is just transforming them, then clone layers and alpha inheritance can be a workaround.

You make three mask shapes and add them to a group. Then clone each of the three and put the clones as base layers in separate groups, enable alpha inheritance, and then add the color layer. I am just guessing this; I need to check if this works out :slight_smile:

Not sure if I explained myself well enough then. I went out to look for some animations and found these videos that show things in motion.

Honestly, this technique is applied to things in motion or when you have several passes to composite, but I have the growing feeling that it would make masks much more efficient even for plain illustration, which is the reason why I proposed this.

I get that feeling even by watching sakimi-chan draw in her tutorials; her constant selections seem so time-consuming, as she needs to paint and repaint on both sides of the selection. It would avoid a lot of hassle, compact a lot more info into fewer layers, and changes would be smoother between different materials.

I have a 3D and compositing background, so things like that just kind of make sense to me. Black and white is binary in what it separates, so it can only separate 2 things at a time; it is only good when you have things already done.

Sometimes I want to separate the skin from the clothes and the edges just annoy me, or the arm that is in front of the body where each has its own rendering type, and all this while keeping the character separated from the background too. It might be problematic to implement considering the layer stack method, but it feels more efficient. The tag idea would work around that and catalog the masking scheme visually for the user.

Though if it were done with nodes it would be really top tier, but I don’t know how that would even work out; I did not think about that much. For layers, Krita’s system is already the best as it is, though now that I think about it, nodes for a painting program sound cool too :U It would allow using any color for the masking since it would just be a channel flowing. But I think I would want more Python before any rewrite for nodes :expressionless:

It seems like what you are after is something like Cryptomatte. Basically an automated ID matte generator that assigns every pixel to one of a set list of unique colors that represent the various IDs. Blender got support for that with one of the 2.8+ releases. I haven’t had a chance to play with it yet, but it seems rather promising. Though I don’t know if that works with straight 2D, that particular solution tends to be a render pass for 3D. In the comic industry they do something similar, usually manually, and it’s known as flatting. The unique color regions allow for fast selection based masking.
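For reference, flatting-based selection really is just an exact color lookup over the flats; a tiny illustrative sketch in plain numpy, not tied to any particular program’s API:

```python
# Minimal sketch of flatting-style selection: every region in the "flats"
# image is filled with one unique solid color, so selecting a region is an
# exact color comparison.  Purely illustrative, not a Krita API.
import numpy as np

def select_by_flat_color(flats, color):
    """flats: HxWx3 uint8 image of solid color regions.
    Returns a boolean HxW mask of all pixels matching `color` exactly."""
    return np.all(flats == np.array(color, dtype=np.uint8), axis=-1)

flats = np.array([[[255, 0, 0], [255, 0, 0], [0, 128, 255]]], dtype=np.uint8)
print(select_by_flat_color(flats, (255, 0, 0)))  # [[ True  True False]]
```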

There are probably some proprietary AI solutions out there for automatically generating ID mattes from 2D images, but I tend to doubt they are readily available to the general public (I haven’t heard of any yet anyhow). Most of what you see in that second video is likely being done manually with some clever use of selections and filters to speed up the process.

In Krita I think you just need to leverage the layer stack and have a group for each region you want a mask for and paint those masks on the bottom layer of each of those groups (this can double as your local color fill as well), then have the other layers within inherit that base layer’s alpha. Should be able to add filter masks and such to the groups to affect everything within. That is probably about the best you can do for now.
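A rough scripted version of that setup, assuming the libkis API names (createNode, addChildNode, setInheritAlpha) behave the way I remember; worth double-checking against your Krita version:

```python
# Rough sketch of the group + inherit-alpha workaround using Krita's
# Python scripting (libkis API as I understand it; verify the calls
# against your Krita version before relying on this).
from krita import Krita

app = Krita.instance()
doc = app.activeDocument()
root = doc.rootNode()

for region in ("skin", "clothes", "background"):
    group = doc.createNode(region, "grouplayer")
    root.addChildNode(group, None)

    # Bottom layer of the group: paint the region's silhouette here,
    # it doubles as the local color fill.
    base = doc.createNode(region + " base", "paintlayer")
    group.addChildNode(base, None)

    # Layers above only show where the base layer has alpha.
    shading = doc.createNode(region + " shading", "paintlayer")
    shading.setInheritAlpha(True)
    group.addChildNode(shading, base)

doc.refreshProjection()
```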

Though maybe I’m just not following with what you are after.

Can you please explain the exact problem you are trying to solve with this new feature? Like, what is impossible or difficult (or just tedious) using the current system? Please show the use case: what picture you want to paint, what you have, what you want to achieve, and what you tried in order to achieve it that failed (or required a lot of workarounds; of course, describe the workarounds too).

Not sure if you’ve already read it, but this is a very insightful manual page that explains why I’m asking about workarounds and issues in the current system instead of details about the new feature you thought of: https://docs.krita.org/en/untranslatable_pages/new_features.html I can see you wrote a bit in your last comment, but to be honest I would really like some elaboration with examples, because I’m not sure I get what you’re saying.

Yup yup, you’re correct.

Have fewer layers in the layer stack; I have said it before and illustrated it too in the first post. This idea does NOT aim to reinvent the wheel; there are many “workarounds”, so to speak, that do their work well, but they scale very badly, I think. This aims only to make the wheel go faster, but it is still just a wheel, or in this case just a mask. And a mask only does masking things, I guess, no need to explain there.

Well, I will show you an image I did and its layer stack, but I will not bother myself with “get good” comments; I know how to lower the count and all that, and I know my method of getting there is poor on its own ~__~ but I think it came out well regardless.

From memory, I think at a certain point I had 200 layers or something. I kept it like this to affect each channel individually as I tried to make a non-destructive workflow, but it became highly inefficient the more layers I made, and this grew exponentially. As you can imagine, changing the limits of one layer has many implications.

When I see Sakimi-Chan making quick masks I think of the same thing, so you can watch a video of her doing that, or of any other good artist. I am certainly not in that category.

  1. But what is the purpose of working on all channels separately?

  2. Channels on layers don’t affect each other; why is it important to use separate layers for them?

  3. Why can’t you use layers with all other channels disabled except for those you need?

  4. What exactly makes you use so many layers now? Can you describe an example where you feel you need to create so many layers? Maybe, for example, tell me how you’d draw an orange ball (orange because it’s none of those colors)? How many layers would you have, how would they be separated, etc., so I can more easily see what effect you are exactly talking about?

  1. Editing their influence.
  2. They can actually; I did the math for it and the values come out right. Still off topic.
  3. The Channels docker does not work, or at least I haven’t managed to make it work yet; for me it is a useless docker. Still, even if it did, it would not be usable for what I tried to do. I know how to use channels in PS and all that jazz.
  4. As I have said many times by now, I place a given material on a layer and render that, but if you change its limits you need to delete and reconstruct, and if you do it all in one layer you need to do that process on both sides, which is something I find bad in illustration. You should work the form and then render correctly where you place your limits, but I guess everyone has their own method. I know people who work in 1 layer will find this useless, but I am sure once you try animation you might think twice about that.

I gave an example where I did it very wrong because I came from post-production. But I still use many layers: usually one for each material or section that overlaps, and then the passes for the composite on top of all of it: AO, projected shadows, light, temperature, stuff like that. Not 200, but still quite a lot to manage, and masks scattered randomly through it. My very first post shows this very abstractly, but it is all there.

You seem very focused on the channels thing, when I am just talking about masking and how to make it scale better in a layer stack as the number of layers grows; you can use a mask even without the need to expose channels on the layer. If the rotoscoping examples don’t show you the use of it in animation, and how edges are not set in stone for rendering after movement is made, I don’t know what else to say to explain it better or how to show that it could be a good way to make masking more efficient. Maybe it is just not.

I just had an idea; I am not here to fight over it. If you guys like it, use it; if not, don’t. I will refrain from writing these walls of text now; I am just repeating myself for the third time.

@EyeOdin I asked all of those questions to get a better idea of what you have in mind, not to “fight”. I’m a developer; I need details to get the job done. To make Krita the best for everyone we need to check all the feature requests, understand what the author had in mind, how difficult it would be to implement or how much time it would take and, finally, how much advantage it would bring to Krita users.

If you mean masking in general and the channels thing is less relevant… then I have two important, quick questions:

If you know both of those features and you still aren’t satisfied, please write that too. At first I thought the channel separation was the sole reason for this feature, which might’ve been what caused my confusion, but since it isn’t, I can reread the thread carefully tomorrow - right now it’s a bit too late for me to do so; this is just a quick comment before going to sleep.

  1. I am aware of adding masks to groups
  2. I am aware of using inherited alpha

As you can imagine, all of those are binary masks (black to white), so all of them create the same issue: if you want more areas to be masked you need more layers for each one; they just propagate differently down the stack. Placing a mask on a group is one of the best solutions I have come across yet, and then having my comp group contain all the object groups.

It would be like having one universal “Black/0” that blocks everything, and the colors being different “Whites/1”, where each white connects to a different layer; if it is not the right color for that layer it would be considered black too.

I just feel it could work in illustration too, in the blocking phase, and a lot more in animation, going from rough form to final render.

There is no fight :slight_smile: We ask these questions to everyone, simply because we would like to add our own take to the feature request and maybe brainstorm a little, so that the feature that gets implemented is well thought out and covers all cases.


I get what you’re trying to say, one mask for each unique area / object, but the normal grayscale masks are not binary, they allow for transparency (meaning in-between values) with the bit depth of the Alpha channel, so an 8-bit mask already has 256 values of transparency.

If you’re proposing RGB masks you will run into one of these limitations:

  • If each color channel is supposed to represent a mask, then you are always combining at most 3 (grayscale) masks into one. You can’t add more, because that would require additional channels.
    Do you understand what I mean? Your example with 6 colors, the 3 primaries and 3 secondaries, wouldn’t work. Let’s assume you are already using red, green and blue on the mask and want to add a yellow component for the yellow-tagged layer. But yellow in RGB is a half-half mixture of red and green, so if you’re painting areas in yellow then you’re extending both the red and green masks; you cannot decorrelate yellow from the primary colors that make it up (see the sketch after this list).

  • In case you want arbitrarily many masks combined into one you have to identify each mask with a color ID, but then you lose any and all transparency because any variation of that color could and would represent a different mask instead of transparency. Only one exact color value for a mask, meaning they are truly binary in this sense, no soft transitions or anti-aliased edges are possible.
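A quick numeric illustration of the first point, using plain numpy just to make the argument concrete:

```python
# Illustration of the first bullet: if each RGB channel of a combined mask
# is one grayscale mask, painting "yellow" cannot be kept separate from
# the red and green masks.
import numpy as np

combined = np.zeros((1, 4, 3))          # tiny 1x4 RGB mask layer
combined[0, 0] = (1.0, 0.0, 0.0)        # painted red   -> red mask only
combined[0, 1] = (0.0, 1.0, 0.0)        # painted green -> green mask only
combined[0, 2] = (1.0, 1.0, 0.0)        # painted "yellow" for a 4th layer...

red_mask, green_mask, blue_mask = (combined[..., c] for c in range(3))
print(red_mask)    # [[1. 0. 1. 0.]]  <- the yellow stroke extended the red mask
print(green_mask)  # [[0. 1. 1. 0.]]  <- ...and the green mask as well
```

The data itself cannot tell a deliberate yellow stroke apart from overlapping red and green strokes.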

Cryptomattes, which are now supported in Blender and other 3D and compositing software, are actually very complicated in order to solve those issues; they have arbitrarily many additional channels to store individual information about transparency and other things for every object ID. You can’t simply modify Cryptomattes with tools that are designed for RGB.

If that is the case, what about an abstract channel then? And the color is just assigned as a reference so you can distinguish it?

Like a layer where you can add any number of channels, with perhaps a limit of 10, and select a color for each one or do a random selection for all of them through a seed. Since they don’t connect or affect one another beyond the fact that they are painted on the screen, they would resemble vertex weights but on pixels, if that makes any sense. If you add up all the influences on one pixel they would have to add up to one in total, regardless of how many channels affect it.
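Roughly, the data model I imagine would look like this (a throwaway numpy sketch; the channel limit and the normalization rule are just assumptions, nothing Krita-specific):

```python
# Sketch of the "vertex weights on pixels" idea: a mask with N abstract
# channels per pixel, where the influences at each pixel are kept from
# summing to more than 1.  Purely conceptual.
import numpy as np

def normalize_weights(weights):
    """weights: HxWxN float array of per-channel influences in [0, 1].
    Scales each pixel down only where the channel sum exceeds 1."""
    total = weights.sum(axis=-1, keepdims=True)
    scale = np.where(total > 1.0, 1.0 / np.maximum(total, 1e-8), 1.0)
    return weights * scale

# One pixel where two of four channels overlap at 0.8 and 0.6:
pixel = np.array([[[0.8, 0.6, 0.0, 0.0]]])
print(normalize_weights(pixel))  # ~[0.57 0.43 0. 0.], influences now sum to 1
```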

Then you need to create tools that can 1. select the values from those abstract channels from the combined mask in the first place and 2. let you edit them, ideally paint on them. You couldn’t use the color selector to pick colors corresponding to the channels since they’re purely imaginary and consequently couldn’t use brushes, and you wouldn’t have a way to intuitively display blending between all mask channels at the same time. Any imaginary color needs to be represented by a visible one, but there are no unused color combinations left that could be displayed after using up red, green, and blue. You aren’t thinking of the cases where independent transparency of each mask channel is needed. Again, this is why Cryptomattes are difficult. You can’t truthfully display all information at the same time in a Cryptomatte containing more than 3 mask IDs.

I am just throwing out thoughts but…

With the current paint mask you are already forced into greyscale mode, so entering a set of predetermined colors instead does not seem like a stretch for the user.

Since Krita relies so much on its brush system, why not use it? Usually ZBrush just adds a new brush to do a new thing they want, and Krita has done that before too.

For the display, that does not sound awfully hard either. If you limit what is being displayed at a time, you can easily push it out without even needing the whole RGB, and just use regular greyscale with a clone for the user display once it is done: you tap a channel and it shows that mask. But it would become more of a mask hub than anything else, and even so, having them connected would still do the same job; I am just not sure if it would feel right to use and not be too ethereal. I think no one would like it like that: clicking a channel inside the mask to see it, instead of seeing the whole mask composed on one layer. I do that all the time, but I doubt artsy people would find any use in it.

How would you select the correct channel to switch to on the canvas? Any pixel can contain any combination of all the used channels; for example it could contain 59% of color ID 1, 16% of ID 2, 20% of ID 3, 5% of ID 4. Which does it switch to, the dominant one? What if the user wants to edit any of the others?

That leaves the problem that the mask can never be displayed in its entirety. You couldn’t represent more than 3 channels at the same time, so the layer thumbnail or isolated view is of limited use, you’d need to constantly switch between all channels.

You’d also lose the ability to edit across channels, or at least make it complicated. What if I wanted to edit 2 channels simultaneously? What if I wanted to expand the area ID 3 covers with a brush, but also reduce the values on other channels, so I don’t get any overlap? - A special blending mode?

Cryptomattes are usually displayed to the user in the dominant pixel ID interpreted as a unique solid color, but that method doesn’t allow seeing any blending colors between the channels, even if the mask does store that information.
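A toy version of that kind of dominant-ID preview, assuming the per-ID coverages are already available as separate channels:

```python
# Toy version of a dominant-ID preview: each pixel is shown as the solid
# color of whichever ID has the highest coverage, so any blending between
# IDs is invisible in the preview even though the data still holds it.
import numpy as np

ID_COLORS = np.array([(255, 0, 0), (0, 255, 0), (0, 0, 255)], dtype=np.uint8)

def dominant_id_preview(coverage):
    """coverage: HxWxN array of per-ID coverage values.
    Returns an HxWx3 preview image using one solid color per ID."""
    dominant = coverage.argmax(axis=-1)      # index of the strongest ID
    return ID_COLORS[dominant]

# A pixel that is a 60/40 blend of ID 0 and ID 1 is shown as pure ID 0.
coverage = np.array([[[0.6, 0.4, 0.0], [0.1, 0.2, 0.7]]])
print(dominant_id_preview(coverage))
# [[[255   0   0]
#   [  0   0 255]]]
```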

R+LMB, as is natural to do, or click the correct channel in the layer stack.

Well, by normalizing values, obviously. Consider that each channel can reach 100% influence; imagine you have 3 channels in one pixel whose total sum exceeds 100% (here the maximum it can reach is 300%), which is usually not good behaviour. This could mean it is a mask shared between channels, but by default it should be adjusted. If 3 channels each had 100%, you would end up with 3 channels at 33.33% as a process of normalization, and this adjustment can be done after the painting; yes, it is normal to do that. If you need to pick a dominant one, it would be the channel being edited; that sounds like basic editing-software philosophy to me.
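Something like this hypothetical brush behaviour is what I mean: painting on the active channel takes influence away from the others so the pixel never goes over 100% in total (just a sketch, not an existing tool):

```python
# Hypothetical adjustment after a stroke on the active channel: the
# remaining channels give up influence so the pixel stays at <= 100%.
import numpy as np

def paint_on_channel(weights, active, amount):
    """weights: 1D array of per-channel influences summing to <= 1.
    Raises the active channel by `amount`, then scales the other channels
    down so the total stays at or below 1."""
    w = weights.copy()
    w[active] = min(1.0, w[active] + amount)
    others = [i for i in range(len(w)) if i != active]
    excess = w.sum() - 1.0
    if excess > 0 and w[others].sum() > 0:
        w[others] *= (w[others].sum() - excess) / w[others].sum()
    return w

# Pixel shared by channels 0 and 1; a 50% stroke on channel 2 takes the
# overflow away from the other two proportionally.
print(paint_on_channel(np.array([0.5, 0.5, 0.0]), active=2, amount=0.5))
# -> [0.25 0.25 0.5 ]
```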

not really, I gave the answer to this before.

You can’t pick 2 colors simultaneously… you can only add percentages of other channels, and that makes a whole new color every time. How do you even imagine something that would need that? If you are adding a percentage on one channel, you are subtracting it from all the other channels at the same time. You want 2 brushes?

Cryptomatte is just a faster way to do something you can do by hand if you come from a 3D environment, but if you come from raw footage you have to do it yourself by hand, which is a pain (my video examples were all examples of that); Cryptomatte only automates that process. Regardless, if you do it by hand you will mostly be using the red color “only”, but you can change that if you want to. And this works fine because it is a node-based system: you are not limited by a layer stack by default, while other programs like After Effects, which are layer-based, use tags to jump across layers. Cryptomatte does not show the alpha because you don’t want to see it when you use stuff like that; it is a mask after all, only the influence of that material matters, and underneath it is the world that is always black until you cover it with something. I have spoken of materials before; assigning a value to a pixel should not be this hard.

Now I am getting curious about these counter-arguments; I will learn C++ to see what is making you all fussed about what seems like basic math. If Qt is in the middle, I guess I can understand your limitation since it is not your library.

Here’s a crude example of what I’m trying to make you understand:


This Blender scene shows a finished render on the top, and the corresponding Cryptomatte information taken from the objects and interpreted as a color mask on the bottom. In the lower right of the render you see a blue cube that’s positioned behind a mostly transparent sheet. On the displayed mask that cube is not visible, because it can only display the material ID for the transparent sheet in front of it. However, Cryptomattes guarantee that they can store 2 IDs per pixel, per rank (depth value). In this example there are 2 ranks, so it should be possible to have up to 4 overlapping object masks stored in the Cryptomatte.
When you click on the Add or Remove buttons in the Cryptomatte node, you get an eyedropper tool that lets you pick a color ID from the mask. But if I click on the area where that hidden cube should be located, I get the mask output for the sheet instead, no way around it.
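For what it’s worth, here is a simplified sketch of how a matte gets pulled out of that kind of rank data; real Cryptomattes encode the IDs as hashed floats inside multichannel EXR layers, which this toy version ignores:

```python
# Rough sketch of how a matte is pulled out of Cryptomatte-style data:
# each rank stores an (id, coverage) pair per pixel, and the matte for one
# ID is the summed coverage across ranks.  Simplified illustration only.
import numpy as np

def matte_for_id(ranks, target_id):
    """ranks: list of (id_map, coverage_map) pairs, each HxW.
    Returns the HxW coverage of target_id accumulated over all ranks."""
    matte = np.zeros_like(ranks[0][1], dtype=float)
    for id_map, coverage_map in ranks:
        matte += np.where(id_map == target_id, coverage_map, 0.0)
    return matte

# One pixel: rank 0 holds the transparent sheet (ID 7) at 70% coverage,
# rank 1 holds the cube behind it (ID 3) at 30%.
ranks = [(np.array([[7]]), np.array([[0.7]])),
         (np.array([[3]]), np.array([[0.3]]))]
print(matte_for_id(ranks, target_id=3))  # [[0.3]]
```

This is why the hidden cube’s coverage is recoverable even though the on-canvas picker only ever offers the sheet’s ID.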

And here’s the proof that the matte has the information from the hidden object:


Now I use the IDs generated from the materials instead. There are 2 other cubes visible with the same material, so if I pick one of those I also get the hidden cube.

This is the problem when you want to display abstract color data.

  • Either you choose to present each ID as a unique solid color like Blender does, but then you can’t visualize any color blending at the same time.

  • Or you choose to show each ID in its own independent channel. But there are only 3 color channels that you can use to display on a monitor, or even distinguish with your eyes - red, green, and blue. If you add a 4th, a 5th, and more color channels, then you can’t display them all together as a combined layer, unless you want to lose the unique mapping from data to color. Again, let’s for example say the 4th channel is interpreted as cyan. Cyan is already green and blue mixed together, so how could a user be sure if they’re seeing the cyan mask channel, instead of green and blue overlapping to coincidentally create cyan? Same issue with any color or variation in brightness.

  • Or you stick to just displaying a single channel in grayscale, like you said. But then what’s the benefit? That makes you have to switch between channels you want to edit and you need some mechanism to reliably and easily do just that. Using on-canvas color picking to switch doesn’t work if there’s no area that your target channel already covers. So you’d need a new UI element that lists all available channels for each mask, and it would have to be adaptable, because each mask could have a varying amount of channels or possibly use different custom colors.

To make this clear, I’m not trying to discourage your efforts. These are the obstacles for which I don’t have a good solution, but maybe you or any of the experienced Krita devs already do.

That mask in Blender is right. You are comparing a rendered image with a crypto mask made from materials; it will consider the z-depth of the material, not the light going through it. If you want it with the transparency you will have to create a secondary pass to isolate that, or use really plain materials that react to light but not that much. That is why various passes exist on a single render; if you don’t unlock it, you will not see past the transparency of that face.

You keep referring to Cryptomatte when I keep speaking of vertex weights in materials.

I really don’t care much about “how” I paint the mask compared to the mask actually being “there”. Truth be told, painting 6 layers at the same time or just 1 at a time seems like a more neutral point to debate. If you know where your limits are, you don’t need to think about the masks anymore.

Even with a plain limit of three colors it could still be more manageable; the more you talk about it, the more it seems feasible, even if limited. But that would make some ugly-looking masks for sure too :\

It’s about hidden information: the displayed matte doesn’t show you everything. The transparency mask for that cube is already in those Cryptomattes, both the object one and the material one, so no other passes are needed; but it’s not accessible from the canvas in the object matte, you’d need to know the exact ID value to filter for it.

Vertex weights work like a single channel, all the vertex maps are separate and you only edit one at a time. Blender can display the vertex influence as a gradient map from blue to red, but that’s the same principle as grayscale.

Yeah again, 3 color channels can work. Think about how each color channel is a vector pointing in a unique direction (axis) and you’re creating a 3-dimensional color volume this way. If you try to add more than 3 vectors, you can’t point those additional vectors anywhere that you can’t already reach by combining the first 3. It would work if you could make it hyper-dimensional with a new axis for each new vector, but in the end you are forced to convert it down to only 3 dimensions (3 color channels) and lose that information.
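Putting the vector argument in numbers (just basic linear algebra, nothing Krita-specific):

```python
# The vector argument in numbers: any 4th display "axis" such as cyan is a
# linear combination of the 3 existing RGB axes, so it adds no new
# dimension once the data is forced into 3 display channels.
import numpy as np

red, green, blue = np.eye(3)           # the 3 independent display axes
cyan = green + blue                    # a hoped-for 4th axis

# The rank of the stacked vectors stays 3: cyan is not independent.
print(np.linalg.matrix_rank(np.vstack([red, green, blue, cyan])))  # 3

# So a pixel showing "cyan" is indistinguishable from the green and blue
# mask channels overlapping at full strength.
print(np.allclose(cyan, 0 * red + 1 * green + 1 * blue))  # True
```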