Feedback about the inclusion of the new Fast Line Art feature

A post was merged into an existing topic: Introducing a New Project: Fast Line Art

The only AI that can come along that is “good” is when they give up on the image generators. The tech can be used to assist artists, but it will always be exploitative one way or the other if they keep trying to force us into giving in to their hyperconsumerist demands. I’ve seen people claiming that this speeds up the process so we can produce more and faster, but you only need to produce more and faster because our work has always been devalued, and it’s being devalued even more with genAI products.

Anything that normalizes image generators is a detriment to our careers and only feeds into the idea that art is not a “real job”.

There will never be an ethical image/video/3D/text generator, because the whole philosophy behind it is flawed. It’s an exploitative product, and that should not be all that AI has to offer.

Even this Krita feature feeds this philosophy. It serves to normalize it.

Also, they are partnering with Intel, which has a foot in the genAI gold rush. The dataset is opt-in artworks only, but what does Intel get from it? Not the artworks in the dataset, I believe. How about data and know-how that can help them fine-tune other genAI models?

It’s getting harder and harder to recommend substitutes for Adobe products. I was preaching Krita for two years because of how it kept away from this whole AI debate and prioritized artists instead of gimmicky features. There are already other ways to convert a sketch into line art that don’t need to give in to the AI hype: better tools that keep the artist’s intention instead of taking away their creative decisions in the process.

You can see this whole thread only serves to show how artists are trying to rationalize genAI PRODUCTS into something they are not: tools. If the stolen dataset is the only thing keeping you from using genAI image generators, I beg you to reconsider. These products are made to devalue our work, eliminate career jobs, and substitute them with underemployment for underpaid and overworked workers.

3 Likes

I’ve read that before in this thread… what, in your opinion, is genAI?
What are the points that make it unusable for you?

Also, you’re making unfounded assumptions about Intel’s sponsorship.

I know the question is not for me, but mechanizing my process further is something that just causes me pain. By the way, the argument that it’s too late and that the professional market demands it is not something that is good in any way; as workers, we should have the choice to control our craft, not be treated as machines on a Fordist production line. We should be able to fight against oppression and defend ourselves from this…
Again read this : https://www.thenation.com/article/culture/fordism-ai-art-dall-e/
Anyway, I think you should use the CC0 Mitsua dataset as a base and only take volunteered work if extremely needed, because there can be fraudulent submissions in there.
Mitsua here: Mitsua/mitsua-diffusion-one · Hugging Face

A technology whose only profitable products are based on stolen data, and even if they weren’t, it still keeps the same general idea: devalue art, artists, and their process in the hope of replacing jobs so multibillionaires can profit more with lower costs.

What makes it unusable is the idea of giving up a whole step of the process to automation in this case; for other genAI products, it’s the whole process itself. You keep saying line art and filling base colours are boring, but they are part of the process; the artist’s intentions are in them as much as in the final work. By the same logic, rendering, colouring, painting, sketching, and thumbnailing are also boring, because you repeat all those steps as much as you would repeat line art in every artwork you make.

I’m making assumptions, yes; that’s why I put quotation marks at the end. We’ve seen lots and lots of examples of how big tech and studios try to pass themselves off as ethical by using loopholes in ToS, operating in gray areas of the law, and misrepresenting their products through deceitful marketing. Take Adobe Firefly claiming to be an ethical model, when the Adobe Stock contributors could never have consented to their work being used to train genAI, and then Adobe partnering with the same main genAI companies that started the whole thing in the first place. Artists to this day still completely believe Firefly is ethical…

I understand all the precautions around Krita’s Fast Line Art feature, and I read all your explanations about it, but I just don’t agree with the general idea of automating a process in this way and labelling it as a boring step to be skipped.

PaintTool SAI has a “vector” layer for line art that keeps pressure dynamics just like the raster layers, and you can edit every stroke you make. It serves the purpose of speeding up the process by merging sketching with line art: instead of automating the line art process, you can sketch with the mentality that you are also doing line art at the same time, since you can reshape anything you’ve drawn. It serves as an assistive tool for the disabled as well, even more so than automated line art, which needs a sketch in the first place.

Instead of an autogenerated line art feature, why not implement something like QuickShape in Procreate and Infinite Painter? It is far more intuitive and time-saving than generated line art. It also serves to assist, you have finer control over the final line art, and it completely keeps your artistic intention. In Infinite Painter you can go as far as editing the thickness along the brushstroke, duplicating it, or switching to another brush on the fly before accepting the changes and having the app commit that stroke to the canvas.

5 Likes

They can’t use existing datasets because this architecture specifically needs closely matching pairs of sketch and line art, and since the dataset is going to be tiny, it’ll be possible to spend some time verifying each image individually.

1 Like

This is why I think anything coming from AI-generated outputs to “speed up the process” is a gimmick. It truly does not assist artists as a tool: it takes away artistic intention, it prioritizes hyperconsumerism, it devalues our process, and it gives room for companies, studios, and clients to demand more and faster while we are paid less and less, “because the tool allows you to produce more”.

5 Likes

I’ll go ahead and put in my two cents on this feature, without responding directly to some of the crazier things I’ve seen posted here.

I see a lot of people asking why this feature should exist when “real” artists wouldn’t need or want to use it (with a lot of real artists saying they actually would use it).

First of all, the term “artist” is extremely broad and covers a lot of people who do things very differently than other artists, so anyone who says that some process or method isn’t real art is being completely disingenuous. See the banana duct-taped to a wall, for an extreme example.

Second of all, Krita isn’t just used by “artists.” I’m a game designer who uses Krita to make/edit art for my games. In a way that makes me an artist, but the art is a means to a different end, and so I don’t care if I use “artist methods” to make my art. Anything that makes my job easier/faster is welcome to me, and this feature looks like it could fit that bill.

That being said, like most of you, I despise generative AI, both because pretty much every example of it I know of has been trained unethically, and because almost everything I’ve seen generated by it feels soulless. I know many game designers are using gen-AI tools, and I find that a bit disturbing, but unsurprising.

This tool is NOT generative AI. The way I see it, it’s almost identical to the G’MIC filters already in Krita, just that the filter matrix/pattern will be created by a neural network trained on a small set of data specifically donated for this purpose. But since it’s still ultimately a filter, it can’t create anything out of thin air. I could be wrong on how it works (I haven’t studied the papers linked above), but that’s my understanding based on what I have read.
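To make “it’s ultimately a filter” concrete, here’s a toy sketch in plain Python. The 3×3 kernel weights below are hypothetical, made up purely for illustration (the real feature learns its filters from the donated data, and the network in the paper is much larger), but it shows the key property: every output pixel is just a weighted sum of a small neighbourhood of input pixels, so blank regions of the sketch stay blank.

```python
# Toy illustration of a convolutional "filter": every output pixel is
# a weighted sum of a small input neighbourhood -- nothing more.
def conv2d(image, kernel):
    """Naive 'valid' 2D convolution over nested lists of floats."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    s += image[y + i][x + j] * kernel[i][j]
            row.append(s)
        out.append(row)
    return out

# Hypothetical line-sharpening weights, NOT the actual trained model.
kernel = [[ 0, -1,  0],
          [-1,  5, -1],
          [ 0, -1,  0]]

sketch = [[0.0] * 8 for _ in range(8)]
for x in range(2, 6):
    sketch[3][x] = 1.0  # one rough horizontal stroke

lines = conv2d(sketch, kernel)
# Rows of the sketch with no ink anywhere nearby come out as pure zeros:
# a convolution cannot invent content where the input is blank.
```

A trained network stacks many such learned kernels over several layers, but the same locality holds, which is why it can closely follow the sketch yet can’t conjure details that aren’t there.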

The last thing I’ll add is that one criticism about this I’ve seen that I find somewhat valid, is that it might take developer time away from other features that people want more. However, since Intel is paying for the feature, the Krita Foundation can actually hire on someone new to work on this, so it doesn’t take away from the people currently working on features people want. I haven’t gone onto the developer IRC in a long time, so I don’t know if that’s been discussed or not, but hopefully it’s something they’re considering (I recently finished a project, so I’m available :wink: ).

16 Likes

Of all the helpful commentary that has been offered in this thread, this bit here resonates with me.

11 Likes

Because no one seems to be able to read anymore (sorry, but this is getting painful to see), I’ll say it out loud: this isn’t GenAI, it is based on a paper from 8-9 years ago (2016; for context, phones still had headphone jacks at the time). If you want, you can read the paper in question here: https://esslab.jp/~ess/publications/SimoSerraSIGGRAPH2016.pdf

To quote the original announcement:

It’s not a generative AI. It won’t invent anything. It won’t add details, any stylistic flourish besides basic line weight, cross-hatching or anything else. It won’t fix any mistakes. It will closely follow the provided sketch. I believe it won’t even be possible for a network of this size and architecture to “borrow” any parts of the images from the training dataset.
- Introducing a New Project: Fast Line Art

  • No new details will be generated, nor will it have the ability to generate new details
  • No errors in the sketch will be fixed, nor will it have the ability to fix errors in the sketch
  • It will follow the original sketch closely, because that’s all it will be able to do

Intel is also only providing the funds to develop and train the model, not any data. The dataset is based on whatever is donated to this single post (in part because the model doesn’t require that large of a dataset, according to the paper). Yes, a single post: Call for donation of artworks for the Fast Line Art project - #9 by YRH

9 Likes

I haven’t posted in the thread yet because of all the fighting, but both @Voronwe13 and @StandingPad worded exactly how I see it. This is not genAI, and the feature is based on a publicly available paper that has been linked (the paper is https://esslab.jp/~ess/publications/SimoSerraSIGGRAPH2016.pdf just in case the link is lost). This is not growing legs, running around, and using other people’s work without what I call the Three Cs (contact, consent, and credit for any artist whose work is used), like all the genAI stuff out in the wild. It runs on your device, using your sketch, in a localized manner. It’s also stated that artists can provide their work of their own free will.

11 Likes

My impression is that G’MIC already has some filters composed of neural networks

4 Likes

I probably wouldn’t find this feature helpful personally, because my sketches never look anything close to the final lines, if I even draw them. If other people find it helpful, I wouldn’t judge them for that, because it is their choice to use what they want, especially if it helps them, maybe because they have time constraints or a disability. This isn’t anything like the generative AI tools, as far as I understand, so I don’t have a problem with it being in Krita; I use the colorize mask quite a lot because I hate colouring things in, so it would be hypocritical of me.

4 Likes

I understand the current name is temporary, but might a name change reduce people jumping to wrong conclusions?

Depending on each individual’s process, how they perceive the term “line art” can differ, from purely tracing over the sketch to making important creative decisions such as line thickness, which lines to keep, etc., with regard to lighting and depth. I am in the latter camp in how I perceive line art, and I suspect many who are against this feature are too.

Although this is still up to speculation, after going through the description I believe the feature is generating “line art” more akin to the former, where you trace over only for the sake of cleaner lines. So maybe reuse the same “Rough Sketch Cleanup” name from the paper, or “Sketch to Lines”, dropping the term “art” to get rid of the notion that this makes creative decisions for their art.

AI is quite the minefield with all the speculation; I oppose using it to replace decision-making processes, but I wouldn’t like to see artist support for Krita go down because of it.

8 Likes

I never liked doing line art, but surrendering an artistic process that requires a lot of knowledge and creativity to some neural network is not the way.
It’s the process where you refine lines and, most importantly, add details.

People defending this project will say it will only follow your sketch if it’s clean enough. OK.
So whoever uses it will get a subpar result that they need to correct, wasting even more time than it would take them to draw it themselves.

Utter waste of dev time and resources. But if Intel funded the project… cool, get that $$ I guess.

Artists need to be encouraged to grow, not automate >_>

1 Like

As a traditional artist, I consider that digital line art is quite cheating already, considering the work of tracing a line is a pure expression of movement: no stabilizer, no Ctrl+Z, no easy eraser. So I’m a little bored by the comments speaking about art in this case. Or you need to refine your definition of art.

1 Like

“Digital line art is quite cheating” - oh god, if you really think so…
“Tracing a line”: here’s an example of my line art (black lines) and my sketch (in blue). Does that look like traced lines with no creative work done?


7 Likes

That’s what I don’t personally understand.
“This feature doesn’t add any creativity or detail”
That’s what line art is!? It’s the stage in the art process where you refine your sketchy blocks or searching lines to add creativity and detail. What is the tool doing, then, other than essentially blurring and then sharpening your sketch a couple of times?

Like let’s practice hypotheticals for a second.
Let’s say this was an AI shading tool that ‘didn’t add creativity or generate details that weren’t there’.
You put your line and flat color in and it adds the most basic lighting. It doesn’t add multiple lights, or control the direction of the light, or add environmental light, or reflections, or ambient occlusion, or anything important or creative. Just the most basic shading imaginable.

Now imagine you have a bunch of artists coming in like “Wow, this is going to help me out a lot; I hate shading and can’t possibly see how doing it manually could make my art better”

Are other artists really on a ‘high horse’ or being snooty when they say “hey now, maybe you should try shading on your own so your shading doesn’t look boring and lifeless and the same?”

1 Like

That is true for your style, but there are styles without line art; there are so many different workflows. Is there no possibility that there are artists who would use this tool to speed up their workflow?

I don’t think so.

Think outside of your own box (here as picture for a limited view), and grow as an artist.

Forgive me for being stupid, but what does someone who doesn’t do line art have to do with the inclusion of a line art tool?

I’m the one fighting for others to grow as artists, because I enjoy doing it myself. I find it hypocritical that you’re sitting here defending automating away a part of the artistic process while simultaneously telling artists who do their own line art to ‘grow’. I guess the irony is lost.

Honestly, I think a lot of this could have been avoided by just making an automated ‘sketch cleanup’ tool instead that does the exact same thing, but I’m sure Intel had their requirements.
A tool which automatically erases guide lines, shapes, gesture lines, and construction lines, and refines the outer sketch lines a bit, would have been infinitely more usable to the average artist than a ‘line art tool’.

2 Likes