A post was merged into an existing topic: Introducing a New Project: Fast Line Art
The only AI that can come along that is "good" is when they give up on the image generators. The tech can be used to assist artists, but it will always be exploitative one way or the other if they keep trying to force us to give in to their hyperconsumerism demands. I've seen people claiming that this speeds up the process so we can produce more and faster, but you only need to produce more and faster because our work has always been devalued, and it's being devalued even more with genAI products.
Anything that normalizes image generators is a detriment to our careers and only feeds into the idea that art is not a "real job".
There will never be an ethical image/video/3D/text generator, because the whole philosophy behind it is flawed. It's an exploitative product, and that should not be all that AI has to offer.
Even this Krita feature feeds this philosophy. It serves to normalize it.
Also, they are partnering with Intel, which has a foot in the genAI gold rush. The dataset is opted-in artworks only, but what does Intel get from it? Not the artworks in the dataset, I believe. How about data and know-how that can help them fine-tune other genAI models?
It's getting harder and harder to recommend substitutes for Adobe products. I was preaching Krita for 2 years because of how it kept away from this whole AI debate and prioritized artists instead of gimmicky features. There are already other ways to convert a sketch into line art that don't need to give in to the AI hype: better tools that keep the artist's intention instead of taking away their creative decisions in the process.
You can see this whole thread only serves to show how artists are trying to rationalize genAI PRODUCTS into something they are not: tools. If the stolen dataset is the only thing keeping you from using genAI image generators, I beg you to reconsider. These products are made to devalue our work, eliminate career jobs, and replace them with underemployment by underpaid and overworked workers.
I read that before in this thread… What is, in your opinion, genAI?
What are the points that make it unusable for you?
Also, you're making baseless assumptions about Intel's sponsorship.
I know that the question is not for me, but mechanizing my process further is a thing that just causes me pain. By the way, the argument that it's too late and the professional market demands it is not a good thing in any way. As workers we should have the choice to control our craft, not be treated as machines in a Fordist production line; we should be able to fight against oppression and defend ourselves from this…
Again, read this: https://www.thenation.com/article/culture/fordism-ai-art-dall-e/
Anyway, I think that you should use the CC0 Mitsua dataset as a base and only take voluntary work if extremely needed, because there can be fraudulent submissions in there.
Mitsua here: Mitsua/mitsua-diffusion-one · Hugging Face
A technology whose only profitable products are based on stolen data, and even if it weren't, it still keeps the same general idea: devalue art, artists, and their process in the hope of replacing jobs so multibillionaires can profit more with lower costs.
What makes it not usable is the idea of giving up a whole step of the process to automation, in this case; for other genAI products it's the whole process itself. You keep saying line art and filling base colours are boring, but they are part of the process; the artist's intentions are in them as much as in the final work. By the same logic, rendering, colouring, painting, sketching, and thumbnailing are also boring, because you repeat all those steps as much as you would repeat line art in every artwork you make.
I'm making assumptions, yes, that's why I put quotation marks at the end. We've been seeing lots and lots of examples of how big tech and studios try to pass themselves off as ethical by using loopholes in TOS, operating in gray areas of the law, and misrepresenting their products through deceitful marketing, like Adobe Firefly claiming to be an ethical model when the Adobe Stock contributors could never have consented to their work being used to train genAI, and then partnering with the same main genAI companies that started the whole thing in the first place. Artists still to this day completely believe Firefly is ethical…
I understand all the precautions around Krita's fast line art feature, I read all your explanations about it, but I just don't agree with its general idea of automating a process in this way and labelling it as a boring step to be skipped.
PaintTool SAI has a "vector" layer for line art that keeps pressure dynamics just like the raster layers; you can edit every stroke you make. It serves the purpose of speeding up the process by merging sketching with line art: instead of automating the line art process, you can sketch with the mentality that you are also doing line art at the same time, since you can reshape anything you've drawn. It serves as an assistive tool for the disabled as well, even more so than an automated line art that needs a sketch in the first place.
Instead of an autogenerated line art feature, why not implement something like QuickShape in Procreate and Infinite Painter? It is way more intuitive and time-saving than randomly generated line art. It also serves to assist, you have finer control of the final line art, and it completely keeps your artistic intention. In Infinite Painter you can go as far as editing the thickness along the brushstroke, duplicating it, or switching to another brush on the fly before accepting the changes and the app committing that stroke to the canvas.
They can't use existing datasets because this architecture specifically needs closely matching pairs of sketch and line art, and since the dataset is gonna be tiny it'll be possible to spend some time verifying each image individually.
This is why I think anything coming from AI-generated outputs to "speed up the process" is a gimmick. It truly does not assist artists as a tool; it takes away artistic intention, it prioritizes hyperconsumerism, it devalues our process, and it gives room for companies, studios, and clients to demand more and faster while we are paid less and less "because the tool allows you to produce more".
I'll go ahead and put in my two cents on this feature, without responding directly to some of the crazier things I've seen posted here.
I see a lot of people asking why this feature should exist when "real" artists wouldn't need or want to use it (with a lot of real artists saying they actually would use it).
First of all, the term "artist" is extremely broad and covers a lot of people who do things very differently than other artists, so anyone who says that some process or method isn't real art is being completely disingenuous. See the banana duct-taped to a wall, for an extreme example.
Second of all, Krita isn't just used by "artists." I'm a game designer who uses Krita to make/edit art for my games. In a way that makes me an artist, but the art is a means to a different end, and so I don't care if I use "artist methods" to make my art. Anything that makes my job easier/faster is welcome to me, and this feature looks like it could fit that bill.
That being said, like most of you, I despise generative AI, both because pretty much every example of it I know of has been trained unethically, and because almost everything I've seen generated by it feels soulless. I know many game designers are using gen-AI tools, and I find that a bit disturbing, but unsurprising.
This tool is NOT generative AI. The way I see it, it's almost identical to the G'MIC filters already in Krita, just that the filter matrix/pattern will be created by a neural network trained on a small set of data specifically donated for this purpose. But since it's still ultimately a filter, it can't create anything out of thin air. I could be wrong on how it works (I haven't studied the papers linked above), but that's my understanding based on what I have read.
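To make the "ultimately a filter" point concrete, here is a toy NumPy sketch (my own illustration, not the actual Fast Line Art code or architecture): a convolutional filter, learned or hand-picked, only transforms what is locally present in the input, so empty regions of a sketch stay empty.

```python
# Toy illustration (NOT the actual Fast Line Art model): a convolutional
# layer is a local filter. Each output pixel depends only on a small
# neighborhood of the input, so a filter like this cannot "invent"
# strokes in regions of the sketch that are blank.
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2D convolution with a square kernel (pure NumPy)."""
    k = kernel.shape[0]
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)
    return out

# A "sketch": one vertical stroke on an otherwise empty 8x8 canvas.
sketch = np.zeros((8, 8))
sketch[:, 3] = 1.0

# A hand-picked 3x3 smoothing kernel standing in for learned weights.
kernel = np.full((3, 3), 1.0 / 9.0)

cleaned = convolve2d(sketch, kernel)

# Columns far from the stroke are still exactly zero: the filter
# reshapes existing marks, it does not generate new ones.
print(np.allclose(cleaned[:, 0], 0.0))  # True
```

A real network stacks many such filters with nonlinearities in between, but the locality argument is the same: the output is a transformation of the input sketch, not content pulled from elsewhere.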
The last thing I'll add is that one criticism about this I've seen that I find somewhat valid is that it might take developer time away from other features that people want more. However, since Intel is paying for the feature, the Krita Foundation can actually hire someone new to work on this, so it doesn't take time away from the people currently working on features people want. I haven't gone onto the developer IRC in a long time, so I don't know if that's been discussed or not, but hopefully it's something they're considering (I recently finished a project, so I'm available).
Of all the helpful commentary that has been offered in this thread, this bit here resonates with me.
Because no one seems to be able to read anymore (sorry, but this is getting painful to see), I'll say it out loud: this isn't GenAI, it's based on a paper from 8-9 years ago (2016; for context, phones still had headphone jacks at the time). If you want, you can read the paper in question here: https://esslab.jp/~ess/publications/SimoSerraSIGGRAPH2016.pdf
To quote the original announcement:
It's not a generative AI. It won't invent anything. It won't add details, any stylistic flourish besides basic line weight, cross-hatching or anything else. It won't fix any mistakes. It will closely follow the provided sketch. I believe it won't even be possible for a network of this size and architecture to "borrow" any parts of the images from the training dataset.
- Introducing a New Project: Fast Line Art
- No new details will be generated, nor will it have the ability to generate new details
- No errors in the sketch will be fixed, nor will it have the ability to fix errors in the sketch
- It will follow the original sketch closely, because that's all it will be able to do
Intel is also only providing the funds to develop and train the model, not any data. The dataset is based on whatever is donated to this single post (in part because the model doesn't require that large of a dataset, according to the paper). Yes, a single post: Call for donation of artworks for the Fast Line Art project - #9 by YRH
I haven't posted in the thread yet because of all the fighting, but both @Voronwe13 and @StandingPad worded exactly how I see it. This is not genAI, and the feature is based on a publicly available paper that has been linked (the paper is https://esslab.jp/~ess/publications/SimoSerraSIGGRAPH2016.pdf just in case the link is lost). This is not growing legs, running around, and using other work without what I call the Three Cs (contact, consent, and credit any artist when their work is used) like all the GenAI stuff out in the wild. It runs on your device using your sketch in a localized manner. It's also stated that artists can provide their work of their own free will.
In my impression, G'MIC already has some filters composed of neural networks.
I probably wouldn't find this feature helpful personally, because my sketches never look anything close to the final lines, if I even draw them. If other people find it helpful, I wouldn't judge them for that, because it is their choice to use what they want, especially if it helps them, maybe because they have time constraints or a disability. This isn't anything like the generative AI tools as far as I understand it either, so I don't have a problem with it being in Krita; I use the colorize mask quite a lot because I hate colouring things in, so it would be hypocritical of me.
I understand the current name is temporary, but might a name change reduce people jumping to wrong conclusions?
Depending on each individual's process, how they perceive the term "line art" can differ, from purely tracing over the sketch to making important creative decisions such as line thickness, which lines to keep, etc., in regard to lighting and depth. I am in the latter camp in how I perceive line art, and I suspect many who are against this feature are too.
Although still up to speculation, after going through the description I believe the feature is generating "line art" more akin to the former, where you trace over only for the sake of cleaner lines. So maybe reuse the same "Rough Sketch Cleanup" name as the paper, or "Sketch to Lines", dropping the term "art" to get rid of the notion that this makes creative decisions for your art.
AI is quite the minefield with all the speculation. I oppose the use of it to replace decision-making processes, but I wouldn't like to see artist support for Krita going down because of it.
I never liked doing line art, but surrendering an artistic process that requires a lot of knowledge and creativity to some neural network is not the way.
It's a process where you refine lines and, most importantly, add details.
People defending this project will say it will only follow your sketch if it's clean enough. OK.
So whoever uses it will get subpar results that they need to correct, wasting even more time than it would take them to draw it themselves.
Utter waste of dev time and resources. But if Intel funded that project… cool, get that $$ I guess.
Artists need to be encouraged to grow, not automate >_>
As a traditional artist, I consider that digital line art is quite cheating already, considering the work of tracing a line is a pure expression of movement. No stabilizer, no Ctrl+Z, no easy eraser. So, I'm a little bored by the comments speaking about art in this case. Or you need to refine your definition of art.
"digital line art is quite cheating" - oh god, if you really think so…
"tracing a line" Here's an example of my line art (black lines) and my sketch (in blue). Does that look like traced lines with no creative work done?
That's what I don't personally understand.
"This feature doesn't add any creativity or detail"
That's what line art is?? It's the stage in the art process where you refine your sketchy blocks or searching lines to add creativity and detail. What is the tool doing, then, other than essentially blurring then sharpening your sketch a couple of times?
Like, let's practice hypotheticals for a second.
Let's say this was an AI shading tool that "didn't add creativity or generate details that weren't there".
You put your lines and flat color in and it adds the most basic lighting. It doesn't add multiple lights, or control the direction of the light, or add environmental light, or reflections, or ambient occlusion, or anything important or creative. Just the most basic shading imaginable.
Now imagine you have a bunch of artists coming in like "Wow, this is going to help me out a lot, I hate shading and can't possibly see how doing it manually could make my art better".
Are other artists really on a "high horse" or being snooty when they say "hey now, maybe you should try shading on your own so your shading doesn't look boring and lifeless and the same"?
That is for your style; there are styles without line art, and there are so many different workflows. Is there no possibility that there are artists who would use this tool to speed up their workflow?
I don't think so.
Think outside of your own box (the box here being a picture of a limited view), and grow as an artist.
Forgive me for being stupid, but what does someone who doesn't do line art have to do with the inclusion of a line art tool?
I'm the one fighting for others to grow as artists, because I enjoy doing it myself. I find it hypocritical that you're sitting here defending automating away a part of the artistic process while simultaneously telling artists who do their own line art to "grow". I guess the irony is lost.
Honestly, I think a lot of this could have been avoided by just making an automated "sketch cleanup tool" instead that does the exact same thing, but I'm sure Intel had their requirements.
A tool which automatically erases guide lines, shapes, gesture lines, and construction lines, and refines the outer sketch lines a bit, would have been infinitely more useful to the average artist than a "line art tool".