Here are some benchmarks from the same paper, on an Intel Core i7-5960X CPU at 3.00 GHz with 8 cores and an NVIDIA GeForce TITAN X GPU. It seems to take a while even for small canvas sizes.
Would this also work for, say, sketches done IRL and scanned in?
And for those of you pooh-poohing this because people should just learn to do line art properly, or because it will be soulless or whatever, or because those who don’t want to do line art should just do something other than art that requires it:
I have a hand tremor that makes drawing difficult at the best of times. I also can’t afford a proper tablet/stylus and am working with a mouse pen (an optical mouse shaped like a really fat pen). I can’t do good line art with my current setup if my life depended on it; I can barely do OK sketches. BUT I can sketch fairly passably with pencil and paper, and coloring/painting is still fairly doable digitally.
Think of this less as a crutch for lazy, untalented artists, and more as a disability aid.
This is very interesting. This would be an ethical use of generative art.
This is more like a G’MIC filter and can help improve some workflows, like animation cleanup. The results presented here, though, look vector-‘traced’: the corners look blobby and fused together. It still has that mushy feel.
I am excited to see where this goes.
Even though I think this will stir the pot, the project seems cool and I like it. I just feel it is tackling the whole problem before we have really good tools to do this manually.
I have been using OpenToonz for animation, and there are tools there that I imagine will obviously never be as fast as this, but they give a lot of control over line art and post-line-creation editing, because hand-drawing vectors like a pencil is fast and clean. Filter stuff might help a chunk, but it is kind of a gamble how it decides to clean things up, and it takes getting used to. On my end I might like this, but I am not big on filters usually.
Krita-wise this feels too early in the timeline. But the fact that the model is light and, more than anything, runs locally is good. Sending information back and forth is a no-go for me. Also, if the user-facing version is not able to learn anything from the user, it will probably save all the developers a lot of headaches from the AI debates, as it would be like a fancy filter instead of a mutable thing that could overreach. Being a dumb AI is good.
Yes, but it’s out of scope right now. Most probably it would just need retraining the same network on images that have this kind of effect. I said it cannot really do fancy stuff to line art, but this effect is very much local and doesn’t require any knowledge of the artwork (like where the shadow lies, etc.), which makes sense, since that’s something that ink and paper were doing on their own - so it is indeed possible, I believe. It would probably need different training/datasets for different thicknesses of the line art, though (see the toy sketch below)… There are always pros and cons, I guess.
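(Purely to illustrate what I mean by per-thickness datasets - this is a toy sketch, not the project’s actual pipeline, and the filenames are placeholders - one could derive thicker-line variants of existing line art with a simple minimum filter, since dark lines on white paper grow under it:)

```python
# Toy sketch, not the project's actual pipeline: derive thicker-line
# variants of a line art image for separate per-thickness training sets.
# A minimum filter grows dark lines on a white background.
from PIL import Image, ImageFilter

lineart = Image.open("lineart.png").convert("L")  # placeholder filename

for size in (3, 5, 7):  # odd kernel sizes; larger = thicker lines
    thicker = lineart.filter(ImageFilter.MinFilter(size))
    thicker.save(f"lineart_thickness_{size}.png")
```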
A sketch in this case means a rough drawing. Like when you build up the image by using more pressure/opacity and erasing the wrong lines.
I’m not sure what you mean by checkerboard patterns or fractals.
Well, in theory it could’ve been - but I feel like it would make the training process more complicated: we’d need more data, longer training, etc. There is a chance it would work well enough even without any special treatment, but I wouldn’t rely on it.
I was alluding to some of the examples in Spatial anti-aliasing - Wikipedia. I was listing things that might be particularly challenging for a line art model to handle, if it can handle them at all. It’s interesting to consider how the model would react to things that are out of scope.
To be fair, Intel does (or did) support Krita with a donation for like a year or two now. I don’t think they appear on the Krita Dev Fund page because they don’t do it via a Dev Fund?.. But, well. I wish so too.
You are right… and thanks for the picture in your comment. It’s exactly how I hope to get it to work. And come on, you gotta admit the hat is pretty cool. Of course the AI won’t ever be able to do a proper anime face, or maybe any face at all - since humans are so great at detecting any tiny disparities and issues. Can I use that picture as an example?
Yeah… I can’t really do anything about that, though. All I can do now is explain how it differs from other projects. That’s the reason why I made half of this post a list of disclaimers. Every time I saw more news about AI, I was adding one more disclaimer… I seem to have at least some success: Michael J. Coffey: "@juankprada@mastodon.art @krita@mastodon.art @GIM…" - Scholar Social
That would be way more complicated… I feel like it could’ve been made easier with better stabilizer options using normal algorithms, frankly.
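(To illustrate the kind of ‘normal algorithm’ I mean - just a hypothetical sketch, not Krita’s actual stabilizer code - the simplest version is an exponential moving average over incoming pen samples:)

```python
# Hypothetical stabilizer sketch: exponential moving average over pen
# samples. Smaller alpha = stronger smoothing, but more lag behind the pen.
def smooth_stroke(points, alpha=0.3):
    if not points:
        return []
    sx, sy = points[0]
    smoothed = [(sx, sy)]
    for x, y in points[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        smoothed.append((sx, sy))
    return smoothed

# A jittery stroke comes out noticeably calmer:
print(smooth_stroke([(0, 0), (10, 3), (20, -2), (30, 4), (40, -1)]))
```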
Personally, as an artist (very much hobby tier), I just never found a stabilizer option I liked enough to be satisfied with my lines (while I am usually satisfied with my lines on paper), and trying to get line weight is an even sadder tale. I remember I was once fond of the Dynamic Brush Tool; recently I used the Path Tool and it worked well enough, but I really need to add an option for pressure and other settings there (when painting on raster layers, of course - similarly to how the Line Tool works)… So, for me it’s not really that it’s boring; it’s just very difficult. Which is why I actually started working on this project a few years ago, before the whole AI craze - except in a very dumb way, trying to code up the whole network from scratch myself (as in, yes, not even using the torch neural network library). Now I usually just paint instead of trying to make a polished drawing.
Yeah, definitely. I guess I was kind of looking at it the way traditional artists make line art - first a very detailed pencil sketch, then ink on top - instead of the way digital artists are used to working. That’s definitely a limitation…
Wow, that’s a good point. Yeah, definitely. Frankly, now that I think about it, maybe I shouldn’t be skipping the “basic, no line weight” brush (which I intended to, because it doesn’t look that good and wouldn’t be very useful for illustrators), since most animations don’t use line weight. I guess I’d need to see how the results are with what we have, and whether animation artists would be interested. But it’s a really cool idea. I’m not an animator, but I can imagine it being a huge help, as long as it works correctly.
If you check the original article, you can see that their results weren’t acceptable for artists’ use - but then, they never attempted that, and even the dataset was full of line art with a constant line weight. You probably know the phrase “garbage in, garbage out” from all the recent laughs at Google AI recommending glue to keep cheese from slipping off pizza. The scientists didn’t use garbage, but their line art pieces weren’t particularly artistic, so their results weren’t artistic either.
Moreover, it seems like, in general, scientists and businesses were/are more interested in vector results. That wasn’t interesting for Krita, since Krita doesn’t have variable-thickness vector strokes… and using shapes instead of strokes (like Inkscape does when you ask it to vectorize line art) would make the result even less useful than a raster one. And besides, even the articles describing the vector version were often not using neural networks (and to get the sponsorship from Intel, we have to use a NN).
I’m assuming no one saw the potential (…or they didn’t notice, or they believe it won’t work). Photoshop especially isn’t really interested in digital painting - I just recently saw a rant on YouTube from a PS digital artist saying they have been second-class citizens for decades now, despite being a huge portion of the userbase. Clip Studio Paint could have used it, but they have other good tools for line art. I think they do have variable-width vector strokes? That alone helps a lot.
Yes, ideally, it should. I believe we might not be able to promote the compatibility with other vendors for some time though.
The training will happen in the Intel Developer Cloud - I believe they call it Tiber now. The implementation of the training is already live (though unfinished) here: Agata Cacko / Fast Line Art · GitLab. It’s GPL, so that doesn’t prevent using it (but it does prevent sharing without making the resulting code GPL too).
Intel wants everything to be in the open, and it’s kind of KDE’s idea as well. I think we intend to have the final model GPL-licensed too? That would mean others wouldn’t be able to embed it freely without making their code GPL as well? I’m not 100% sure about it yet. But we can’t really protect it much further.
The dataset will most probably have a CC-BY license, with a special exemption for the Krita Foundation and/or Halla Rempt Software and the resulting model (so that users of the model wouldn’t have to list all the artists that made the training pictures). We’re still working on that piece of legalese. Dmitry even wrote an email to the CC Committee and whatnot. But the basic idea is: a standard license for artworks in open culture (see how David Revoy licenses his works), plus an exemption for Krita and the AI model made by Krita.
We could, in theory, not release/hide the training code or the dataset, but then the GPL compliance of the model would be kind of iffy, from my understanding. I do intend to make sure that the dataset is behind an “Are you human?” checkbox/captcha, to at least prevent automatic scraping by a bot.
If you have some ideas on how to protect both of those things from misuse, please tell me.
I’m not exactly sure; I’ll talk with Halla first and then answer you. But we’d most probably need custom-made artworks; it’s not like we can buy or use some existing dataset. I mean, it would’ve been possible in theory, but not with consistent brushes/styles/brush sizes… The project would need to be a bit different. And, thankfully, the amount of data we need shouldn’t be too big - not millions of pictures, just several dozen (at worst, a big several dozen - and probably the more the merrier, as long as they are well done).
Absolutely! I release the edit for any and all uses that I’m allowed to.
After reading up on similar projects, I do hope the finished product will end up better than my example, since that does seem possible - though setting the bar at ‘Adobe Live Trace’ level would definitely be a great starting goal.
I suppose scans would need to be pre-processed in that case, to turn the paper white and the lines darker. But I can imagine that when the sketch isn’t clean, the model would detect artifacts like folds or wrinkles as sketch lines.
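(Something along those lines, for example - a rough sketch using Pillow, with placeholder filenames and threshold values, not a definitive pipeline:)

```python
# Rough pre-processing sketch: push the paper to white and darken lines.
from PIL import Image, ImageOps

scan = Image.open("scanned_sketch.png").convert("L")  # grayscale

# Stretch the histogram; cutting off the brightest/darkest 2% usually
# removes the paper tint and makes faint pencil lines darker.
cleaned = ImageOps.autocontrast(scan, cutoff=2)

# Optionally clip everything brighter than a threshold to pure white,
# which suppresses faint paper folds and wrinkles.
cleaned = cleaned.point(lambda v: 255 if v > 200 else v)

cleaned.save("cleaned_sketch.png")
```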
Sorry, I can’t offer any licensing advice, I’m not well-versed in that. I guess my hope is that we can make it at least “very inconvenient” for “bad actors” to prey on your/our hard work, and I see you’re doing what you can in that department.
I hardly use CSP - just testing every now and then - but as far as I can tell, it has very robust vector tools, especially drawing raster-like while the lines are somehow considered vectors. I think the strongest asset of this workflow is the near-lossless transformation of the image: you can scale or liquify with no perceivable loss. To me this is one area in which Krita falls short (i.e., using such tools introduces heavy smoothing).
This sounds interesting - do you think we could provide our own training data to get a result that looks more like our own work? Or is it not likely to be that nuanced anyway?