Introducing a New Project: Fast Line Art

Introduction

In the coming few months, we will be sharing the progress of a new experimental feature we’re working on; let’s call it “Fast Line Art”. It will be a tool that takes in a sketch and generates basic line art, similar to how the Colorize Mask takes line art and generates flat colors, which for many artists is the base of the coloring process.

The project is sponsored by Intel. They’ve sponsored several features in the past, most notably multi-threaded (faster) brushes and HDR on Windows. Thanks Intel!

(Note that sponsored development and donations are two different things: in case of sponsored development we sign a contract and develop a specific feature for a set amount of money, while with donations we have full freedom regarding how we spend the funds).

This project makes use of AI, or more precisely, neural networks (implementing it using just classical algorithms would be quite challenging). The technology itself has a very wide range of applications, but it recently became well known in the artistic community due to the popularity of generative AIs like Stable Diffusion, Midjourney, and DALL-E. We do understand the concerns artists have around AI and how it’s being trained and used, and because of that we plan to do things differently (for example, we are not going to use generative AI in this project, and we don’t plan on using it in the foreseeable future). I have explained our approach in detail below, which will hopefully alleviate your concerns.

About the Feature

The purpose of the Fast Line Art feature is to speed up the process of getting the base, simple line art done. It removes the need to tediously trace the sketch, repeating the same stroke multiple times to get the perfect one, and allows the artist to focus only on the parts that their art style depends on – like eyes in anime, or cross-hatching in particular places, or adding multiple or thicker lines to indicate a shadow. You can make a sketch, sculpting the sketch lines with multiple strokes and an eraser to get a perfect shape in a messy manner, and the feature will translate it into the same perfect shape, but in one continuous stroke.

This is an approximate “artistic rendition” of the feature:

This is a close-up view, showing more precisely what it’s supposed to do (merge lines together into one pretty line, discard very faint lines):

The most common brush for line art is a solid brush with variable size. But many artists use other brushes, for example ones that look like a pencil, a ballpoint pen, or a Sumi ink brush. This tool should be able to handle the basic brush, but I’m not sure yet whether it will be universal or only support a limited set of brushes.

The project is still in early stages, so I can’t say for sure how useful the results will be. I will be writing updates on KA along the way, so you’ll learn more about what to expect from the final version of the feature.

The Role of AI in This Project

This feature will use convolutional neural networks, which, yes, is roughly the same technology that generative AI uses. However, there are important differences between our project and generative AI. The most visible difference is that it isn’t designed to add or generate any details or beautify your artwork; it will closely follow the provided sketch.

Moreover, there are several issues with the current use of (especially generative) AI that are not fully resolved yet, but I sincerely believe that this particular project is not repeating them. I will try my best to address all the typical concerns with AI, but if I missed any, please let me know in the comments, and I’ll try to answer as best I can.

  • It’s not a generative AI. It won’t invent anything. It won’t add details, stylistic flourishes beyond basic line weight, cross-hatching, or anything else. It won’t fix any mistakes. It will closely follow the provided sketch. I believe it won’t even be possible for a network of this size and architecture to “borrow” any parts of the images from the training dataset.
  • We will not be training the model on any of the existing datasets, or on stolen pictures. All artworks will come from artists fully aware of what they are going to be used for. And I believe our particular model will work better with special training data anyway. Maybe you’d want to help out with gathering the artworks - I will be making another post about that soon.
  • The calculations will be 100% local and offline. It won’t send the sketch image to any server to process and return the line art. I’m not planning to implement any networking functionality, and there are no servers planned either. It will only use your own computer’s CPU and GPU for calculations, the same way all of the other features of Krita do. It also won’t train on the images you make in Krita, and it won’t save them anywhere either, until you save them with Krita to your own device as usual, in a Krita file.
  • I believe that it cannot replace artists (which is a common concern with generative AI), because you will still need to add details not included in the sketch, as well as your own special features (cross-hatching, for example, or double lines, or thicker lines on the shadowy side), or you might not be able to use the tool at all if your inking brush or line art style is more unique. The purpose of this feature is strictly to reduce the tedious, uncreative part of the process of making line art.
  • It will consist of only convolutional layers, meaning that it will only see several, or at most several dozen, pixels in each direction. There will be no fully connected layer of nodes, so it won’t even be capable of making a whole composition, since it will work only locally. The advantage of using a fully convolutional neural network is that it will work on a picture of any size. From my understanding, the networks used for generative AI have a very different architecture (they use convolutional layers too, but also many other elements, like dense blocks or complex architectures, which this project won’t use).
  • The model used will be relatively small (because we’re preparing for big canvases, and both RAM and GPU memory are limited on real systems). That means it will be faster, cheaper to make, and it will require less data to train, but it will also not be as sophisticated as you might be expecting from an AI. For example, the original model in the article we’re basing our work on has 44 million parameters, and even that will, I think, already be too big for us; on the other hand, Stable Diffusion 1.5 was released with 860 million parameters, and Stable Diffusion 3 is expected to have 8 billion. (The number of parameters of course doesn’t tell the whole story.)
  • It should be predictable for the artists (if I manage to reach the results I want). It should do exactly what it’s supposed to do and nothing more.
  • It doesn’t advance the technology. The article on which we’re basing the work is already nine years old; and while some parts of the project are somewhat experimental (how to include the details of the brush and the brush size without a so-called dense layer?), there is nothing all that unique or innovative about our approach. (In fact, I got the impression that the authors of that article aimed for maximum simplicity, exemplified by a very homogeneous design of the network and the choice of an optimizer that doesn’t need any manual configuration or fine-tuning. That’s consistent with our goals at Krita as well.)
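The points above about the fully convolutional architecture and the model size can be sketched in a few lines of Python. This is a toy illustration, not Krita code: `conv2d`, the random 3×3 kernels, and the 256-channel width are all made up for the example.

```python
import random

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: each output pixel is a weighted
    sum of the k x k neighbourhood of input pixels under the kernel."""
    k = len(kernel)
    h, w = len(image), len(image[0])
    return [[sum(image[y + dy][x + dx] * kernel[dy][dx]
                 for dy in range(k) for dx in range(k))
             for x in range(w - k + 1)]
            for y in range(h - k + 1)]

def conv_params(k, in_ch, out_ch):
    """Parameter count of one conv layer: weights plus biases."""
    return out_ch * (k * k * in_ch) + out_ch

# A stack of three 3x3 convolutions (random weights stand in for trained
# ones). With no dense layer, the same stack accepts any canvas size:
layers = [[[random.random() for _ in range(3)] for _ in range(3)]
          for _ in range(3)]
for size in (16, 40):
    x = [[random.random() for _ in range(size)] for _ in range(size)]
    for kernel in layers:
        x = conv2d(x, kernel)
    # Each 'valid' 3x3 layer trims 2 pixels; three layers see a 7x7 window.
    print(size, "->", (len(x), len(x[0])))  # (size - 6, size - 6)

# Why "no dense layer" keeps the model small: a dense layer mapping a
# 1024x1024 canvas onto itself would need roughly 10^12 weights, while a
# 3x3 conv layer with an illustrative 256-channel width needs ~590k:
print(conv_params(3, 256, 256))  # 590080
```

The receptive field here is only 7×7 pixels; a real network stacks many more layers and channels, but the principle is the same: every output pixel depends only on a bounded neighbourhood of input pixels, which is why such a network cannot compose a whole image.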

More Information

The article the project is based on: https://esslab.jp/~ess/publications/SimoSerraSIGGRAPH2016.pdf

And this is a document I wrote about the project. Keep in mind that it was written for Intel, so with developers in mind, but at least some sections might be interesting for artists as well. https://files.kde.org/krita/project/SmartLineartProject.pdf

I will be writing more posts and updates throughout the duration of the project.

Short Summary

We’re going to make a feature called Fast Line Art that will use neural networks. The project is sponsored by Intel. It won’t be generative AI; it will just translate a sketch into line art, without adding new details, and it will work very similarly to existing filters, without sending anything to any server or doing anything online.

Ending Notes

In the comments, please try to keep the discussion civilized and try to be concise (sometimes threads here get very long, making it difficult for many people to read through it and/or join the conversation). But please do share any concerns or ask any questions you might have, and I’ll try to address them.


Do you mean conventional?

No, it’s a scientific term relating to a type of neural network layer: Convolutional neural network - Wikipedia - the neurons are connected only locally, instead of every neuron accessing the whole image.

It works kinda similarly to how a blur filter works: it has weights for all the pixels around, and the result is the weighted sum of all of those pixels. https://global.discourse-cdn.com/business7/uploads/babylonjs/original/3X/b/1/b1ed3029848e11aaa719ad1c293a2d13dda91b7b.png Except that in neural networks there are waaaaay more weights and dimensions, and the weights are not designed by the programmer but trained.
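To make the blur analogy concrete, here is the weighted sum for a single output pixel, using a hand-designed 3×3 box-blur kernel (a toy example; the pixel values are made up):

```python
# Hand-designed weights: a 3x3 box blur, every neighbour counts equally.
blur = [[1 / 9] * 3 for _ in range(3)]

# A 3x3 patch of grayscale pixel values around one pixel.
patch = [[1, 2, 1],
         [2, 4, 2],
         [1, 2, 1]]

# One output pixel = the weighted sum of the neighbourhood.
result = sum(patch[y][x] * blur[y][x] for y in range(3) for x in range(3))
print(result)  # 16/9, i.e. the average of the nine pixels
```

A trained convolutional layer does exactly this, except with many such kernels whose weights come out of training rather than being chosen by a programmer.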


Okay, I only knew this term from something else.

Looks like an interesting project @tiar, definitely something useful to speed up workflows from the looks of it.

Perhaps a bit of a tangent on line art, but I was wondering whether AI could do things like mimic ink clogging at intersections of strokes, as it does in traditional media? Running a simulation to obtain the examples from this thread would be too computationally heavy. Do you think AI could potentially be used for such a thing?


This looks super cool for people who are good at sketching but less skilled - or interested - in lineart. I’d be curious to try it, since I really struggle with lineweight but enjoy partially lined painterly styles.

I agree that it’s as acceptable as stabilizer usage is, given that all it’s doing is cleaning up your own sketch without drawing in other elements. I have realized on a reread that it does use others’ art data in its training, but I still feel it’s an acceptable tool which the people who work together to make Krita are choosing to create based on their own efforts.


I like that idea, but I don’t know if I would use it, because line art is one of the meditative processes for me. However, what do you think about using this line tool for the fill tool? You know, when someone is drawing without fully closed lines. This tool could then generate the linework in the background, and this could be used as a border for the color. :thinking: An advanced fill tool, so to speak.

Anyway I’m looking forward to the development. :+1:


FYI there’s now a gap detection option for the fill tool in Krita 5.3 pre-alpha (the nightlies), which works well for what I used it for. So that shouldn’t be necessary.


@tiar

What counts as a sketch? I imagine checkerboard patterns and fractals aren’t supposed to work. Shaded sketches maybe.

You mentioned brushes in one paragraph which implies digital. When I saw the feature, the first thing I was reminded of is scanning in analog line art. Is analog line art in scope?


For people who use Krita to produce work with deadlines, this sounds like it could be a great productivity boost. It still requires the artist to do the clean-up, and if the development method is followed as you described, it could be a great example of what an ethical use of AI/ML may be.

I’m wondering if this technique has been implemented in any painting software already? If not, then I’m curious why? If we can get good results, it would be interesting if Krita gets a unique feature like that.

Other than that, I’m wondering maybe about a few things:

  • Will the feature be supported on diverse hardware? (i.e. no favoritism for one vendor or API) I guess that’s the intent, but will it turn out this way?
  • You mentioned that a bit already, but how much do you want to lean on community support for gathering the training data? Is it like a plan B, or will it be integral to the project’s success? (I guess we will learn more in the future.)
  • Developing the model will obviously cost time and money, on top of the implementation effort. The gathered training data set will be very valuable too. I wonder how it will interact with the open source nature of Krita. Will you try to somehow protect it or restrict its use by 3rd parties?

Anyway, it’s cool to learn more about the team’s plans. And congratulations on securing the sponsorship!


This is very interesting. I don’t do line art anymore because it takes too much time, but if it can somehow make line weight variations interesting from a somewhat solid sketch, that may change my mind.


I find it very interesting; many times I have limited time to do commissions, so a function like this would come in handy.

If you need drawings to train the model, I can contribute several drawings.


I wrote a college essay’s worth of thoughts, but I’m going to summarize it more frankly, since this does seem to be turning into a bit of a one-sided discussion.

  • I wish Intel would help improve Krita in more universal ways than implementing random stuff like this.
  • If you don’t have the time to do lineart, the art style that lineart is necessary for, or the will to improve your lineart abilities then I guess it’d be a nice tool. Like an AI auto-shader, or an AI auto-color-balancer, or tracing the 3d models in CSP.
  • You should probably underpromise and overdeliver. I can’t possibly see any way that a convolutional network with a moving context window of 128 or so pixels could ever give as much life and character to the line art’s line weight and little details as your examples, which are very, very human-drawn and show extensive artistic ability and understanding of form. If you want me to show specific examples from the concept art, I can do that.
  • After cultivating a starkly anti-AI crowd on social media, expect backlash for this from people who don’t think deeper than ‘AI bad’.
  • I feel like a tool that subtly autocorrects your line position with AI when tracing your sketch would have many more nuanced uses.
  • I could submit a couple hundred finished pieces of my own, but not only are they 18+ in nature, I’ve been told dozens of times that my extreme lineart decisions are one of the most starkly ‘me’ things about my art style. So it’d probably just poison the model.

Here’s my quick personal take on what an AI output for lineart might look like (assuming output is a bitmap like the input and not a vector penstroke)

To emulate this, I zoomed in so I could only see about 300×300 pixels at a time throughout the entire process, and kept my pen stroke the same width unless it came to an end, was too close to another line, or the sketch line was very light. And I turned off the artistic part of my brain that was telling me ‘the hat is touching the head here’ or ‘this is an eyeball’, because there’s no way a 40m param model would know those things and add crosshatching/shading/shape accordingly. I only went off the raw grayscale of the input sketch.

I also just want to add: if line art is so boring to you that you want to get rid of or skip it entirely, maybe try something different and go crazy with it. You never know, maybe you’ll like the look, and now you have a whole new method of breathing life into your drawings. You’d be amazed at the difference proper line art with proper weight makes.



That is a great use of AI. Finally, something AI should do: assisting the artist instead of creating the whole artwork.

I have a different starting process where I don’t even draw line art; I just start splashing with midtones. Will this be able to generate line art out of it?


I know, but I haven’t tested it yet.

But there are styles out there where only minimal lineart is done for only the core features of the subject. Let’s take a face as an example. Here, the darkest parts would be the eyes, the nose, the chin, and maybe the start of the ears.

I don’t think the fill-gap feature would be able to fill this face, but when we are generating the lines in the background from the sketch, we would be able to.

At the moment, we have to draw the lines, fill the areas, and then turn off the lines to do the final lines. So I think that would be a timesaver. And we would be reusing technology we had already implemented, in another way, which would bring more value to this feature.

Assistive AI is cool and it’s definitely the way AI should be used. Good thing it shows up in Krita.

This feature is good for those who are capable of producing clean sketches; otherwise you can spend more time cleaning the sketch for the AI to work than doing the linework yourself.

I think that’s partly why CSP uses vector brushes and has all the manipulations with them so you can do linework fast enough yourself with any kind of sketch.

Ultimately, it depends on how messy you are in your sketching phase, and how many clean passes you require to refine a sketch for the AI. And it’s always better to have a tool than not to have it.


Doing line art is one of the reasons I switched to painting, because I can skip line art altogether and move from sketch directly to painting. I still use outlines sometimes for drawings, but I’ll probably keep drawing them myself, because it’s something I want to get better at. However, I could imagine using this for animations, maybe? When you have to draw hundreds of frames, this could be an amazing time saver, depending on how predictable and precise it is, so you don’t get those weird wiggly lines early Flash animations sometimes had.


This feature represents one of the ideal ways to utilize AI, and I am hopeful that it will turn out to be wonderful. The fact that you are using clean data is also very commendable. However, there is one thing that caught my attention.

Nevertheless, it seems to me that the impression of the witch’s face has changed slightly – in particular the shape of the eyeliner and the position of the highlights, and there are lines at the hair ends that were not in the original picture. I hope that it will be reproduced even more faithfully in the future.


That’s a good thought on animation, although I’m not sure how consistent the line art generated by the AI will be.

That’s a kind of mockup and not actually rendered by AI. So let’s see how this feature evolves and how it actually works.
