Because no one seems to be able to read anymore (sorry, but this is getting painful to see), I’ll say it out loud: this isn’t GenAI, it’s based on a paper from 8-9 years ago (2016; for context, phones still had headphone jacks at the time). If you want, you can read the paper in question here: https://esslab.jp/~ess/publications/SimoSerraSIGGRAPH2016.pdf
To quote the original announcement:
It’s not a generative AI. It won’t invent anything. It won’t add details, any stylistic flourish besides basic line weight, cross-hatching or anything else. It won’t fix any mistakes. It will closely follow the provided sketch. I believe it won’t even be possible for a network of this size and architecture to “borrow” any parts of the images from the training dataset.
- Introducing a New Project: Fast Line Art
- No new details will be generated, nor will it have the ability to generate new details
- No errors in the sketch will be fixed, nor will it have the ability to fix errors in the sketch
- It will follow the original sketch closely, because that’s all it will be able to do
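To see why a small fully convolutional network can’t invent anything, here is a toy illustration (not the project’s actual code, and much smaller than the network in the paper): a stack of 3×3 convolutions with random weights. Each output pixel depends only on a tiny neighborhood of input pixels (the receptive field), so the output can only be a local filtering of the sketch you feed in. All names here are made up for the demo.

```python
import numpy as np

def conv2d_same(x, w, b):
    """Naive 3x3 'same'-padded convolution: x is (cin, H, W), w is (cout, cin, 3, 3)."""
    cin, H, W = x.shape
    cout = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad so output keeps HxW
    out = np.zeros((cout, H, W))
    for c in range(cout):
        for k in range(cin):
            for i in range(3):
                for j in range(3):
                    out[c] += w[c, k, i, j] * xp[k, i:i + H, j:j + W]
        out[c] += b[c]
    return out

def tiny_fcn(img, seed=0):
    """Three 3x3 conv layers with ReLU in between; receptive field is only 7x7 pixels.

    Weights are random and fixed by the seed -- this is a structural demo,
    not a trained model.
    """
    rng = np.random.default_rng(seed)
    x = img[None]  # add a channel axis: (1, H, W)
    widths = [1, 4, 4, 1]
    for cin, cout in zip(widths, widths[1:]):
        w = rng.standard_normal((cout, cin, 3, 3)) * 0.1
        b = np.zeros(cout)
        x = np.maximum(conv2d_same(x, w, b), 0)  # conv + ReLU
    return x[0]

sketch = np.random.default_rng(1).random((32, 32))
out = tiny_fcn(sketch)
assert out.shape == sketch.shape  # fully convolutional: output resolution mirrors input
```

With three 3×3 layers, changing one input pixel can only affect output pixels within 3 pixels of it; everything outside that radius is provably untouched. That locality is the structural reason such an architecture can’t “borrow” or hallucinate content from anywhere, let alone a training set.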
Intel is also only providing the funds to develop and train the model, not any data. The dataset is based on whatever is donated to this single post (in part because the model doesn’t require that large of a dataset, according to the paper). Yes, a single post: Call for donation of artworks for the Fast Line Art project - #9 by YRH