Thoughts on artificial intelligence in art creation

What are people’s thoughts on artificial intelligence being used to create artwork? I was playing around with one of the tools (Midjourney) and am pretty surprised by the amount of detail it produces. To me, it feels a bit like when the internet first started getting popular and people had to know what to search for to find things. These AI art tools need text “prompts” to get going. Only after a lot of trial and error, and learning how to write what the tool wants, can you get good results. It seems like it will be more of a tool to help artists than something that replaces them.

Do you see yourself using any of these tools in your workflow in the future?

Here is something I came up with pretty quickly, with some minor changes in Krita.


Maybe. I’ve seen a video by Jazza testing DALL-E 2 (also an AI). Maybe I’m going to use those AI-generated images as inspiration, or even as references.


Yeah, DALL-E 2 looked good. Midjourney runs through Discord, and it is in open beta, so anyone can use it. That is why I was checking that one out. There seems to have been an explosion of these tools in the past year. It should be interesting to see how they evolve.


I don’t have Discord, so I couldn’t try it.


I’ve seen quite a few images made with these tools that I really liked, but it challenges my emotional response when I don’t know how much input an artist had in its creation.

I started to write a longer response but realised it would likely be an essay and I’m already getting a headache… :sweat_smile:

edit: here’s an example I liked from another artist using Midjourney -


I could imagine them as a help for getting inspiration when you have absolutely no idea for a given theme. Or non-artists could use them to try something out to give to an artist as a reference: since non-artists often have trouble communicating ideas and themes, they could hand over a generated image and say “something like this”.

It could help cut the boring parts of the art creation process or speed up the work for artists in the industry. Currently I don’t see much use for me personally.


From what I have played around with so far, you cannot get too detailed about any one thing through text alone. With a character, for example, if you start typing too many details about what the character should look like, the algorithm will just ignore some of them. The character might look detailed, but the algorithm can only handle so much.
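One plausible reason for details getting dropped (an assumption on my part about how these tools work, not something Midjourney documents) is that the text encoders behind many image generators only read a fixed number of tokens and silently ignore the rest; CLIP-style encoders, for instance, cap prompts at 77 tokens. A crude word-level sketch of that behavior:

```python
def truncate_prompt(prompt: str, max_tokens: int = 77) -> str:
    """Crude stand-in for a text encoder's token budget.

    Real encoders split text into subword tokens rather than words, so
    this word-level cut only approximates where a prompt gets clipped.
    """
    words = prompt.split()
    return " ".join(words[:max_tokens])

# Everything past the budget is simply never seen by the model.
long_prompt = "princess " * 100
print(len(truncate_prompt(long_prompt).split()))  # → 77
```

So details written near the end of a very long prompt may never reach the model at all, which would look exactly like the algorithm "ignoring" them.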

If you do a scene with a background and characters, I have noticed everything starts to become more abstract and almost gesture-like. An artist would still need to do a lot of work to make it polished. It all seems like just another flavor of photo bashing.

Here is one I tried typing with a bunch of details…

This was the text I used…
"beautiful princess with tall crown. soldiers with flags in the background. war. air ships in sky. epic scene, fantasy, style of Craig Mullins."


Photo bashing really comes close.

Typing in the correct cues for the query to get a specific result can become an art form in itself.
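Since the result depends so heavily on how the prompt is phrased, it can help to build prompts from consistent named parts instead of freehand typing each time. A tiny helper along those lines (the field names and ordering are my own invention, not anything the tools require):

```python
def build_prompt(subject: str, setting: str = "", mood: str = "", style: str = "") -> str:
    """Join the non-empty pieces into one comma-separated prompt string."""
    parts = [subject, setting, mood, style]
    return ", ".join(p for p in parts if p)

# Reassembling the princess prompt from the earlier post:
prompt = build_prompt(
    subject="beautiful princess with tall crown",
    setting="soldiers with flags in the background, air ships in sky",
    mood="epic scene, fantasy, war",
    style="style of Craig Mullins",
)
print(prompt)
```

Keeping the subject first and the style last makes it easier to swap one field at a time and see what each piece actually changes.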


I’m pretty much opposed to it. We don’t have real AI in any case… Come back and ask me after we have accomplished Artificial General Intelligence…


Natural General Intelligence seems to be in short supply nowadays :frowning:
(I’d better stop now.)




I’ve played with Craiyon a bit for inspiration. Sometimes a scene containing crazy, unrelated elements is just what I need to start imagining my next piece.


I mean - that’s just ridiculous! That’s like 95% done, and done in a way that would take many years of learning and experience to achieve. I’m talking about things like composition, values, palette, atmosphere, abstraction, colour vibration - all this is already effectively there in that image. You could maybe tweak some focal points to make it easier to read and call it done.

I find it exciting to look at, but also depressing when I know how hard earned the skills are that go into making an image like that from scratch. Then it raises the question - just how important is the human element in your enjoyment of an image? Will we still favour traditionally made art when computers can extrapolate and chuck out technically superior work in seconds? :man_shrugging:


…which is the background for my comment above. Let the Butlerian Jihad roll! :slight_smile: (Dune reference)


This will be part of the industry whether people like it or not.
And it just makes humans the replaceable monkey that uses the software, much as Autodesk already sees us; this will only make that worse.

The problem will probably be the subscription to these AI services, because these things will never be open source: they need to do web scraping for references to give better results. Or you'll have to pay to do a couple of searches. People will monetize this like crazy.

Just saying.


Honestly, the problem is that people see humans as a set of algorithms, while I think the reality is far more complex. Yes, parts of our behavior can indeed be summed up as a logical algorithm: “if this, do that, else do the other”. However, human intelligence and creativity span beyond this, I believe. Heck, even a monkey could end up being smarter and more creative than these AIs.

People, or should I say “transhumanists”, push them as “super creative”. But in reality they take an input from a human, and a human evaluates their result. Moreover, they cannot create beyond what their data contains. If a creative AI was not explicitly trained on some art style, it will never attempt to create it. A human, on the other hand, might attempt to invent an art style for no apparent reason other than being bored. The latter is my personal experience: I created a drawing method on my own, out of my own frustration, only to find that someone halfway across the world had developed an identical method.

In the end I would agree that this will be a subscription, and a pricey one. Training an AI on data that is new to it takes time, energy, and computational power. I remember watching the StarCraft AlphaStar AI; they said they trained it on Google’s computational machines for several weeks, and remarked that training is somewhat expensive on the power bill. So yeah, in the near future I see it as a “pay to play” thing.
Still, a simple random content generator could be a good tool for generating ideas. You could let it train on a large reference folder like “medieval soldiers” so it can produce some “medieval warfare” related ideas. Here one can also see a possible downfall of this AI: it needs to train on enough data, for long enough, to generate a somewhat acceptable result (at least for now), while a human might take 2–3 references and produce a decent result in half the time.
For example, in photogrammetry the algorithm needs 3 or more images with reference points it can compare in order to (in effect) realize it is seeing a 3D scene. A human, although they can be tricked, can deduce this from a single image with no points to track at all.


Maybe you could have your own AI and train it in your own style so you can produce faster. That would be interesting.


When human skin is touched by another human (or animal), a certain hormone-controlled feeling occurs (mostly a good, social feeling). This feeling does not occur when humans touch themselves, or when a machine touches a human. So there must be something special that distinguishes “living creatures” from machines. Machines can physically touch, but their touch doesn’t evoke feelings. They are cold. I don’t know why they are cold. What could make a machine warm? Maybe when it has developed its own identity, completely on its own, without any external programming? If that ever happens, will this machine also develop its own emotions? Only at that level, I guess, will machines be able to evoke feelings in other creatures when they touch them. And what I’m writing here about skin touch probably also applies to machine music and machine art, etc.

I think there are a fair number of open source versions of these AI programs. I tried running a version of the DALLE-mini library. There is even a YouTube video helping people set it up: How To Run DALL-E Mini/Mega On Your Own PC - YouTube. If you search GitHub, there are a number of other open source implementations of tools like this… lucidrains (lucidrains) / Repositories · GitHub.

The issue with running these things locally is that right now you practically need a supercomputer. I have a decent computer (8 cores, 32 GB RAM, RTX 3070), and I cannot run even a “mini” image generation version; the program crashes because I don’t have enough video memory. I am sure this won’t be an issue over time as computers improve, though.
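A quick back-of-the-envelope calculation shows why the video memory runs out so fast. The parameter counts below are rough figures I have seen quoted for DALL-E Mini/Mega, so treat them as assumptions:

```python
def vram_gb(num_params: float, bytes_per_param: int = 4) -> float:
    """GB of memory needed just to hold the weights (fp32 = 4 bytes/param).

    Activations and framework overhead come on top of this, so the real
    requirement during generation is noticeably higher.
    """
    return num_params * bytes_per_param / 1024**3

# Rough published figures (assumptions): Mini ~0.4B params, Mega ~2.6B.
print(round(vram_gb(0.4e9), 1))  # Mini weights alone fit in 8 GB...
print(round(vram_gb(2.6e9), 1))  # ...but Mega's ~9.7 GB already exceeds an RTX 3070's 8 GB
```

Even where the weights alone fit, the working memory for generating an image can push past the card's limit, which would explain the crashes.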


…and not unlike medieval painting ‘factories’, where the apprentices were said to have done much of the work in the name of the master artist.