AI-related meta thread

Use AI to cheat AI lol

Though the current method might only last until another, more advanced AI algorithm bypasses the pixel distortion cheat…

Edit:
Seems it can be bypassed with some Photoshop denoising… it might not be that useful if the artwork is less noisy…
https://twitter.com/toyxyz3/status/1636726724846288902
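For anyone curious what that kind of bypass looks like in practice, here is a minimal sketch of the sort of denoise pass being described - just a median filter plus a light blur, assuming Pillow is installed; the filenames are placeholders and the real tools are obviously more sophisticated.

```python
# Minimal sketch of a denoise pass of the kind mentioned above (NOT the
# actual tool from the tweet). A median filter followed by a light
# Gaussian blur tends to wash out small high-frequency perturbations
# ("pixel distortion") while keeping the overall artwork readable.
# Filenames are placeholders; Pillow (PIL) is assumed to be installed.
from PIL import Image, ImageFilter

img = Image.open("glazed_artwork.png").convert("RGB")

# Median filter removes isolated noisy pixels.
denoised = img.filter(ImageFilter.MedianFilter(size=3))

# A very light blur smooths whatever perturbation remains.
denoised = denoised.filter(ImageFilter.GaussianBlur(radius=0.8))

denoised.save("denoised_artwork.png")
```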

5 Likes

From what I gathered, this tool was in violation of the GPL license: they copied code, typos included, from an open-source project. The lack of understanding about licensing by the creators of a tool meant to protect against unlicensed usage of work is ironic. They also don’t have a Linux version; that’s not a big deal in itself, but it shows how narrow their understanding is. And they have kept this tool closed source - if they truly intend to help people, they should open source it.

10 Likes

In my opinion, a lot of us artists lose the forest for the trees with all these Glaze features, credits, copyrights, and chasing Schrödinger’s paychecks. In the same way, I don’t think comparing the skillfulness of artists to AI is a useful framework either.
Credits and consent are important, obviously, and I think that generally - even just as a guideline - authors should maintain authority over their work, but that won’t save you from AI. Even if it’s perfectly ethically sourced from free-range public domain art, it will still take your job, and no amount of copyright will stop that.

In the same way, your consent, even if legally recognized and protected, won’t matter much if most studios have you transferring the rights to your images to them as part of their contracts to hire you - you either consent to that or you don’t have a job.
And if you don’t work with studios, do you have the resources for legal battles to protect yourself? Even in non-AI matters, only a very small number of people who aren’t corporations have the resources to actually mount a legal challenge against people who steal their work, and by the looks of it, not many of those battles were even worth the money and time they burned for very little gain.

The thing about the Luddites wasn’t that they thought technology and progress were bad, but that they were cut off from the means to earn a living, and their dissatisfaction with that was suppressed not only by the employers but by the police, the military, and the government - they were literally murdered.
Even an uprising that radical didn’t help, and now we’re pacified with all this copyright talk and noise-watermark nonsense - options a lot of people will go for, but which are a lot less impactful than the ones that already didn’t work.

So I’d say, buckle up, it’s gonna get a lot worse.

3 Likes

Imo, with how believable deepfakes have become (both video and audio), how stuff like ChatGPT can churn out things that on the surface look factual and you would need an actual specialist to know whether it’s nonsense or not, and how these algorithms seemingly can choose between true and untrue information to give out whatever looks good to an untrained person… I think we’re entering a new era of disinformation.
It’s an absolute mess even now, but with AI you will eventually need so many resources to debunk stuff that it’s going to become incredibly exhausting.

I work as an animator, so I’m as affected as anybody else. However, I believe we should focus on fighting only the battles that can (and should) be won, adapt to change, and also acknowledge the positive aspects of AI :slight_smile:

Some ideas I think are very important and I’d like to share with you:

  1. AI is already here, and (for good or for bad) it’s not going away.
  2. AI is extremely powerful, and anyone who uses it has a great advantage over those who do not (not only in image generation but in all other areas).
  3. AI models and software should be open-source. If it’s not, those who control it will have too much leverage over the rest of humanity, and the gap between rich people (who can afford it) and poor people (who cannot) will increase.
  4. We should support open-source models. If we are going to legally challenge any AI company, we should start with those that are closed-source (such as OpenAI, Microsoft, Adobe, Google), not with those that release everything to the public (such as Stability AI).

As I said previously in this thread, I don’t wanna be an AI defender, I just want to help here. I’m pretty new to Krita, but I love the software and the people here and I want to return some of the love, so if anybody has any questions about using AI, just ask me and I’ll try to help.

Here are some thought-provoking videos about AI:

3 Likes

I think most people are not against AI as such, but against how AI is created using data gathered without consent. The data is curated too; it is not random data from all over the internet. For example, Stability and Midjourney give such nice painterly results if you type “cat” because their data is weighted towards good artwork found on the internet, not the random cat stick-figure drawings that are in abundance online. So they have purposely fed in a specific, curated set of artworks, or have given higher preference to certain artwork over the garbage images on the internet, in order to generate further artwork using a mathematical model. That non-consenting part is the major problem. And being open source doesn’t absolve any company from doing unethical things. Open source respects copyright; it doesn’t disregard it.

Adobe has released a new AI tool, and they specifically say that they trained the AI using only consented stock photos and public domain images. How true that is we can only guess, as it is not open source or transparent.

So the whole argument that AI is here and it’s a done deal is like normalising theft and unethical practices. Nobody stops anyone from making money by legal means, but to steal and make money is illegal, and we don’t say “well, the person is already rich now and the theft is done, so let’s embrace it and forget about the crime”.

9 Likes

I disagree. Most people are against AI out of fear (and I can understand that, I’m quite worried myself), the use of unethical data is just a valid reason that they put forward to try to prove their point. I don’t think any of the people who were against Stable Diffusion will embrace Adobe’s Firefly now, even if their datasets were ethical.

In Midjourney (closed source), yes, as you say, their data is highly curated. But the images used in the first models of Stable Diffusion are not manually curated; there are lots of poor-quality images in the dataset (they used automated “curation” to be able to process the large amount of data). Right now, they are working on more ethical models, though (they thought the use of scraped images was “fair use”, which is why they didn’t make the data “opt in” for models before).
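Just to make the “automated curation” point concrete, here is a rough, hypothetical sketch of what score-based filtering looks like - keeping only image-caption pairs whose precomputed quality score clears a threshold. The file name, column names, and threshold below are made-up placeholders, not the actual LAION / Stable Diffusion pipeline.

```python
# Hypothetical sketch of automated dataset "curation": keep only the
# image-caption pairs whose precomputed quality score passes a threshold.
# CSV name, column names, and threshold are illustrative placeholders only.
import csv

THRESHOLD = 0.28  # arbitrary cut-off for this example

with open("scraped_pairs.csv", newline="", encoding="utf-8") as src, \
     open("curated_pairs.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)  # expected columns: url, caption, score
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if float(row["score"]) >= THRESHOLD:
            writer.writerow(row)
```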

Of course, we should still fight against the unethical use of data for training AI models. But as things stand now, I think it’s more urgent to be vigilant about (and maybe fight) the companies that are going to cause more damage (and faster) to the art industry, like Adobe, and to think about how we should react. I think we should not focus only on suing and hating Stability AI (I kinda feel sorry for them XD), who released everything thinking that people would be happy to have some open-source models to play with, without having to get around the paywalls of Midjourney or DALL-E 2.

That’s just my opinion, though ^^

It is true that there is fear of AI taking jobs; any sane person would be worried about their job security. But that doesn’t negate the resistance on ethical grounds. There can be multiple reasons for opposing something. It is not either this or that; people can oppose things on a number of points at the same time.

Fair use in a commercial setting? They are basically using images to build a tool which will give them monetary benefit. Last I checked, fair use doesn’t cover a competing commercial product. Fair use requires you not to put out a product or service which can compete with the original author in the market. I am sure Stability AI has good lawyers and they knew these things beforehand; this is not a project done by students sitting in a dorm room. They funded the research, which clearly states that no commercial use should be made of the research data, and yet they use it for commercial gain. And given the delusional remarks by the person heading Stability AI about solving world hunger, poverty, some sort of utopia, etc., I do not have confidence in them. They seem more like a charlatan.

Why can’t the community do both things at the same time? Denounce the use of unethical data and simultaneously work to discourage exploitation by corporations. You surely know that Stability AI is a company too, right? It is not a non-profit. There is no difference between them and Adobe, who are both looking to profit from AI tech. A company can open source its stuff and still be evil.

Notes:

Article which mentions how Stability AI intends to make money (TechCrunch)

LAION-5B dataset page, which credits the sponsors (including Stability AI) and which advises using the dataset for research purposes only - yet Stability AI uses it for commercial purposes. - LAION-5B: A NEW ERA OF OPEN LARGE-SCALE MULTI-MODAL DATASETS | LAION

2 Likes

I mostly agree with your first and third answers, but not with the second one. “Fair use” can be used in commercial settings (as explained in this video: A.I. Versus The Law - YouTube ). It is still not clear whether training generative models with scraped images is legal or not (which also will vary significantly from one country to another).

I was thinking about making a new post here for sharing knowledge about using generative AI models in ways that I think are ethical. However, as this topic is quite complex, I am not certain that we will agree on the ethical implications, nor am I sure whether people would be interested in the discussion. What are your thoughts?

1 Like

It can be used, but not in the competing area of the original author; you can’t use an author’s images to create a product which can harm the original author’s revenue. An example of fair use is reviewing a movie: the reviewer can use clips from the movie and can sell their review to make money. But they can’t use the clips of the movie to make another movie and release it as their own.

The fourth factor measures the effect that the allegedly infringing use has had on the copyright owner’s ability to exploit his original work. The court not only investigates whether the defendant’s specific use of the work has significantly harmed the copyright owner’s market, but also whether such uses in general, if widespread, would harm the potential market of the original

As AI bros have already demonstrated, they can target an artist and try to damage their market. It can’t be said that this tool doesn’t harm the market of particular artists. If you want to know more, check out what happened with samdoesart.

Also, this depends on the country; in some countries there may not be a fair use provision at all.

Sure, you can make posts for things that can’t be covered here in this thread; just keep them in this same category.

3 Likes

I run an art blog focusing mainly on digital art, and I’m too scared to discuss any AI-generated art stuff, but it is good to see discussions about these things. I appreciate you (or anyone else) giving it a go in a rational way.

2 Likes

I saw it this morning.

Hopefully, they posted it yesterday and not today :sweat_smile:
https://s3.masto.ai/cache/media_attachments/files/110/124/693/779/578/332/original/b8dede8b58a058cc.mp4

Grum999

7 Likes

Read this petition and sign, if you agree,
This is a petition to ensure copyright and human rights for the creative arts are valued, and that as we progress into a world with this technology, we won’t be victims of AI. It would push AI in the proper direction the original developers of these programs should have taken, rather than allowing their greed to guide them.

4 Likes

I’ll just leave this here:

I was directed to this article by a friend who has been sad about me not posting art online anymore. I’m just reading it now; thought I’d share it.

4 Likes

Japan has stipulated that copyright law will not apply to material used for AI training… regardless of whether it comes from pirated websites or whether the operators are making a profit.

2 Likes

I came across this feature-length video on AI generated content today. Worth sharing, I think.

4 Likes

Having watched all the long AI video links posted here so far, I like the first half of this one the most because it talks about fair use in a lot more detail, weighing the factors of human authorship across categories ranging from artist to mere commissioner, which isn’t covered in the other videos. The second half is more high-level talk, which I think was all summarized in the other videos.

Sorry folks, just need to vent a little… :neutral_face:

Long story short, I was browsing the Amazon Kindle store this morning. I wanted to see if some Japanese books were available (they weren’t - hello, regional copyright restrictions). I ended up in the “Art” category browsing for “anime” stuff…

What I saw there were pages upon pages of AI “artbooks”. 1500 hot topless anime girls, part 1, part 2, on and on. Literally heaps of this garbage, page after page. I couldn’t even find normal books in this sewer stream. Why do they even allow this cr…p to be listed in the store? There should be a filter checkbox to remove this garbage from my results with one click. Some of these “works” even had a “bestseller” badge slapped on them. What in the actual f…? Is this the world we live in now? :frowning_face:

7 Likes

I pulled all my art from Teespring, as they are allowing crypto and AI. Redbubble just increased prices, so I pulled my art from there too. With AI and crypto (NFTs, etc.) things aren’t looking good for any of us :frowning:

4 Likes