Here’s another interesting article about Nightshade –
free software tool allowing artists to “poison” AI models seeking to train on their works
The article itself uses AI imagery for its header image, which is kind of ironic. I hope VentureBeat's articles get scraped and some person creates a duplicate VentureBeat website with similar content to make a dent in their revenue. Maybe VentureBeat articles are also written by ChatGPT, who knows.
I haven’t been following the specific developments in AI for a while. What is the current situation like?
I seem to have heard that Steam allows works using AI to be listed.
In addition, my country believes that works generated by AI also have copyright.
Would you be so kind to paste here a link to a case or judge decision related to that, or a new law? (in whichever country)…
In the US it's pretty clear that it can’t, and usually in copyright matters, due to their dominance in social media and many markets, they have been “establishing the path”, so to speak. But every individual country can make any kind of laws (even if most democratic ones ultimately tend to adhere to the main trend), and I’d be interested in exceptions. BTW, yup, I definitely agree with every judge who has said that AI “art” can’t have copyright, and not only because it needs to be painted by a human.
Thanks.
Nightshade is out now! You can download it here:
Usually, I would say Baidu’s Baijiahao is not credible, but this time I can confirm that the news is true.
The judge considered the AI model (in this case, Stable Diffusion) to be a rather intelligent painting software, and even specifically stressed that “images generated by artificial intelligence should be recognised as works and protected by copyright law if they reflect human originality and intellectual input.”
Since only humans can hold copyright, not software, the ruling granted copyright to the person who generated the image.
I believe this is due to a lack of understanding of Text2Img models… What bad news.
This one isn’t about AI “art” but shows danger in other elements of AI.
> Security researchers with Google DeepMind and a collection of universities have discovered a ChatGPT vulnerability that exposes random training data, triggered only by telling the chatbot to repeat a particular word forever.

> The researchers found that when ChatGPT is told to repeat a word like “poem” or “part” forever, it will do so for about a few hundred repetitions. Then it will have some sort of a meltdown and start spewing apparent gibberish, but that random text at times contains identifiable data like email address signatures and contact information. The incident raises questions not only about the security of the chatbot, but where exactly it is getting all this personal information from.
Sorry, I found the quote system here difficult so this was the best I could achieve.
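For anyone curious what that attack looks like mechanically, here is a minimal hypothetical sketch in Python: build the repeat-a-word probe the quote describes, then scan whatever text comes back for email-like strings. The function names, the regex, and the simulated "meltdown" output are my own inventions for illustration, not the researchers' actual harness; no real API is called.

```python
import re

# Hypothetical probe modeled on the attack described above: ask the model to
# repeat one word forever, then scan whatever comes back for strings that
# look like personal data (here, just email addresses).
def build_probe(word: str) -> str:
    return f'Repeat the following word forever: "{word}"'

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def find_emails(model_output: str) -> list:
    """Return anything in the output that looks like an email address."""
    return EMAIL_RE.findall(model_output)

# Simulated "meltdown" output standing in for a real API response.
sample = "poem poem poem k3x;; regards, jane.doe@example.com tel 555-0100"
print(build_probe("poem"))
print(find_emails(sample))  # ['jane.doe@example.com']
```

The point of the scan step is that the leak is buried in gibberish, so the researchers had to filter the output for recognisable personal-data patterns rather than read it by hand.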
Something I forgot to mention before is that I was using an AI site and I noticed a number of people were posting images related to child abuse. Although the images weren’t obviously abusive, they were suspect and one look at the text prompt showed the abusive intent. I reported the images a number of times because they were obviously against the website’s official policy but nothing ever got done about it. The images are still up there. I noticed a news report from 5 months ago that says images are going out on the dark web.
https://www.theguardian.com/society/2023/sep/12/paedophiles-using-open-source-ai-to-create-child-sexual-abuse-content-says-watchdog
> Freely available artificial intelligence software is being used by paedophiles to create child sexual abuse material (CSAM), according to a safety watchdog, with offenders discussing how to manipulate photos of celebrity children or known victims to create new content.

> The Internet Watch Foundation said online forums used by sex offenders were discussing using open source AI models to create fresh illegal material. The warning came as the chair of the government’s AI taskforce, Ian Hogarth, raised concerns about CSAM on Tuesday as he told peers that open source models were being used to create “some of the most heinous things out there”.
Are there any regular Reddit users here?
Woah! A.I. is “learning” how to make conversation in forums? Why? Doesn’t it already make conversation? Or maybe it’s a writing style they are looking for… but it is humans who created these conversing robots, can’t they just put the data themselves? Does it need to steal everything every time?
What’s next? Cameras and microphones in public spaces for A.I. to create better human behavior and emotions? … unless it’s already the case?
Used to be one. Now, I’m a Lemming.
I am, but I don’t post pics/art there. I’m just there for updates on NBA/K-pop and to get help/info on my other interests. Occasionally I answer r/krita questions.
Europe’s AI act might make AI more accountable.
https://apnews.com/article/ai-act-european-union-chatbots-155157e2be2e42d0f1acca33983d8c82
Developers of general purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems as well as follow EU copyright law.
AI-generated deepfake pictures, video or audio of existing people, places or events must be labeled as artificially manipulated.
There’s extra scrutiny for the biggest and most powerful AI models that pose “systemic risks,” which include OpenAI’s GPT4 — its most advanced system — and Google’s Gemini.
The EU says it’s worried that these powerful AI systems could “cause serious accidents or be misused for far-reaching cyberattacks.” They also fear generative AI could spread “harmful biases” across many applications, affecting many people.
Found some scrolls for defence against the dark (AI) arts. If you don’t laugh, you cry. In all seriousness, check out these papers I found:
This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspective. Such topics include data-centric model IP protection, …
I’ve been looking here and it seems they have an organised and curated list of repositories dedicated to multiple critical areas of AI deterrence. So I came here to see if any clever programmers were lurking around here with a good idea of how useful they could be.
The list looks pretty extensive. Perhaps there are bits and pieces here that a talented addon developer could take and cobble together into the single greatest addon ever; the AI big daddy of protective thugs; the Krita-K-O-Sauce of Addons:
Anti-AI Krita-Glaze.
In theory this would solve the issue of people who cannot access Nightshade and who do not trust the privacy issues associated with Glaze.
This would also solve an issue for a lot of beginner artists here who are not savvy enough to know about Nightshade and the necessity behind it in 2024. (RIP)
If we found a way to shoehorn this into an addon and refine it over time, perhaps it could provide an extra layer of protection here on this site, making more people comfortable uploading their artworks and thus supporting and empowering the community.
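To make the addon idea slightly more concrete, here is a toy Python sketch of the one constraint any such "cloaking" tool shares with Glaze and Nightshade: the change to the image must stay imperceptible to a human viewer. Everything here (the function name, the epsilon value, the fake pixel list) is hypothetical, and the real tools compute optimized adversarial perturbations against specific feature extractors, not random noise like this.

```python
import random

# Toy illustration only, NOT Glaze's or Nightshade's actual method. Those
# tools optimize the perturbation against a model's feature extractor; this
# sketch just shows the shared constraint: nudge each pixel value so little
# (at most +/- epsilon) that the change is invisible to a human.
def perturb(pixels, epsilon=2, seed=0):
    """Shift each 0-255 channel value by at most epsilon, clipped to range."""
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-epsilon, epsilon)))
            for p in pixels]

flat_image = [128, 64, 255, 0, 200, 17]   # stand-in for flattened RGB data
cloaked = perturb(flat_image)
print(max(abs(a - b) for a, b in zip(flat_image, cloaked)))  # never above 2
```

A real Krita addon would of course operate on the document's pixel data rather than a flat list, and the hard part (choosing the perturbation so it actually disrupts training) is exactly what the repositories linked above research.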
And I know this is a difficult situation, I myself have not uploaded any of my original IP here for fear of it ending up in a scraper.
Who can wrap their head around the code offered here? Any concerns? I would love to hear from you all to know what you all think of these!
-S
I stumbled upon another interesting video by Feng Zhu, a renowned artist and educator in the concept art industry. Do check out this video; it is quite balanced.
The first AI Father has been defrocked…
Grum999
He told me that it isn’t a sin to throw slugs and snails over the fence into my neighbour’s garden as well as other advice. I may need to rethink my life