Discussing AI and logging

The AI suggestion fills me with horror. Regarding logging, while it's great to read that this kind of thing would be very carefully reviewed (and would definitely need to be optional), I don't feel it's correct to say supporting libre software/GPL/freedom/choice is a political decision, or something as petty-sounding as wanting to 'hide habits and behaviour from software programmers'. Nobody has to auto-trust anyone, including online, but most of all it's about the things I've just listed: respecting libre software, the GPL, freedom, and the choice not to feed data up to online venues, and ensuring Krita has the options to not connect to the internet (opt in, not out).

I'm not political at all; it's very much about respect and morally supporting something of real value, which ensures everybody has choice and no pressure about data/forced internet connections etc. Without opt-ins and the choice to not connect online or upload data, my vote would have to be to fully stop using Krita.

But, again, it's really great to read that there's awareness that this kind of thing needs very careful and thorough communication. It's already really good that there's the option to turn off logging (for documents, as you've explained, which I wasn't fully clear on), and G'MIC contains a setting where connecting to the internet can be turned off, so hopefully things can continue in this good direction.

Hope this feedback is useful.

@anon87509114 You can always block applications from internet access by using a firewall.

@AhabGreybeard That's not a direction I'd take; trusting a software program shouldn't involve me having to do such a thing, and I wouldn't want to use software where that's necessary, as it's not a good environment to paint in, personally. Surely people just want to enjoy being creative, instead of having concerns about internet connections, data, firewalls/technical precautions and complexity, a program going in a direction that doesn't feel right, etc.

1 Like

I agree with you in principle but I’d take a pragmatic position: If I know it’s reporting usage statistics back, I’d block it.
Ideally, I’d have the option to opt out in the settings.
Many browsers do that for bug reporting and usage statistics and I select ‘disable’ in the settings (then I trust it not to).

1 Like

That's because AI is a buzzword. A lot of the time it's used by people (often marketing guys in businesses, but very much not only them) who have no idea what they're talking about and can't use more specific, more useful/meaningful terms. In programming, AI is used for so many completely different things that it's about as useful as talking about "art", which would include making very useful shovels (the art of crafting useful shovels, don't tell me it's not art!). And I was on a computer science course with Artificial Intelligence as my specialization :smiley: In game dev, AI often refers to pathfinding and scripted NPCs, which are often much less complex than algorithms already used in Krita. Even in machine learning, the base of it is just pure math. If I remember correctly, apparently the first ideas behind machine learning were used in some battle around 200 years ago, because it's just math and someone had a good idea how to use it. The scary thing about Big Data is that companies are allowed to gather and process huge amounts of private data, not the methods they use to get something useful (for them) out of it.


I meant that adding such logging would be a "political decision". And what I meant is that it's not a matter of whether we can do it (time-wise etc.), but whether we want to do it (and the current answer is "nah"). (I didn't know what other word to use; since I'm not a native English speaker, I might've mistaken my own language's connotations of "polityczny" with the English "political"; maybe "ideological" would be more correct?) I just explained why it's not obvious that we should implement it just because everyone else has it implemented. I didn't mean for the reasoning to sound "petty". In fact, I dislike the current exploitation of users' bias towards convenience and lack of understanding (or even just lack of time to research more) of the possible dangers (I really want someone to tell me what exactly the "legitimate reasons" for "connecting devices" are when I check them in GDPR-forced cookie settings).

Also, we are bound by KDE's software privacy policy (KDE Software Privacy Policy - KDE Community), so don't worry: no information would ever be sent to the Krita team without an opt-in.

2 Likes

Quick note: Something like this happened with Audacity recently and there was a big (and understandable) fuss about it.

2 Likes

@AhabGreybeard Yes, I do the same: blocking things, opting out, disabling things. I've done it for years, and I'm glad it brings peace of mind and removes distractions; plus I feel it's important to support respect for people's privacy and choices. It's not as if people aren't already giving, via donations, answering questions, posting/videos about Krita, contributing resources, etc., so I always feel that, if feedback is wanted, it's better to do it e.g. in a pinned forum thread or something similar, where everyone is able to join in too. Those who want to help will, and those who didn't want logging/data uploads/opt-outs etc. will still be around and able to join in and help too.

2 Likes

As a software developer I can confirm this. In my 15 years or so as a dev I never came across an AI in the wild that actually was one. Most are just clever heuristics and high level math. Artificial, sometimes. Intelligence, no. It really is an umbrella term for a lot of technologies nowadays.

But we’re getting off topic again.

3 Likes

@Tiar Thanks for responding. With respect, I understand what AI is, but I am also very concerned about it. This isn't 'just math', but something that now takes all manner of data from people, and increasingly uses it to make decisions that impact people's lives. Machine learning, by its very nature, needs to be fed data constantly in order to learn, and the increased push to get everyone's data everywhere is central to that.

Personally, I can't compare art and AI … AI is soulless and mathematical, trawling the internet to suck up everyone else's artistic time and talent, whereas art is traditionally natural human expression from within each person, which can help e.g. mental/emotional health and wellbeing, and be genuine fun to do and to share. 'Fun' and 'healthy' are words I could never apply to AI, ML, big tech, etc.

You reference gaming … I don’t use Steam due to the huge amounts of data that would be constantly taken while playing. NPCs coded into a game, where there’s no data mining; all good. :slight_smile:

Again, with respect, battle strategy 200 years ago emerged from human experience, abilities and insights, not necessarily mathematics, except for counting numbers on both sides probably, lol, and certainly not from machines and AI.

I do agree that big data gathers and processes vast amounts of private data, more by the week. And, yes, 'everyone else has it' isn't a valid reason if you take it right down to the bottom line of how this kind of thing isn't healthy for people and has limitless potential for harm. At the same time, I don't believe 'everyone else has it' … those most active in the online world, e.g. big tech, push data-mining/ML/AI, and decision-making from all of that, very aggressively, and more and more obviously, including pushing the idea that 'everyone else has it' so people think they're missing out and should get involved. I see a very significant and growing number of people increasingly concerned about this and making decisions to drop e.g. social media sites etc.

Yes, exploitation and convenience have become far too aggressively marketed, and pushing everyone at a fast pace blocks understanding … stepping out of it all, and taking time to understand and gain insight is a very useful quality. Otherwise all privacy and respect end up gone, and, if that space no longer existed, so many other things that are important to human beings, not machines, would be gone also.

Thank you very much for assurances about always opt-in. That’s wonderful.

Yes, many have left Audacity and use forks instead as standard now.

May I request that the title be changed, as it makes it look like I started this subject and was initiating criticisms … I was responding to the OP and Tiar. Can I suggest 'Discussing AI and logging', particularly as Tiar was referencing feedback? Thanks.

@anon87509114 You should be able to edit the title using the small ‘pencil’ icon next to it.

1 Like

Thanks @AhabGreybeard I've clicked everything in sight, lol, from within the top copy-pasted post, and can't see where to change it. I did see the recorded history of the changes I'd made to the post though, ironically. :rofl:
EDIT: oops, found it … of course, the easier option right at the top!

I moved the comments to a new topic so we can talk without disturbing others in the thread with ideas for Krita 6.0.

Also, sorry that it’s so long :slight_smile: I’m passionate about computer science and I tried to be thorough.

Thing is, machine learning doesn't require machines; you could do it on paper, it would just take a long time. Also, 200 years ago there were definitely battle machines; haven't you seen a trebuchet? :wink: (Joking. Though, as interesting trivia, in my language those are usually called "machiny", while modern-age machines are called "maszyny"; most probably the word was borrowed from some other language multiple times.)

Returning to serious topics, with all due respect, I don't think you understood what I meant. Let's first focus on the similarities between our points of view, so we don't have to fight over those: I know that there are corporations out there gathering and processing users' data that they shouldn't, and making decisions based on it that should often be made by humans instead. Google is the most famous example, I guess, both with AdSense and YouTube's "mighty algorithm". Also Facebook, with its opaque sorting algorithm and promoting different content/posts to different people. I agree that those kinds of practices are unethical and often dangerous (echo chambers, skewing voting towards a specific party, automatic decision-making that might not take special situations into account or handle them correctly, inherent bias because models are often based on data from people, among whom there are plenty of racists, etc.).

However, you seem to mix up AI with other things that don’t actually require an AI. And some of what you wrote is technically wrong as well.

For example, this statement of yours - "Machine learning, by its very nature, needs to be fed data constantly in order to learn" - is simply incorrect. You can just have a specific set of pre-gathered data and not gather any more. If you're satisfied with the results, you can stop training the network. Usually, in more non-Google settings, people don't train the network indefinitely, because it's too expensive for too little benefit. If it works, then it works. Usually you just need some amount of data to get really good results, and the benefit of having more data gets smaller and smaller the more data you have, so the reasonable thing to do is to just cut it off when you reach a level that seems good enough.
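To make the "cut it off" point concrete, here's a minimal sketch in plain Python (numpy only, with made-up toy data, nothing from any real product): it trains on a fixed, pre-gathered dataset and freezes the model as soon as the validation loss stops improving. No new data is ever collected, during or after training:

```python
# A toy illustration (made-up data): train on a fixed dataset, stop when
# "good enough", and never gather anything more.
import numpy as np

rng = np.random.default_rng(0)

# A fixed, pre-gathered dataset: 200 samples, 5 features, binary labels.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)

# Split once; nothing is ever added to this data afterwards.
X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

def loss(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # logistic prediction
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w = np.zeros(5)
best, patience = np.inf, 0
for epoch in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w)))
    w -= 0.1 * (X_train.T @ (p - y_train)) / len(y_train)  # gradient step
    val = loss(w, X_val, y_val)
    if val < best - 1e-5:
        best, patience = val, 0
    else:
        patience += 1
    if patience >= 10:  # no improvement for a while: cut it off here
        print(f"stopped at epoch {epoch}, validation loss {best:.4f}")
        break

# From here on, `w` is a frozen model: it never needs or receives new data.
```

Real setups use proper frameworks and far bigger datasets, of course, but the structure is the same: a closed dataset, a stopping criterion, and a frozen model at the end.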

I'm not even sure what would be an example of a constantly learning AI outside of processing user data Google-style. I would consider self-driving cars, since they could improve and they get plenty of data all the time, but then, with all the legal issues, I would assume the manufacturer would just prefer to ship the system frozen in a state that they thoroughly tested, instead of allowing any instability with a constantly learning system. I can't think of anything else; do you have an example?

And with the statement about AI taking all manner of data from people, you mix AI up with something else. AI doesn't require users' data: it just requires data for the topic it works on. So, if an AI works on sketches, then it requires sketches, not the user's behaviour. It's the company's decision what to use AI for. If it uses it for unethical reasons, then it gathers data for unethical reasons and then feeds the AI that data. If it uses AI for ethical reasons, then it gathers data for ethical reasons, and there is nothing wrong with that. For example, if I made an AI to convert sketches into lineart and I had permission from the authors of the sketches and linearts, there would be no ethical questions about it. Just as the algorithm used for the Colorize Mask is not unethical (and that is, I think, a straightforward, old-style algorithm, though pretty advanced), a program that converts sketches into lineart wouldn't be unethical, because why would it be?

Again, AI doesn't contain in its definition anything about trawling the internet. And you can trawl the internet and suck up everyone's artistic time and talent without AI: I'm sure you know about bots, which are often just simple programs (no machine learning involved), and there are actual human beings stealing artworks left and right, too.

I removed “big tech” because I have plenty of criticism about big tech as well, but AI and ML are not just in big tech.
You mentioned "healthy"; are you aware of the use of deep learning in medicine? (Just in case: medicine is not just in the USA, and not just greedy private corporations.) Especially the kinds that are really good at image categorization; I'm sure you know that doctors often look at images in their practice to see whether a bone is broken, or what's happening in the brain, or whether there is a tumor somewhere. When you have a great algorithm that can differentiate between an image with a tiny tumor and an image without one, why not use it? A doctor can only look at some number of images in their life, and human memory is not exactly reliable. Afaik it's still in the experimental phase, and of course the goal is to have it as an advisor, with a human reviewing the result. Kind of like a "second opinion".
"Fun" is pretty subjective. I had plenty of fun implementing both non-deep-learning algorithms like random forests, evolutionary algorithms and ant colony algorithms (all of those are machine learning techniques, but involve no deep learning or even neural networks), and neural networks (both deep and shallow). For a user, I guess it could be either fun or boring, depending on what it is used for. In games, it could be fun. In a program like Krita, it would just be a useful tool. In Google's data-gathering scheme, it's a force for evil. There is a whole spectrum.
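As a small aside, for anyone curious what "machine learning without any neural network" even looks like: here's a minimal sketch, assuming scikit-learn is installed, using its bundled iris flower dataset (no user data anywhere in sight):

```python
# A random forest: a machine-learning technique with no neural network
# and no deep learning involved, trained once on a fixed bundled dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # trained once; never sees any new data
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```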

My point is: you seem to think that AI means exploitation, but exploitation doesn't require AI, and AI doesn't inherently mean exploitation. In fact, there are plenty of areas where AI can be used effectively without exploiting anyone - I mentioned medicine already, but there are also self-driving cars, software to correct your pronunciation in a foreign language, text-to-speech (often very important for blind and autistic people, and people with other disabilities as well), speech-to-text, virtual assistants (they don't need to exploit people to be useful), and software to make cars safer (detecting a quickly approaching obstacle, systems to avoid sliding off the road in winter or on a wet road, etc.). I would even say that finding related videos (or posts etc.) to suggest doesn't necessarily involve any exploitation, if the data gathered were used only for finding new videos and not for selecting related advertisements, and maybe even just saved on the user's side and never sent to the theoretical ethical video platform company. Though when it works more like Mastodon, you can be more sure of that, frankly (Mastodon's timeline is chronological, and the platform is open source). (There is PeerTube, but I'm not sure how it works, so I can't say; I would guess it's much more ethical than YouTube's platform, as long as you don't choose an instance with questionable ToS, filled with even more questionable videos.)

And there is plenty of exploitation of users' data without AI: for example, the whole "trawling over the internet" doesn't require AI, and matching AdSense cookies and connecting your visits on various websites doesn't require AI. Matching who's likely to have disabilities, or be female or male, or be queer, or anything else, can be done without AI, though with AI it works better (but AI isn't infallible either; it's sometimes just as dumb as writing the conditions manually, like "if they look for female products, they are female; if they look up games, they are male; if they do both, they are aliens, and let's only show them ads about spaceship fuel").
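Just to show how little technology that kind of profiling actually needs, here is the joke above written out as a toy sketch (all the category names are made up, obviously); there is no machine learning anywhere in it, just brittle hand-written conditions:

```python
# Toy, ML-free "profiling": exactly the kind of manual conditions joked
# about above. No learning happens anywhere; it is just hard-coded rules.
def guess_segment(viewed_categories: set[str]) -> str:
    looked_at_female_products = "female products" in viewed_categories
    looked_at_games = "games" in viewed_categories
    if looked_at_female_products and looked_at_games:
        return "alien: show ads about spaceship fuel"
    if looked_at_female_products:
        return "female"
    if looked_at_games:
        return "male"
    return "unknown"

print(guess_segment({"games"}))                     # -> male
print(guess_segment({"female products", "games"}))  # -> alien
```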


GOG.com allows you to buy quite a lot of games, all without DRM: no need to access the internet while playing, and you can download a standalone offline installer that doesn't even require their "Galaxy" client to be installed in order to install the game (just like in the old times™). You still need to create an account, so they do have some data, but you can visit the website just to buy and download offline game installers, so it's still better than Steam. And they have Linux games too.

Thanks @Tiar for responding, and I hope we can find common ground and can power on with some good ideas moving forward.

I should clarify first that I'm referencing what most people understand ML to be today, and I started with the focus of how that relates to privacy and digital art software, after responding in the other thread to the AI suggestion, but mostly to points you had made. I also included some 'bigger picture' that was necessary in responding, regarding those who value privacy, including that politics plays no part for me personally. I don't want to discuss political matters, as the points really aren't about that. To illustrate the ML I'm referencing, this happened to be at the top of search results: What is Machine Learning? - United Arab Emirates | IBM. I'm not keen on IBM, but the description is written with surprising clarity for such a complex area. A couple of sentences from it: "Through the use of statistical methods, algorithms are trained to make classifications or predictions, uncovering key insights within data mining projects. These insights subsequently drive decision making within applications and businesses, ideally impacting key growth metrics."

An example regarding AI: Microsoft's AI program Copilot was found last year to be exploiting FOSS code without notification, permission, or respect for licensing, and they have just made Copilot commercial and for-profit. Microsoft's total lack of response to questions was understood by the SFC to show that MS feels anyone who disagrees with them doesn't deserve a response. The SFC explains the overview very well: Give Up GitHub: The Time Has Come! - Conservancy Blog - Software Freedom Conservancy

I'm basically wired to be against anything AI, so I'm finding I'm at the opposite end of the points you're making, tbh, but I'll attempt to group a lot of responses into this paragraph and hope it isn't confusing. It's more just getting this paragraph over and done with, as I see we don't agree about AI/ML …
I have a problem with ML having any data at all.
I'm also against self-driving cars; subscriptions to unlock normal functions are part and parcel, as are data harvesting and too much control over people's right to own and use their transport. I also don't observe that these cars are safe.
I can’t agree that AI is ethical and therefore okay to harvest data. And, yes, if you have permission from a specific person to use only set data they volunteer, without any pressure, that’s very different, but I’m referencing the general data harvesting that’s worsening. I don’t trust who created AI/ML (big tech), and I don’t trust what it does. People shouldn’t be pressured to provide data that satisfies what someone else wants; trust cannot exist when there’s that pressure and when respect is absent.
Art theft is much worse due to AI using any art it finds online all across the world, and, yes, artists are having their work stolen by humans too, e.g. for NFTs now as well. I've seen a few threads in recent weeks where known artists had a terrible time trying to get any justice, as the procedure OpenSea demands involves giving all personal data, which the thief then sees! People can't get redress when refusing this process, which is a really bad situation and puts people off putting art online (as well as being put off by AI/ML) … not that offline/community art interactions and groups expanding is a bad thing; far from it.
Virtual assistants remove the human interaction, and are often a nightmare to deal with, so much so that people often don't get to communicate important information or receive assistance, and are too put off to continue with it.
Language learning and assistance for disabilities and special needs existed before AI/ML and worked better, especially due to more human interaction alongside. Over the last 3-4 weeks I’ve seen threads about software for those with disabilities/special needs being worse now. It’s hard to read these kinds of things.

I can't go too much into the medical side of AI/ML, for personal reasons. All I can say is that the medical system in my country personally caused me disability over worsening years of malpractice, and countless people are getting worse medical care nowadays, across the world; AI doesn't seem to be helping, or working respectfully with, the general public. Medicine has got colder, yet a very important part of medical care is human contact and understanding. Simplicity is also important, so someone can understand fully what is happening and be confident regarding their treatment. The more technical and remote things get, the less that is possible, so, rather than go along with this attempt to remove both fully informed, clear choice and the feeling of positive control people need, people have no option but to step away and do whatever self-care they can. I was in the caring profession, and also have a lot of experience from dealing with ill health myself; the last thing I would trust with my health would be an AI system programmed by people who unfortunately aren't interested in patients having important human and caring contact, or privacy and respect.

Just today I saw more people discussing using older computers and software, particularly about running Gemini (a lightweight protocol that forms its own 'web', away from the main web and any tracking, etc.) … this is getting more and more common, as is leaving big tech social media sites, so the irony is that the more all the tech that people don't like tries to insert itself into people's lives, the fewer people will take part. They'll all be elsewhere, as, when not listened to or respected, people vote with their feet.

Thanks for referencing GOG; I was a member of it for about a year and bought a lot of standalone games (many still to play!) before, unfortunately, Facebook tracking was added to the site, along with some other changes I can't quite recall, but along similar lines. A lot of people left then, also due to the Galaxy installer not being fixed for Linux users, but, yes, compared to Steam it's definitely a better option, and DRM-free is always very appreciated.

In concluding, I can see we don't agree at all about AI/ML. I'm not sure if some of my points may even be disconcerting, but, if so, that was definitely not my aim. It's almost a physical feeling, how impossible it is for me to say anything positive about AI; I feel that strongly, and I just can't find common ground about it. Maybe you feel similarly regarding what I'm saying. I CAN see a place beyond disagreeing 100% about AI/ML, though, and that would be to accept that we disagree and instead focus on enjoying creativity and what's fairest to everyone. I can't actually think of anything more to say at this point anyway, lol. In the other thread, you'd said there'd need to be proper discussion about anything further down the road, not right now. I do seem to have ended up providing some feedback that could be useful later, so I hope that contributes some balance to things down the road.

(Edited to include the Copilot example and the SFC link explaining action about that.)

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.