This is the one problem I predicted about A.I.
People are not making the distinction between A.I.-generated art and people using A.I. tools to create art.
Photoshop already has several plugins and filters that use A.I. to help create an effect or complete a task. These are tools to help artists bypass tedious work that does not require any artistic input. I don't consider making a difficult selection an art form. It's just a task I need to complete in order to make art. And if there is a way to make the selection faster, I want that tool.
But the titles for Secret Invasion were created in a limbo area. And we need to decide whether that process belongs in the "A.I. is bad" discussion.
Marvel hired a VFX company to create titles that did not look like a human created them. This fits the theme of the show. So, artists trained an A.I. with relevant images of the characters and elements of the show, and then used it to output weird, non-human imagery. The A.I. did not create a finished product. Human artists had to revise, tweak, animate, and dramatically alter its output.
They used A.I. as a tool rather than a replacement for human artists.
This kind of process still requires a ton of labor. It did not rob artists of a job.
Corridor Digital did something similar to create an original anime short, and it took them months to complete. The sketchy part, the thing that made their short unethical, was that they trained the A.I. on an existing movie they did not create themselves. If they had paid artists to create a style to train the A.I., I think that would have been more ethical.
While the VFX company for Secret Invasion did use original assets, I have not been able to find out whether they used any additional images to train the A.I.
Adobe trains its Firefly A.I. only on stock photos it owns the rights to. So it is entirely possible to build an image generator on assets that are licensed rather than stolen.
So I guess my question is... if they trained the A.I. on images they owned, used it as a tool, and the process still required many hours of human labor and did not replace artists, is that still unethical?
That is not for me alone to decide. And I am not entirely sure I personally have a definitive answer to that yet.
As a disabled artist, I might see this a little differently. If I had a tool that could keep me from having to do tedious tasks manually, it would save me a great deal of energy. I could focus more on the art and the result rather than on meticulously cutting things out, doing hours of cloning, or manually painting in the top or sides of an image so it fits a desired aspect ratio.
Or what if I have a photo of a person that stops at the knees, but I need to see the feet for the desired result? Right now, I would have to find legs from another image and blend them in.
Or what if I don't have a clean background plate and I need to remove a large element from a photo? I might have to drag in an entire building or brick wall and then match the color, perspective, and levels.
These things can take hours. But with the new A.I.-powered generative fill, they take only seconds. For me, time is energy, and A.I. has the potential to let me create more art despite my disability.
I have a hard time seeing it as pure evil. I think it is a tool. And I think history has shown that tools can be used for good and bad. I understand that people are really worried about art theft and losing jobs. But I think the solution might be to regulate the tool rather than destroy it altogether. And as I said in the past, I don't think it can be destroyed. I think it is here and we have to deal with it existing.
I will say, Marvel/Disney using an A.I. tool at this moment, when we have hardly figured anything out... that definitely sucks. It was just awful timing, and it did not help people's fears that big studios are going to replace artists with A.I. content.