While I am quite excited about the Walton Goggins-infused Amazon Fallout series, the show debuted some promo art ahead of official stills or footage, and… it appears to be AI generated.

  • Ghostalmedia@lemmy.world · +151 / -1 · 1 year ago

    My guess is that AI’s first big victim in graphic design will be stock art. Previously, crap like that background asset would just be stock purchased from Getty or Adobe Stock. Now it can be generated.

    I’m already starting to use it instead of paying for bullshit licenses.

    • iforgotmyinstance@lemmy.world · +48 / -15 · edited · 1 year ago

      I’ve been using AI for school and work, as God intended: give it the raw material, have it do the grunt organizational work, and then proofread to correct anything.

      There is very little to say that hasn’t been said. For an example of our limitations as humans, there’s only 50ish unique plot lines in the English language. To expect each person to be completely original is asinine.

      It’s a tool, one of many in my toolbox. People who are just flat against any and all AI or LLMs are behind the curve.

      • antonim@lemmy.dbzer0.com · +20 / -2 · 1 year ago

        For an example of our limitations as humans, there’s only 50ish unique plot lines in the English language.

        How would the unique plot lines be determined by the language they’re told in? Why would the number of plot lines be based on human cognitive capabilities? None of this makes sense.

        Either way, “unique plotline” doesn’t mean anything, from the perspective of literary or narrative studies. There’s no universal, objective way to dissect narratives, and they cannot be boiled down to a distinct number of basic models. There have been attempts to get to the most fundamental narrative model (Greimas, Campbell), but they’re far from widely accepted.

        People who are just flat against any and all AI or LLMs are behind the curve.

        Art is, by itself, not something that has “the curve”. If you’re doing something with very practical goals and need hyperproduction, sure, but art is not necessarily made or consumed with such a logic.

      • soulfirethewolf@lemdro.id · +11 / -7 · 1 year ago

        Pretty much.

        People very frequently complain about AI taking the jobs of artists. But if the money was never actually going to be put on the table for artists to claim, I really don’t think that was going to help much.

        That doesn’t mean I hate artists or what they do, absolutely not. It’s just that artists are people, and people are limited in how much they can do at any one time.

        For the past couple of months I’ve been waiting on multiple artists to finish up their commission queues. One of them I’m worried I’ll have to turn away because of a variety of life changes that have led to me losing my job and having reduced income.

        As of right now, the cost of generating a picture with a tool like Stable Diffusion or DALL-E is pretty low, the former even being free if you have the right hardware. And these systems are almost always available, and capable of producing results in a matter of seconds.
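
        Just to give a sense of what “free if you have the right hardware” means in practice, a minimal local-generation sketch with Hugging Face’s diffusers library looks something like this (the checkpoint, prompt, and settings below are placeholders, not anything from the promo art):

            # Minimal local text-to-image sketch using the diffusers library.
            # Assumes a CUDA GPU with enough VRAM; checkpoint and prompt are placeholders.
            import torch
            from diffusers import StableDiffusionPipeline

            pipe = StableDiffusionPipeline.from_pretrained(
                "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint works
                torch_dtype=torch.float16,
            ).to("cuda")

            image = pipe(
                "retro-futuristic wasteland landscape, concept art",
                num_inference_steps=30,
                guidance_scale=7.5,
            ).images[0]

            image.save("output.png")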

        Of course, that doesn’t change the fact that these tools are only good at painting the bigger picture. They have a tendency to choke on the smaller details. And I would personally rather wait for an actual person to be available to work on something original that’s also capable of filling a niche that AI models have yet to be trained on.

        • niisyth@lemmy.ca · +22 / -6 · 1 year ago

          This entirely disregards the fact that the training of these models was done on human artists’ work without consent or remuneration. As it is, it is not “AI”, it is just a glorified plagiarism machine. Not to say it isn’t impressive, but it has already stolen work done by artists and continues to steal upcoming work by mashing together older works.

          There are ways to do it ethically, by training on artwork used with permission, kind of like how Adobe is doing it, but that isn’t going to have as wide a reach as the other free ones.

          • FaceDeer@kbin.social · +8 / -11 · 1 year ago

            but it has already stolen work done by artists and continues to steal upcoming work by mashing together older works.

            You keep using that word “stolen”, I do not think it means what you think it means.

            Also, AIs do not “mash together” works from their training sets. This is a very common and very incorrect conception of how they work. They are not collage generators or copy-and-paste machines. They learn concepts from the images they train on, they don’t actually remember fragments of those images to later regurgitate in some sort of patched-together Frankenstein’s Monster.
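
            If it helps, here’s a deliberately crude toy analogy (a Gaussian mixture fit with scikit-learn, nothing like how a real diffusion model works) showing the difference between learning a distribution and storing training examples:

                # Toy analogy: a generative model learns a distribution rather than
                # storing its training points. Its samples land in plausible regions
                # but are not copies of any particular training example.
                import numpy as np
                from sklearn.mixture import GaussianMixture

                rng = np.random.default_rng(0)
                # "Training set": points drawn from two clusters.
                train = np.vstack([
                    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2)),
                    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(500, 2)),
                ])

                model = GaussianMixture(n_components=2, random_state=0).fit(train)
                samples, _ = model.sample(5)

                # Distance from each generated sample to its nearest training point:
                # small (same clusters) but not zero (not memorized copies).
                diffs = train[None, :, :] - samples[:, None, :]
                dists = np.linalg.norm(diffs, axis=-1).min(axis=1)
                print(samples.round(3))
                print("nearest-training-point distances:", dists.round(3))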

            • Send_me_nude_girls@feddit.de · +4 / -3 · 1 year ago

              You’re correct, but it’s still too early and most people haven’t spent enough time with AI to fully understand. Maybe they never will.

              • FaceDeer@kbin.social · +5 / -3 · 1 year ago

                Like the classic quote says, it is difficult to get a man to understand something when his salary depends upon his not understanding it.

            • Pandemanium@lemm.ee · +1 / -1 · 1 year ago

              I just asked Wombo Dream to make the Mona Lisa and it did. Sure, you can tell it’s not exactly the real thing, but I don’t know how you can say it didn’t copy any of the actual Mona Lisa original.

              • FaceDeer@kbin.social · +3 · edited · 1 year ago

                I considered including mention of overfitting in my earlier comment, but since it’s such an edge case I felt it would just be an irrelevant digression.

                When a particular image has a great many duplicates in the training set - hundreds or even thousands of copies are necessary - then you get the phenomenon of overfitting. In that case you do get this sort of “memorization” of a particular image, because during training you are hitting the neural net over and over with the exact same inputs and really drilling that image into it. This is universally considered undesirable, because there’s no point to it - why spend thousands of dollars to do something that a copy/paste command could do so much better and more easily? So when image generators are trained, the training data goes through a “de-duplication” step intended to prevent this sort of thing from happening. Images like the Mona Lisa are so incredibly common that they still slip through the cracks, though.
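
                As a rough illustration of that de-duplication idea (just a sketch with a made-up folder and threshold, not the pipeline any real training set used), near-duplicate filtering can be as simple as comparing perceptual hashes:

                    # Toy near-duplicate filter using perceptual hashes.
                    # Folder name and bit threshold are arbitrary, for illustration only.
                    from pathlib import Path

                    import imagehash  # pip install imagehash
                    from PIL import Image

                    paths = sorted(Path("training_images").glob("*.jpg"))
                    kept_hashes = []
                    kept_paths = []
                    for path in paths:
                        h = imagehash.phash(Image.open(path))
                        # Hamming distance of 4 bits or less: treat as a duplicate, skip it.
                        if any(h - prev <= 4 for prev in kept_hashes):
                            continue
                        kept_hashes.append(h)
                        kept_paths.append(path)

                    print(f"kept {len(kept_paths)} of {len(paths)} images")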

                There’s a paper from some months back that commonly comes up when people want to go “aha, generative AI copies its training data!” But in reality this paper shows just how difficult it is to arrange for overfitting to happen. The researchers used an older version of Stable Diffusion whose training set was not well curated and is no longer used due to its poor quality, and even then it took them hundreds of millions of attempts to find just a handful of images from the training set that they could dredge back out of it in recognizable form.

              • emeralddawn45@discuss.tchncs.de · +2 / -2 · 1 year ago

                People have also copied art for as long as art has existed. You can buy a copy of the Mona Lisa in the gift shop, or print your own. That’s why the market for art has always been hyperfocused on ‘originals’. But rarely are the artists the ones getting rich off their art, especially now. I hate capitalism as much as anyone, but if your motivation for making art is money, you’re in the wrong business and your art probably isn’t that good anyway.

    • coffeebiscuit@lemmy.world · +25 / -5 · 1 year ago

      Graphic designers aren’t the first. Automation has been ending jobs for decades. AI is just a form of automation.

    • jimmux@programming.dev · +1 / -1 · 1 year ago

      They will be generating it themselves soon enough. I contributed some stock photos in the past. They recently sent me info about their new contribution pipeline, for content that may not pass the usual quality threshold, but will help train the models. If they do it right, who knows, maybe they can get better results worth paying for.

    • mosiacmango@lemm.ee · +9 / -12 · edited · 1 year ago

      The fun part here, though, is that they don’t have copyright on that art. If any of the “stock AI footage” becomes iconic, it’s public domain.

      Dicey spot for a studio to be in, but it does save some bucks, so they are plowing ahead.

      • FaceDeer@kbin.social · +37 / -4 · 1 year ago

        You should consult with a lawyer first. The amount of misinformation circulating on the Internet about how AI art is all public domain is enormous. The court case (Thaler v. Perlmutter) that made the rounds recently, for example, does not say what most people seemed to be eagerly assuming it said.

        • affiliate@lemmy.world · +6 · 1 year ago

          I’m also someone who has been misinformed about the AI art copyright status. Could you explain how it actually works, or link to a resource that does? I tried searching around for a bit but couldn’t find a clear consensus on it.

          • BetaDoggo_@lemmy.world · +10 / -1 · 1 year ago

            There isn’t anything conclusive yet because there’s still very little legal precedent. There was a case where someone made a comic which was essentially machine-generated art with text over it, and there was one where the creation was completely unguided. In both cases they were denied protection because not enough human input was used.

            There has yet to be a case where there was a greater amount of human input, such as using a method like ControlNet to guide composition.

            I think it will eventually come down to proving that a work involved significant human guidance rather than just luck.

            • FaceDeer@kbin.social · +7 · edited · 1 year ago

              I should note that in the case of the comic, the text would still be fully copyrighted because it was written in a conventional way. So someone couldn’t simply republish the comic; the comic as a whole would still be copyrighted. And as I recall, that wasn’t decided by a court but rather just by the Copyright Office, which is the lowest rung on the deciding-what-the-law-actually-means ladder.

              In the specific case of Thaler v. Perlmutter, Thaler was making some strange claims that were pretty obviously wrong IMO and the judge was basically forced to rule that the art was public domain because every other option was kind of nonsensical.

              Basically, Thaler was arguing that the AI itself should hold the copyright to the art that it had generated, and that since he was the one who had run the AI he should be assigned the copyright as a work-for-hire (like if you employ an artist in your company to make art for you, the company is assigned the copyright). Thaler was insistent that he himself didn’t “make” the art.

              So the judge quite reasonably went “AIs are not legal persons like humans or companies are, and in order to hold a copyright you must be a legal person. So the AI itself cannot be the copyright holder. Thaler has explicitly stated that he himself is not the copyright holder. That means that in this case there is no copyright holder for this piece of art. No copyright holder means public domain, so this piece of art is in the public domain.”

              The common argument from people who aren’t just trying to make some kind of strange point about AI personhood, like Thaler apparently was, is that the AI is a tool that a human is using to make art, like a paintbrush, and so the human who used the AI is the copyright holder. As far as I’m aware this argument is far less settled because it actually requires some thought, as opposed to Thaler’s, which was pretty straightforward to come to a conclusion of “this is silly” about.

        • Xartle@lemmy.ml · +2 · 1 year ago

          It will be really interesting to see how the case law develops. Personally, I am more interested in things on the IP side. A lot of lawyers I work with currently view LLMs like a shredder in front of a leaf blower. Which, it kind of is.

      • Balios@kbin.social · +6 · 1 year ago

        They wouldn’t have the copyright on purchased stock art either. The complete piece, however, including the Pip-Boy, is not AI generated. Someone put this together, put effort into it, which easily qualifies it for copyright protection, even if the background is AI generated instead of purchased stock art.

      • AEsheron@lemmy.world · +2 · 1 year ago

        If you’re talking about that recent legal case, look again. The artist made the claim that the AI was the sole author, but that he should own the IP. I think the vast majority of people would claim that, in its current state, the AI is a digital tool an author uses to make art. The recent ruling just reconfirmed that (a) machines aren’t people, and (b) you can’t just own another author’s work.