An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

  • ExclamatoryProdundity@lemmy.world
    ↑256 ↓59 · 1 year ago

    Look, I hate racism and the inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it’s still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing. It’s going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random “Asian MIT student”. This kind of shit sets us back and makes us look reactionary.

    • AbouBenAdhem@lemmy.world
      ↑164 ↓15 · edited · 1 year ago

      It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.

      Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most “professional” candidate for the job and it picked white-looking people.

      This is a real potential issue, not just “clickbait”.

      • HumbertTetere@feddit.de
        ↑35 ↓2 · 1 year ago

        If companies pick the most “professional”-looking applicant by their photo, that is a reason for concern, but it has little to do with the image training data of the AI.

        • AbouBenAdhem@lemmy.world
          ↑21 ↓1 · edited · 1 year ago

          Some people (especially in business) seem to think that adding AI to a workflow will make obviously bad ideas somehow magically work. Dispelling that notion is why articles like this are important.

          (Actually, I suspect they know they’re still bad ideas, but delegating the decisions to an AI lets the humans involved avoid personal blame.)

          • Square Singer@feddit.de
            ↑6 · 1 year ago

            It’s a massive issue that many people (especially in business) have this “the AI has spoken”-bias.

            Similar to how they implement whatever the consultant says, no matter whether it actually makes sense, they just blindly follow what the AI says.

          • Water1053@lemmy.world
            ↑5 · 1 year ago

            Businesses will continue to use bandages rather than fix their root issue. This will always be the case.

            I work in factory automation and almost every camera/vision system we’ve installed has been a bandage of some sort because they think it will magically fix their production issues.

            We’ve had a sales rep ask if our cameras use AI, too. 😵‍💫

      • JeffCraig@citizensgaming.com
        ↑9 ↓1 · edited · 1 year ago

        Again, that’s not really the case.

        I have Asian friends who have used these tools and generated headshots that were fine. Just because this one Asian woman used a model that wasn’t trained for her demographic doesn’t make it a reflection of anything other than the fact that she doesn’t understand how these ML models work.

        The worst thing that happened when my friends used it were results with too many fingers or multiple sets of teeth 🤣

      • drz@lemmy.ca
        ↑4 ↓2 · 1 year ago

        No company would use ML to classify who’s the most professional looking candidate.

        1. Anyone with any ML experience at all knows how ridiculous this concept is. Who’s going to go out there and create a dataset matching “professional-looking” scores to headshots?
        2. The amount of bad press and ridicule this would attract isn’t worth it to any company.
        • kbotc@lemmy.world
          ↑7 · 1 year ago

          Companies already use resume scanners that have been found to be biased against Black-sounding names. They’re designed to create a feedback loop from successful candidates, and guess what shit the ML learned real quick?
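
          A minimal sketch of that feedback loop, purely hypothetical (the data, the name-derived feature, and every number below are made up for illustration): train a screener on past decisions that were themselves biased, and it learns to penalize the proxy feature even when qualifications are identical.

          # Hypothetical illustration: a screener trained on biased historical
          # decisions learns the bias, even though qualifications are identical.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 10_000
          years_experience = rng.uniform(0, 10, n)      # genuine qualification
          black_sounding_name = rng.integers(0, 2, n)   # proxy feature (synthetic)

          # Historical "hired" labels: driven by experience, but past reviewers
          # also penalized one group -- this is the bias being fed back in.
          past_hired = (years_experience + rng.normal(0, 1, n)
                        - 1.5 * black_sounding_name) > 5

          X = np.column_stack([years_experience, black_sounding_name])
          model = LogisticRegression().fit(X, past_hired)

          # Two resumes with identical experience, different name feature:
          print(model.predict_proba([[6.0, 0], [6.0, 1]])[:, 1])
          # The second candidate scores noticeably lower -- the loop "learned" the bias.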

    • hardypart@feddit.de
      ↑23 ↓3 · 1 year ago

      It still perfectly and visibly demonstrates the big point of criticism of AI: the tendencies that the training material exhibits.

    • Buddahriffic@lemmy.world
      ↑9 · 1 year ago

      The AI might associate lighter skin with white person facial structure. That kind of correlation would need to be specifically accounted for I’d think, because even with some examples of lighter skinned Asians, the majority of photos of people with light skin will have white person facial structure.

      Plus, it’s becoming more and more apparent that AIs just aren’t that good at what they do in general at this point. Yes, they can produce some pretty interesting things, but those seem to be the exception rather than the norm. In hindsight, a lot of my being impressed with the results I’ve seen so far is that some kind of algorithm produced them at all, when the algorithm itself isn’t directly related to the output but is a few steps removed from it.

      I bet for the instances where it does produce good results, it’s still actually doing something simpler than what it looks like it’s doing.

    • Thorny_Thicket@sopuli.xyz
      ↑2 · 1 year ago

      Almost like we’re looking for things to get mad about.

      Also what are these 50 people downvoting you for? Too much nuance I suppose.

    • notacat@lemmynsfw.com
      ↑17 ↓25 · 1 year ago

      You said yourself you hate inherent bias, yet you attempt to justify the result by saying that if it’s used again, it’s just going to produce another white face.

      that’s the problem

      It’s a racial bias baked into these AIs based on their training models.

      • thepineapplejumped@lemm.ee
        ↑23 ↓3 · 1 year ago

        I doubt it is conscious racial bias; it’s most likely that the training data is made up of mostly white people and labeled poorly.

        • notacat@lemmynsfw.com
          ↑8 ↓4 · 1 year ago

          I also wouldn’t say it was conscious bias either. I don’t think it’s intentionally developed in that way.

          The fact still remains, though: whether conscious or unconscious, it’s potentially harmful to people of other races. Sure, it’s only an issue with image generation now. What about when it’s used to identify criminals? When it’s used to filter between potential job candidates?

          The possibilities are virtually endless, but if we don’t start pointing out and addressing any type of bias, it’s only going to get worse.

          • wmassingham@lemmy.world
            ↑16 · 1 year ago

            What about when it’s used to identify criminals? When it’s used to filter between potential job candidates?

            Simple. It should not fucking be used for those things.

          • Altima NEO@lemmy.zip
            ↑5 ↓1 · 1 year ago

            I feel like you’re overestimating the capabilities of current ai image generation. And also presenting problems that don’t exist.

      • Blaidd@lemmy.world
        ↑14 · 1 year ago

        They aren’t justifying anything, they literally said it was about the training data.

  • gorogorochan@lemmy.world
    ↑102 ↓3 · 1 year ago

    Meanwhile every trained model on Civit.ai produces 12/10 Asian women…

    Joking aside, what you feed the model is what you get; that’s what training means. You train it on white people, it’s going to create white people; train it on big titty anime girls and it’s not going to produce WWII images either.

    Then there’s a study cited that claims DALL-E has a bias toward producing images of CEOs or directors as cis white males. Think of the CEOs that you know. Better yet, google them. It’s shit, but it’s the world we live in. I think the focus should be on not having so many privileged white people in the real world, not on telling the AI to discard the data.

    • locuester@lemmy.zip
      ↑8 · 1 year ago

      Yeah there are a lot of cases of claims being made of AI “bias” which is in fact just a reflection of the real world (from which it was trained). Forcing AI to fake equal representation is not fixing a damn thing in the real world.

    • Altima NEO@lemmy.zip
      ↑3 ↓1 · edited · 1 year ago

      I recall there being a study of the typical CEO: 6+ feet tall, white male.

      But yeah, the output she was getting really depends heavily on the data that whatever model she used was trained on. For someone who is a computer science major, I’m surprised she simply cried “racial bias” rather than investigating the why, and how to get the desired results. Like cranking down the denoising strength.

      To me it just seems like she tried messing around with those easy to use, baity websites without really understanding the technology.

    • UmbrellAssassin@lemmy.world
      ↑2 ↓1 · 1 year ago

      Cool let’s just focus on skin color. If you’re white you shouldn’t be in power cause my racism is better than your racism. How about we judge people by their quality of work instead of skin color. I thought that was the whole point.

      • gorogorochan@lemmy.world
        ↑1 · 1 year ago

        Also sure, let’s judge male white CEOs on merit. Let’s start with Elon Musk…

        Also, I can’t understand why there are people here assuming that the only way to “focus on having fewer white male CEOs” == eliminating them. This shit is done organically: eliminating the wage gap, providing equal opportunities in education, etc.

      • gorogorochan@lemmy.world
        ↑2 ↓1 · edited · 1 year ago

        How you got from what I wrote to “tearing down” anyone is a bit puzzling. It’s simply about striving to change the status quo, not the AI model that represents it. I’m not advocating guillotining Bezos or Musk, hope that’s clear.

        • rebelsimile@sh.itjust.works
          ↑13 ↓1 · 1 year ago

          The “pre-training” is learning. Models are often then fine-tuned with additional training (that’s the training that isn’t the “pre-training”), i.e. more learning, to achieve specific results.

      • postmateDumbass@lemmy.world
        ↑2 · 1 year ago

        Humans will identify stereotypes in AI-generated materials that match the dataset.

        Assume the dataset will grow and eventually mimic reality.

        How will the law handle discrimination based on data-supported stereotypes?

        • Pipoca@lemmy.world
          ↑3 · 1 year ago

          Assume the dataset will grow and eventually mimic reality.

          How would that happen, exactly?

          Stereotypes themselves and historical bias can bias data. And AI trained on biased data will just learn those biases.

          For example, in surveys, white people and black people self-report similar levels of drug use. However, for a number of reasons, poor black drug users are caught at a much higher rate than rich white drug users. If you train a model on arrest data, it’ll learn that rich white people don’t use drugs much but poor black people do tons of drugs. But that simply isn’t true.
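
          A quick worked version of that example (the usage rate and the enforcement gap below are made-up numbers, only to show the arithmetic):

          # Hypothetical numbers: both groups use drugs at the same rate,
          # but one group is policed and arrested far more often.
          true_usage_rate = {"group_A": 0.10, "group_B": 0.10}
          arrest_prob_if_user = {"group_A": 0.02, "group_B": 0.10}  # 5x enforcement gap

          # What an "arrest dataset" would show as the apparent usage rate:
          apparent = {g: true_usage_rate[g] * arrest_prob_if_user[g]
                      for g in true_usage_rate}
          print(apparent)  # {'group_A': 0.002, 'group_B': 0.01}
          # A model trained on arrests "learns" a 5x difference that isn't real.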

          • postmateDumbass@lemmy.world
            ↑1 · 1 year ago

            The datasets will get better because people have started to care.

            Historically, much of the data used was whatever was easy and cheap to acquire: surveys of classmates, arrest reports, publicly available government-curated data.

            Good data costs money and time to create.

            The more people fact-check, the more flaws can be found and corrected. The more attention a dataset gets, the more funding is likely to come in to resurvey or whatever.

            It’s part of the peer review thing.

            • Pipoca@lemmy.world
              ↑1 · 1 year ago

              It’s not necessarily a matter of fact checking, but of correcting for systemic biases in the data. That’s often not the easiest thing to do. Systems run by humans often have outcomes that reflect the biases of the people involved.

              The power of suggestion runs fairly deep with people. You can change a hiring manager’s opinion of a resume by only changing the name at the top of it. You can change the terms a college kid enrolled in a winemaking program uses to describe a white wine using a bit of red food coloring. Blind auditions for orchestras result in significantly more women being picked than unblinded auditions.

              Correcting for biases is difficult, and it’s especially difficult on very large data sets like the ones you’d use to train chatgpt. I’m really not very hopeful that chatgpt will ever reflect only justified biases, rather than the biases of the broader culture.

      • Altima NEO@lemmy.zip
        ↑1 ↓2 · 1 year ago

        That’s just stupid and shows a lack of understanding of how this all works.

  • GenderNeutralBro@lemmy.sdf.org
    ↑71 ↓2 · 1 year ago

    This is not surprising if you follow the tech, but I think the signal boost from articles like this is important because there are constantly new people just learning about how AI works, and it’s very very important to understand the bias embedded into them.

    It’s also worth actually learning how to use them, too. People expect them to be magic, it seems. They are not magic.

    If you’re going to try something like this, you should describe yourself as clearly as possible. Describe your eye color, hair color/length/style, age, expression, angle, and obviously race. Basically, describe any feature you want it to retain.

    I have not used the specific program mentioned in the article, but the ones I have used simply do not work the way she’s trying to use them. The phrase she used, “the girl from the original photo”, would have no meaning in Stable Diffusion, for example (which I’d bet Playground AI is based on, though they don’t specify). The img2img function makes a new image, with the original as a starting point. It does NOT analyze the content of the original or attempt to retain any features not included in the prompt. There’s no connection between the prompt and the input image, so “the girl from the original photo” is garbage input. Garbage in, garbage out.
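
    For what it’s worth, here is roughly what that img2img flow looks like with the open-source diffusers library and Stable Diffusion 1.5. Playground AI doesn’t document its backend, so treat this as an assumed, minimal sketch of how such a tool works rather than its actual code: the text prompt and a denoising strength are the only levers, and nothing ever “reads” who is in the source photo.

    # Sketch of a Stable Diffusion img2img call (diffusers library).
    # The model never analyzes the input photo's content; it just denoises
    # noised-up pixels toward whatever the text prompt describes.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("headshot.jpg").convert("RGB").resize((512, 512))

    # "The girl from the original photo" carries no meaning here; only the visual
    # concepts in the prompt do. Explicitly describing the subject (ethnicity,
    # hair, age, ...) is what actually preserves those features.
    result = pipe(
        prompt="professional LinkedIn headshot of a young Asian woman, "
               "dark hair, business attire, studio lighting",
        image=init,
        strength=0.5,       # lower = stay closer to the original pixels
        guidance_scale=7.5,
    ).images[0]
    result.save("linkedin.png")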

    There are special-purpose programs designed for exactly the task of making photos look professional, which presumably go to the trouble to analyze the original, guess these things, and pass those through to the generator to retain the features. (I haven’t tried them, personally, so perhaps I’m giving them too much credit…)

    • CoderKat@lemm.ee
      ↑23 · 1 year ago

      If it’s stable diffusion img2img, then totally, this is a misunderstanding of how that works. It usually only looks at things like the borders or depth. The text based prompt that the user provides is otherwise everything.

      That said, these kinds of AI are absolutely still biased. If you tell the AI to generate a photo of a professor, it will likely generate an old white dude 90% of the time. The models are very biased by their training data, which often reflects society’s biases (though really more a subset of society that created whatever training data the model used).

      Some AI actually does try to counter bias a bit by injecting details to your prompt if you don’t mention them. Eg, if you just say “photo of a professor”, it might randomly change your prompt to “photo of a female professor” or “photo of a black professor”, which I think is a great way to tackle this bias. I’m not sure how widespread this approach is or how effective this prompt manipulation is.
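
      A toy sketch of that prompt-injection idea. This is a guess at the general approach, not any vendor’s actual code, and the trigger words and descriptor lists are invented for illustration:

      # Toy bias-countering prompt injection: if the user didn't specify
      # demographics, randomly add some before handing the prompt to the model.
      import random

      GENDERS = ["female", "male", "nonbinary"]
      ETHNICITIES = ["Black", "East Asian", "South Asian", "Hispanic",
                     "Middle Eastern", "white"]
      PERSON_WORDS = ("person", "professor", "doctor", "nurse", "ceo")

      def inject_diversity(prompt: str) -> str:
          mentions_person = any(w in prompt.lower() for w in PERSON_WORDS)
          already_specified = any(t.lower() in prompt.lower()
                                  for t in GENDERS + ETHNICITIES)
          if mentions_person and not already_specified:
              # e.g. "photo of a professor" -> "photo of a Hispanic female professor"
              descriptor = f"{random.choice(ETHNICITIES)} {random.choice(GENDERS)} "
              return prompt.replace("a ", "a " + descriptor, 1)
          return prompt

      print(inject_diversity("photo of a professor"))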

    • Blackmist@feddit.uk
      ↑3 · 1 year ago

      I’ve taken a look at the website for the one she used and it looks like a cheap crap toy. It’s free, which is the first clue that it’s not going to be great.

      Not a million miles from the old “photo improvement” things that just run a bunch of simple filters and make over-processed HDR crap.

  • BURN@lemmy.world
    ↑59 ↓1 · 1 year ago

    Garbage in = Garbage out

    ML training datasets are only as good as their data, and almost all data is inherently flawed. Biases are just more pronounced in these models because the bias scales with the size of the model, becoming more and more noticeable.

  • notapantsday@feddit.de
    ↑46 · 1 year ago

    Can we talk about how a lot of these AI-generated faces have goat pupils? That’s some major bias that is often swept under the rug. An AI that thinks only goats can be professionals could cause huge disadvantages for human applicants.

  • sirswizzlestix@lemmy.world
    ↑40 ↓1 · 1 year ago

    These biases have always existed in the training data used for ML models (society and all that influences the data we collect, and the inherent biases latent within it), but it’s definitely interesting that generative models now make these biases much, much more visible (figuratively, and literally with image models) to the lay person.

    • SinningStromgald@lemmy.world
      ↑7 ↓2 · 1 year ago

      But they know the AIs have these biases, at least now; shouldn’t they be able to code them out or lessen them? Or would that just create more problems?

      Sorry, I’m no programmer, so I have no idea if that’s even possible or not. It just sounds possible in my head.

      • Dojan@lemmy.world
        ↑24 · edited · 1 year ago

        You don’t really program them, they learn from the data provided. If say you want a model that generates faces, and you provide it with say, 500 faces, 470 of which are of black women, when you ask it to generate a face, it’ll most likely generate a face of a black woman.

        The models are essentially maps of probability, you give it a prompt, and ask it what the most likely output is given said prompt.

        If she had used a model trained to generate pornography, it would’ve likely given her something more pornographic, if not outright explicit.
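
        That 470-out-of-500 example as a tiny numeric sketch (the counts are hypothetical, just to show why sampled outputs mirror the training mix):

        # Hypothetical: sample "generated faces" in proportion to the training mix.
        import random

        training_counts = {"black woman": 470, "other": 30}   # 500 training faces
        categories, weights = zip(*training_counts.items())

        generated = random.choices(categories, weights=weights, k=1000)
        print(generated.count("black woman") / 1000)   # ~0.94, mirroring the data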


        You’ve also kind of touched on a point of problem with large language models; they’re not programmed, but rather prompted.

        When it comes to Bing Chat, Chat GPT and others, they have additional AI agents sitting alongside them to help filter/mark out problematic content both provided by the user, as well as possible problematic content the LLM itself generates. Like this prompt, the model marked my content as problematic and the bot gives me a canned response, “Hi, I’m bing. Sorry, can’t help you with this. Have a nice day. :)”

        These filters are very crude, but are necessary because of problems inherent in the source data the model was trained on. See, if you crawl the internet for data to train it on, you’re bound to bump into all sorts of good information; Wikipedia articles, Q&A forums, recipe blogs, personal blogs, fanfiction sites, etc. Enough of this data will give you a well rounded model capable of generating believable content across a wide range of topics. However, you can’t feasibly filter the entire internet, among all of this you’ll find hate speech, you’ll find blogs run by neo nazis and conspiracy theorists, you’ll find blogs where people talk about their depression, suicide notes, misogyny, racism, and all sorts of depressing, disgusting, evil, and dark aspects of humanity.

        Thus there’s no code you can change to fix racism.

        if (bot.response == racist) 
        {
            dont();
        }
        

        But rather simple measures that read the user/agent interaction, filtering it for possible bad words, or likely using another AI model to gauge the probability of an interaction being negative,

        if (interaction.weightedResult < negative)
        {
            return "I'm sorry, but I can't help you with this at the moment. I'm still learning though. Try asking me something else instead! 😊";
        }
        

        As an aside, if she’d prompted “professional Asian woman” it likely would’ve done a better job. Depending on how much “creative license” she gives the model, though, it still won’t give her her own face back. I get the idea of what she’s trying to do, and there are certainly ways of achieving it, but she likely wasn’t using a product/model weighted to do specifically the thing she was asking for.


        Edit

        Just as a test, because I myself got curious; I had Stable Diffusion generate 20 images given the prompt

        professional person dressed in business attire, smiling

        20 sampling steps, using DPM++ 2M SDE Karras, and the v1-5-pruned-emaonly Stable Diffusion model.

        Here’s the result

        I changed the prompt to

        professional person dressed in business attire, smiling, [diverse, diversity]

        And here is the result

        The models can generate non-white men, but it is in a way just a reflection of our society. White men are the default. Likewise if you prompt it for “loving couple” there’ll be so many images of straight couples. But don’t just take my word for it, here’s an example.
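
        For anyone who wants to reproduce a test like that, here is roughly what it looks like in code with the diffusers library. I’m assuming the same SD 1.5 weights; the scheduler options below are diffusers’ equivalent of “DPM++ 2M SDE Karras”, and the prompt is the one quoted above.

        # Sketch: batch-generate images from a text prompt with SD 1.5,
        # using a DPM++ 2M SDE scheduler with Karras sigmas.
        import torch
        from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        pipe.scheduler = DPMSolverMultistepScheduler.from_config(
            pipe.scheduler.config,
            algorithm_type="sde-dpmsolver++",  # the "SDE" variant of DPM++ 2M
            use_karras_sigmas=True,            # Karras noise schedule
        )

        prompt = "professional person dressed in business attire, smiling"
        images = pipe(prompt, num_inference_steps=20,
                      num_images_per_prompt=20).images
        for i, img in enumerate(images):
            img.save(f"out_{i:02}.png")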

          • Dojan@lemmy.world
            ↑3 · 1 year ago

            It can do faces quite well on second passes but struggles hard with hands.

            Corporate photography tends to be uncanny and creepy to begin with, so using an AI to generate it made it even more so.

            I totally didn’t just spend 30 minutes generating corporate stock photos and laughing at the creepy results. 😅

        • Buttons@programming.dev
          ↑2 · edited · 1 year ago

          Indeed, there seems to be some confusion about the wording too. She wrote instructions as if she were instructing a state-of-the-art LLM: “please alter this photo to make it look professional”. But this AI can’t understand sentence structure or instructions; it just looks for labels that match pictures. So the AI sees “photo, professional”, and it sees her starting photo, and it alters the starting photo to produce something that resembles “photo, professional”. It doesn’t know what the other words mean.

        • rebelsimile@sh.itjust.works
          ↑2 · 1 year ago

          This is well-stated. As an addendum, if you were to use something like the RevAnimated model (a fairly popular one), it would bias toward digital images rather than photos for essentially the same reason. If you used one of the Anything models, it would generate anime-like images for the same reason. It is pushing toward (one of) the default “understandings” it has of whatever the prompt is. The main issue in the article is that the student had (a) unrealistic expectations about what the AI would do: without being trained on her specifically, it wouldn’t know which of her features to hold on to. It’s not a magic box where you can put in an unknown image and simply manipulate it, at least not in the way one might expect from Photoshop. And (b) a useless prompt, reflecting a more magical understanding of what the AI would do. It gave her exactly what she asked for, which was (the model’s interpretation of) a person’s LinkedIn photo using the base image as its target.

          That said, there’s an exceptionally valid question being asked around the edges here, and implied by your post: what does the model understand the “default” to be? Obviously that aspect isn’t demographically balanced in any way, which is the main problem. That problem, however, isn’t really demonstrated by this example in the way people are thinking. I’d frame the issue as: if she’d asked for a professional LinkedIn photo of someone of her nationality and age, she would definitely have gotten a photo of a woman of approximately that age and ethnicity in whatever it thinks a professional LinkedIn photo looks like, but it probably still wouldn’t have really looked like her (because it doesn’t know who, in terms of weights, she is). The problem is that, without prompting, it defaults to its default weights, which assume a generic female is probably white. But a solution would either mean choosing a totally random nationality every time (which is not what she wanted) or some other specific nationality every time (as some of the more Asian-biased models do), given her poor prompting.

          • Dojan@lemmy.world
            ↑4 · 1 year ago

            You put it really succinctly.

            These models are trained on images generally depicting a lot of concepts. If I do a simple prompt like monkey I’m probably going to get a lot of images of monkeys in a natural setting, because when trained it’s given images essentially stating that this is an image of “monkey, tree, fruit, foliage, green” and so forth. Over time this pattern repeats, and it will start associating one concept with another. Because a lot of pictures that depict monkey also depict foliage, nature and what have you, it makes sense for it to fill in the rest of monkey with foliage, green, nature and less volcano, playful dog, ice cream since those concepts haven’t been very prominent when presented with monkey.

            That is essentially the problem.

            Here is the result for “monkey”

            And here is monkey, volcano, playful dog, ice cream

            The datasets are permeated with these kinds of examples. If you prompt for nurse you’ll get a lot of women in blue/hospitaly clothing, inside hospitals or non-descript backgrounds, and very few men.

            Here’s photo of a nurse

            The more verbose and specific you get though, the likelier it is that you’ll get the outcome you want. Here for example is male (nurse:1) (wearing white scrubs:1) with pink hair skydiving

            This was so outlandish that without the (wearing white scrubs:1) it just put the skydiver in a random pink outfit, even with the added weight on wearing white scrubs it has a tendency to put the subject in something pink. Without the extra weight on (nurse:1) it gave me generic pink (mostly) white men.

            If we were to fix the biases present in our society, we’d possibly see less biases in the datasets, and subsequently less bias in the final models. The issue I believe, isn’t so much a technological one, as it is a societal one. The machines are only racist because we are teaching them to be, by example.

            • rebelsimile@sh.itjust.works
              ↑9 · 1 year ago

              More to the point, there are so many parameters that can be tweaked; throwing your image into a “generator” without knowing what controls you have, what the prompt is doing, what model it’s using, etc. is like saying “the internet” is toxic because you saw a webpage that had a bad word on it somewhere.

              I put her actual photo into SD 1.5 (the same model you used) with 30 step Euler, 5.6 cfg and 0.7 noise and got these back. I’d say 3/4 of them are Asian (and the model had a 70% chance to influence that away if it were truly “biased” in the way the article implies), obviously none of them look like the original lady because that’s not how it works. You could generate a literally infinite number of similar-looking women who won’t look like this lady with this technique.

              The issue isn’t so much that the models are biased — that is both tautologically obvious and as mentioned previously, probably preferred (rather than just randomly choosing anything not specified at all times — for instance, your monkey prompt didn’t ask for forest, so should it have generated a pool hall or plateau just to fill something in? The amount of specificity anyone would need would be way too high; people might be generated without two eyes because it wasn’t asked for, for instance); it is that the models don’t reflect the biases of all users. It’s not so much that it made bad choices but that it made choices that the user wouldn’t have made. When the user thinks “person”, she thinks “Asian person” because this user lives in Asia and that’s what her biases toward “person” start with, so seeing a model biased toward people from the Indian subcontinent doesn’t meet her biases. On top of that, there’s a general potential impossibility of having some sort of generic “all people” model given that all people are likely to interpret its biases differently.

              • Dojan@lemmy.world
                ↑2 · 1 year ago

                With a much lower denoising value I was able to get basically her but airbrushed. It does need a higher denoising value in order to achieve any sort of “creativity” with the image though, so at least with the tools and “skill” I have with said tools, there’s a fair bit of manual editing needed in order to get a “professional linkedin” photo.

                • rebelsimile@sh.itjust.works
                  ↑4 · 1 year ago

                  Yeah if she wants the image to be transformed lower denoising won’t really do it.

                  Honestly, I know what she did, because I had the same expectation out of the system. She threw in an image and was expecting to receive an infinite number of variations of specifically her but in the style of a “LinkedIn profile photo”, as though by providing the single image, it would map her face to a generic 3d face and then apply that in a variety of different poses, lighting situations and clothing. Rather, what it does is learn the relations between elements in the photo combined with a healthy amount of static noise and then work its way toward something described in the prompt. With enough static noise and enough bias in the model, it might interpret lots of fuzzy stuff around her eyes as “eyes”, but specifically Caucasian eyes since it wasn’t specified in the prompt and it just sees noise around eyes. It’s similarly easy to get a model like Chillout (as someone mentioned in another thread) to bias toward Asian women.

                  (This is the same picture, just a model change, same parameters. Prompt is: “professional quality linkedin profile photo of a young professional”)

                  After looking at a number of different photos it’s also easy to start to see where the model is overfit toward a specific look, which is a problem in a technical sense (in that it’s just bad at generating a variety of people, which is probably the intention of the model) and in an ethical sense (in that it’s also subjectively biased).

      • CharlestonChewbacca@lemmy.world
        ↑12 · 1 year ago

        That’s not how it works. You don’t just “program out the biases”; you have to retrain the model with more inclusive training data.

      • HobbitFoot @thelemmy.club
        ↑10 · 1 year ago

        If you can code it, it isn’t really AI.

        AI is able to make the connections when given the data by itself. The problem is that the data required is usually enormous, so the quantity of data is more valued than the quality.

        • Mininux@sh.itjust.works
          ↑6 · 1 year ago

          If you can code it, it isn’t really AI

          thx, I’m gonna use that sentence so much now. I’m so tired of hearing people calling themselves AI experts when they merely do regular programming. Like, yes, your program is pretty cool, but no, it’s not AI.

          “but… but it reacts to its environment, it’s intelligent” NO BECAUSE THEN LITERALLY EVERYTHING WOULD BE “INTELLIGENT” and the word AI would be useless, as we have been doing that for decades with if/else

          sory im angery

      • Tgs91@lemmy.world
        ↑4 · 1 year ago

        Shouldn’t they be able to code them out?

        You can’t “code them out” because AI isn’t using a simple script like traditional software. They are giant nested statistical models that learn from data. It learns to read the data it was trained on. It learns to understand images that it was trained on, and how they relate to text. You can’t tell it “in this situation, don’t consider race” because the situation itself is not coded anywhere. It’s just learned behaviors from the training data.

        Shouldn’t they be able to lessen them?

        For this one the answer is YES. And they DO lessen them as much as they can. But they’re training on data scraped from many sources. You can try to curate the data to remove racism/sexism, but there’s no easy way to remove bias from data that is so open ended. There is no way to do this in an automated way besides using an AI model, and for that, you need to already have a model that understands race/gender/etc bias, which doesn’t really exist. You can have humans go through the data to try to remove bias, but that introduces a ton of problems as well. Many humans would disagree on what is biased. And human labelers also have a shockingly high error rate. People are flat out bad at repetitive tasks.

        And even that only covers data that actively contains bigotry. In most of these generative AI cases, the real issue is just a lack of data or imbalanced data from the internet. For this specific article, the user asked to make a photo look professional. Training data where photos were clearly in a professional setting probably came from sites like LinkedIn, which had a disproportionate number of white users. These models also have a better understanding of English than other languages because there is so much more training data available in English. So Asian professional sites may exist in the training data, but the model didn’t understand the language as well, so it’s not as confident about professional images of Asians.

        So you can address this by curating the training data. But this is just ONE of THOUSANDS and THOUSANDS of biases, and it’s not possible to control all of them in the data. Often if you try to correct one bias, it accidentally causes the model to perform even worse on other biases.

        They do their best. But ultimately these are statistical models that reflect the existing data on the internet. As long as the internet contains bias, so will AI

      • Dale@lemmy.world
        ↑4 ↓2 · 1 year ago

        It’s possible sure. In order to train these image AIs you essentially feed them a massive amount of pictures as “training data.” These biases happen because more often than not the training data used is mostly pictures of white people. This might be due to racial bias of the creators, or a more CRT explanation where they only had the rights to pictures of mostly white people. Either way, the fix is to train the AI on more diverse faces.

      • ∟⊔⊤∦∣≶@lemmy.nz
        ↑1 · 1 year ago

        There are LoRAs available (hundreds, maybe thousands) to tweak the base model so you can generate exactly what you want. So, problem solved for quite a while now.
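
        In diffusers terms, applying a LoRA on top of a base model looks roughly like this; note that "some-user/some-lora" is a placeholder, not a real checkpoint:

        # Sketch: load a LoRA on top of a Stable Diffusion base model.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # A LoRA patches a small set of weights, steering the base model's
        # defaults (style, subject, demographics) without retraining it.
        pipe.load_lora_weights("some-user/some-lora")

        image = pipe("professional headshot of a young Asian woman, business attire",
                     num_inference_steps=25).images[0]
        image.save("headshot_lora.png")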

  • pacoboyd@lemm.ee
    ↑30 ↓3 · 1 year ago

    Also depends on what model was used, prompt, strength of prompt etc.

    No news here, just someone who doesn’t know how to use AI generation.

    • deadbolt@lemmygrad.ml
      ↑8 ↓5 · 1 year ago

      Yeah they forgot to say “don’t change my ethnicity” to the prompt. Normal shit, right?

      • biddy@feddit.nl
        ↑4 · 1 year ago

        Yes. Or even better, just add “asian” to the prompt. It’s just a tool and tools are flawed.

      • Womble@lemmy.world
        ↑3 · edited · 1 year ago

        “Don’t change my ethnicity” would do nothing, as these programs can not get descriptions from images, only create images from descriptions. It has no idea that the image contains a woman, never mind an Asian woman. All it does is use the image as a starting point to create a “professional photo”. There absolutely is training bias and the fact that everyone defaults to pretty white people in their 20-30s is a problem. But this is also using the tool badly and getting a bad result.

      • ryannathans@lemmy.fmhy.net
        ↑2 · edited · 1 year ago

        It would be the same if the user wanted to preserve or highlight any other feature: simply specify what the output needs to look like. Ask for nothing but “LinkedIn professional” and you get the average LinkedIn professional.

        It’s like being surprised the output looks Asian when asking for something that looks like a WeChat user.

  • RobotToaster@infosec.pub
    ↑19 · 1 year ago

    She asked the AI to make her photo more like what society stereotypes as professional, and it made her photo more like what society stereotypes as professional.

  • ghariksforge@lemmy.world
    ↑21 ↓3 · 1 year ago

    Why is anyone surprised at this? People are using AI for things it was never designed and optimized for.

    • reallynotnick@lemmy.world
      ↑5 · 1 year ago

      This was kind of my thought. It’s a rather complex task, and I’m not clear on what even a “good” outcome would look like, especially given that the first photo was already a pretty good photo. Should it just color-correct and sharpen it? Should it change the background? Should it reposition your head?

      I’m curious what it would do if you just fed it already good professional photos of white people, would it just spit back the same image?

      Like there has to be a cap on how much it will change so it still looks like you, in which case I assume you’d need to feed it multiple images to get a good result.

  • starcat@lemmy.world
    ↑16 ↓1 · 1 year ago

    A racial-bias-propagating, click-baity article.

    Did anyone bother to fact-check this? I ran her exact photo and prompt through Playground AI and it pumped out a bad photo of an Indian woman. Are we supposed to play the racial bias card against Indian women now?

    This entire article can be summarized as “Playground AI isn’t very good, but that’s boring news so let’s dress it up as something else”