The moment word got out that Reddit (and now Stack Overflow) were tightening their APIs so they could sell our conversations to AI was when the game was given away. And I’m sure there were moments or clues before that.
This was when the “you’re the product if it’s free” arrangement metastasised into “you’re a data-farming serf for a feudal digital overlord whether you pay or not”.
Google search transitioning from Good search engine for the internet -> Bad search engine serving SEO crap and ads -> Just use our AI and forget about the internet is more of the same. That their search engine is dominated by SEO and ads is part of it … the internet, i.e. other people’s content, isn’t valuable any more, not with any sovereignty or dignity, least of all the kind envisioned in the ideals of the internet.
The goal now is to be the new internet, where you can bet your ass that there will not be any Tim Berners-Lee open sourcing this. Instead, the internet that we all made is now a feudal landscape on which we all technically “live” and in which we all technically produce content, but which is now all owned, governed and consumed by big tech for their own profits.
I recall back around the start of YouTube, which IIRC was the first hype moment for the internet after the dotcom crash, there was talk about what structures would emerge on the internet … whether new structures would be created or whether older economic structures would impose themselves and colonise the space. I wasn’t thinking too hard at the time, but it seemed intuitive to me that older structures would at least try very hard to impose themselves.
But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.
Well that’s a happy note on which to end this day
(Well written though, thank you)
Especially coming from Google, who was one of the good guys pushing open standards and interoperability.
Power corrupts. Decentralize.
Eh, open-sourcing is just good business; the only reason every big tech company doesn’t is that loads of executives are stuck in the past. Of course having random people on the internet do labor for you for free is something Google would want. They get the advantage of tens of thousands of extra eyes on their code pointing out potential security vulnerabilities, and they can just put all the really shady shit in proprietary blobs like Google Play Services. They’re getting the best of both worlds as far as they’re concerned.
Large publicly-traded companies do not do anything for the good of anyone but themselves; they are literally Legally Obligated to make the most profitable decisions for themselves at all times. If they’re open-sourcing things it’s to make money, not because they were “good guys”.
We ruined the world by painting certain men or groups as bad. The centralization of power is the bad thing. That’s the whole purpose of all republics as I understand it. Something we used to know and have almost completely forgotten.
Well said! I’m still wondering what happens when the inevitable ouroboros of AI content referencing AI content referencing AI content makes the whole internet a self-perpetuating mess of unreadable content and makes anything of value these companies once gained basically useless.
Would that eventually result in fresh, actual human created content only coming from social media? I guess clauses about using your likeness will be popping up in TikTok at some point (if they aren’t already)
I dunno, my feeling is that even if the hype dies down we’re not going back. Like a real transition has happened just like when Facebook took off.
Humans will still be in the loop through their prompts and various other bits and pieces and platforms (Reddit is still huge) … while we may just adjust to the new standard in the same way that many reported an inability to do deep reading after becoming regular internet users.
I think it’ll end up like Facebook (the social media platform, not the company). Eventually you’ll hit model collapse for new models trained off uncurated internet data once a critical portion of all online posts are made by AI, and it’ll become Much more expensive to create quality, up-to-date datasets for new models. Older/less tech literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually-tailored AI propaganda and everyone else will move to newer, less enshittified platforms until the cycle repeats.
Maybe we’ll see an increase in discord/matrix style chatroom type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human. I also think most current fediverse platforms are also marginally more resistant to AI bots because individual servers can have an application process that verifies your humanity, and then defederate from instances that don’t do that.
Basically anything that can segment the Unceasing Firehose of traffic on the big social media platforms into smaller chunks that can be more effectively moderated, ideally by volunteers because a large tech company would probably just automate moderation and then you’re back at square 1.
Honestly, that sounds like the most realistic outcome. If the history of the internet is anything to go by, the bubble will reach critical mass and not so much pop as slowly deflate when something else begins to grow and take its place as the object of hype.
Great take.
Older/less tech literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually-tailored AI propaganda
Ooof … great way of putting it … “brain melting AI propaganda” … I can almost see a sci-fi short film premised on this image … with the main scene being when a normal-ish person tries to have a conversation with a brain-melted person and we slowly see from their behaviour and language just how melted they’ve become.
Maybe we’ll see an increase in discord/matrix style chatroom type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human.
Yep. This is a pretty vital project in the social media space right now that, IMO, isn’t getting enough attention, in part I suspect because a lot of the current movements in alternative social media are driven by millennials and X-gen nostalgic for the internet of 2014 without wanting to make something new. And so the idea of an AI-protected space doesn’t really register in their minds. The problems they’re solving are platform dominance, moderation and lock-in.
Worthwhile, but in all seriousness about 10 years too late and after the damage has been done (surely our society would be different if social media hadn’t gone down the path it did from 2010 onward). Now what’s likely at stake is the enshittification or en-slop-ification (slop = unwanted AI-generated garbage) of internet content and the obscuring of quality human-made content, especially from niche interests. Algorithms started this, which alt-social are combating, which is great.
But good community-building platforms with strong privacy or “enclosing” and AI/bot-protecting mechanisms are needed now. Unfortunately, all of these clones of big-social platforms (Lemmy included) are not optimised for community building and fostering. In fact, I’m not sure I see community hosting as a quality in any social media platform at the moment apart from Discord, which says a lot I think. Lemmy’s private and local-only communities (on the roadmap, apparently) are a start, but still only a modification of the Reddit model.
person tries to have a conversation with a brain-melted person and we slowly see from their behaviour and language just how melted they’ve become.
I see you have met my Fox News watching parents.
LOL (I haven’t actually met someone like that, in part because I’m not a USian and generally not subject to that sort of thing ATM … but I am morbidly curious TBH).
You’re absolutely right about not going back. Web 3.0 I guess. I want to be optimistic that a distinction between all the garbage and actual useful or real information will be visible to people, but like you said, general tech and media literacy isn’t encouraging, hey?
Slightly related, but I’ve actually noticed a government awareness campaign where I live about identifying digital scams. Be nice if that could be extended to incorrect or misleading AI content too.
It should end up self-regulating once AI is using AI material. That’s the downside of the companies not bothering to put very clear identification on AI-produced material. It’ll spiral into a hilarious mess.
I’m legit looking forward to when Google returns completely garbled and unreadable search results, because someone is running an automated ads campaign that sources another automated campaign and so on, with the only reason it rises to the top being that they put in the highest bid.
I doubt Google will do shit about it, but at least the memes will be good!
Hasn’t it already happened? All culture is derivative, yes all of it. And look at how much of it is awful, yet we navigate fine. I keep hearing stats like every one second YouTube gets 4 hours more content and yet I use YouTube daily. Despite being very very confident that all but a fraction of a percent of what it has is of any value to me.
Same for books, magazines, news, podcasts, radio programs, music, art, comics, recipes, articles…
We already live in the post information explosion. Where the same stuff gets churned over and over again. All I am seeing AI doing is speeding this up. Now instead of a million YouTube vids I won’t watch getting added next week it will be ten million.
TikTok was banned, so it ain’t coming from there. Can’t get universal healthcare, but we can make sure to protect kids from the latest dance craze.
That’s a technical issue that can likely be solved. I doubt some feedback loop of training data will be the downfall of AI… The way to stop it is to refuse to use it (let’s be real, the regulators aren’t gonna do shit).
But I never thought anything like this would happen. That the cloud, search/google, mega platforms and AI would swallow the whole thing up.
I didn’t think so either. The funny thing is, Blade Runner, The Matrix, and the whole cyberpunk genre was warning us…
Yea, but this feels quicker than anyone expected. It’s easy to forget, but AlphaGo beating the best in the world was shocking at the time and no one saw it coming. We hadn’t sorted out what to do with big monopoly corps yet; we weren’t ready for a whole new technology.
Nice to hear I’m not the only one who thought the same exact thing.
Oh yea, it’s basically a vibe now for those who see it, which I was mostly channeling.
"AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
So delusional.
Do they think that their AI will actually dig the cobalt from the mines, or will the AI simply be the one who sends the children in there to do the digging?
It will design the machines to build the autonomous robots that mine the cobalt… doing the jobs of several companies at one time and either freeing up several people to pursue leisure or the arts or starve to death from being abandoned by society.
Have you seen the real fucking world?
It’s gonna make the rich richer and the poor poorer. At least until the gilded age passes.
I agree and I gave that option as the last one in the list.
AI absolutely will not design machines.
It may be used within strict parameters to improve the speed of theoretically testing types of bearing or hinge or alloys or something to predict which ones would perform best under stress testing - prior to actual testing, to eliminate low-hanging fruit - but it will absolutely not generate a new idea for a machine, because it can’t generate new ideas.
The Model T will absolutely not replace horse-drawn carts – maybe some small group of people or a family for a vacation, but we’ve been using carts to do war logistics for 1000s of years. You think some shaped metal put together is going to replace 1000s of men and horses? lol yeah right
apples and oranges.
You’re comparing two products with the same value prop: transporting people and goods more effectively than carrying/walking.
In terms of mining, a drilling machine is more effective than a pickaxe. But we’re comparing current drilling machines to potential drilling machines, so the actual comparison would be:
- is an AI-designed drilling machine likely to be more productive (for any given definition of productivity) than a human-designed one?
Well, we know from experience that when (loosely defined) “AI” is used in, e.g., pharma research, it reaps some benefits - but it does not replace the drug approval process wholesale, and it’s still a tool used by - as I originally said - human beings who impose strict parameters on both input and output as part of a larger product and method.
Back to your example: could a series of algorithmic steps - without any human intervention - provide a better car than any modern car designers? As it stands, no, nor is it on the horizon. Can it be used to spin through 4 million slight variations in hood ornaments and return the top 250 in terms of wind resistance? Maybe, and only if a human operator sets up the experiment correctly.
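The “spin through millions of slight variations and return the top 250” step can be sketched in a few lines. Everything here is an invented placeholder: the parameter ranges and the `drag_score` function are hypothetical stand-ins, not a real aerodynamics model, so this only illustrates the sweep-and-rank shape of the experiment a human operator would have to set up.

```python
import heapq
import itertools

def drag_score(width, height, curvature):
    """Hypothetical stand-in for the evaluation step; a real experiment
    would call a CFD simulation or wind-tunnel model here."""
    return width * height / (1.0 + curvature)  # lower is better

# Enumerate slight variations of the (invented) design parameters.
variations = itertools.product(
    [w / 100 for w in range(80, 121)],  # width factors 0.80..1.20
    [h / 100 for h in range(80, 121)],  # height factors 0.80..1.20
    [c / 10 for c in range(0, 11)],     # curvature 0.0..1.0
)

# Keep only the 250 candidates with the lowest simulated drag,
# without materialising the whole sweep in memory.
top_250 = heapq.nsmallest(250, variations, key=lambda p: drag_score(*p))

print(len(top_250))  # 250
```

The point of the sketch is that the machine only ranks what a human told it to enumerate and how a human told it to score; changing either list is the part that requires the operator to set up the experiment correctly.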
No, the thing I’m comparing is our inability to discern where a new technology will lead and our history of smirking at things like books, cars, the internet and email, AI, etc.
The first steam engines pulling coal out of the ground were so inefficient they wouldn’t make sense for any use case other than getting the fuel that powers them. You could definitely smirk and laugh about engines vs 10k men and be totally right in that moment, and people were.
The more history you learn, though, the more you realize this is not only a hubristic thing, it’s also futile: how we feel about the proliferation of a technology has never had an impact on that technology’s proliferation.
And, to be clear, I’m not saying no humans will work or have anything to do – I’m saying significantly MORE humans will have nothing to do. Sure you still need all kinds of people even if the robots design and build themselves mostly, but it would be an order of magnitude less than the people needed otherwise.
Maybe I’m pessimistic but all I see is every call center representative disappearing and that’ll be it
I agree that AI is just a tool, and it excels in areas where an algorithmic approach can yield good results. A human still has to give it the goal and the parameters.
What’s fascinating about AI, though, is how far we can push the algorithmic approach in the real world. Fighter pilots will say that a machine can never replace a highly-trained human pilot, and it is true that humans do some things better right now. However, AI opens up new tactics. For example, it is virtually certain that AI-controlled drone swarms will become a favored tactic in many circumstances where we currently use human pilots. We still need a human in the loop to set the goal and the parameters. However, even much of that may become automated and abstracted as humans come to rely on AI for target search and acquisition. The pace of battle will also accelerate and the electronic warfare environment will become more saturated, meaning that we will probably also have to turn over a significant amount of decision-making to semi-autonomous AI that humans do not directly control at all times.
In other words, I think that the line between dumb tool and autonomous machine is very blurry, but the trend is toward more autonomous AI combined with robotics. In the car design example you give, I think that eventually AI will be able to design a better car on its own using an algorithmic approach. Once it can test 4 million hood ornament variations, it can also model body aerodynamics, fuel efficiency, and any other trait that we tell it is desirable. A sufficiently powerful AI will be able to take those initial parameters and automate the process of optimizing them until it eventually spits out an objectively better design. Yes, a human is in the loop initially to design the experiment and provide parameters, but AI uses the output of each experiment to train itself and automate the design of the next experiment, and the next, ad infinitum. Right now we are in the very early stages of AI, and each AI experiment is discrete. We still have to check its output to make sure it is sensible and combine it with other output or tools to yield useable results. We are the mind guiding our discrete AI tools. But over a few more decades, a slow transition to more autonomy is inevitable.
A few decades ago, if you had asked which tasks an AI would NOT be able to perform well in the future, the answers almost certainly would have been human creative endeavors like writing, painting, and music. And yet, those are the very areas where AI is making incredible progress. Already, AI can draw better, write better, and compose better music than the vast, vast majority of people, and we are just at the beginning of this revolution.
It can solve existing problems in new ways, which might be handy.
can
might
sure. But, like I said, those are subject to a lot of caveats - that humans have to set the experiments up to ask the right questions to get those answers.
That’s how it currently is, but I’d be astounded if it didn’t progress quickly from now.
OpenAI themselves have made it very clear that scaling up their models has diminishing returns and that they’re incapable of moving forward without entirely new models being invented by humans. A short while ago they proclaimed that they could possibly make an AGI if they got several trillion USD in investment.
5 years ago I don’t think most people thought ChatGPT was possible, or StableDiffusion/MidJourney/etc.
We’re in an era of insane technological advancement, and I don’t think it’ll slow down.
i would be extremely surprised if before 2100 we see AI that has no human operator and no data scientist team even at a 3rd party distributor - and those things are neither a lie, nor a weaselly marketing stunt (“technically the operators are contractors and not employed by the company” etc).
We invented the printing press 584 years ago, it still requires a team of human operators.
A printing press is not a technology with intelligence. It’s like saying we still have to manually operate knives… of course we do.
either freeing up several people to pursue leisure or the arts or starve to death from being abandoned by society.
You know EXACTLY which one it’s gonna be.
It can’t design.
define design – I had ChatGPT dream up new musical instruments and then we implemented one. It wrote all the code and architecture, though I did have to prod/help it along in places.
https://pwillia7.github.io/echosculpt3/
you can read more here: https://reticulated.net/dailyai/daily-experiments-gpt4-bing-ai/
Thx, will read.
Neither can the majority of engineers I have met, but that hasn’t stopped them. You really don’t need any design ability if your whole day is having endless meetings terrorizing OEMs.
deleted by creator
It isn’t the intelligence of the machine designer that is the issue, it is the middlemen and the end user.
Continuously having to downgrade machines. Wouldn’t want some sales rep seeing something new.
Hahaha, current ML is basically good guessing, that doesn’t really transfer to building machines that actually have to obey the laws of physics.
is it good guessing that you know when you step out of your bed without looking you won’t fall to your death?
Big fail to forget the /s here…
Why? This is a very real possibility.
Work a blue collar job your whole life and tell me it’s possible. Machines suck ass. They either need constant supervision, repairs all the time, or straight up don’t function properly. Tech bros always forget about the people who actually keep the world chugging.
They suck because your employer wouldn’t pay me more for a better machine. Chemical is where it’s at; outside of power plants and some of the bigger pharmas, the chemical operator is a dead profession. Entire plants are automated, with the only people doing work being those doing repairs or sales.
LLMs aren’t going to be designing anything; they’re just fancy auto complete engines with a tendency to hallucinate facts they haven’t been trained on.
LLMs are preventing real advancements in AI by focusing the attention and funding into what’s evidently a dead end.
AGI != LLMs.
AGI is a pipedream
I hope not. I want more types of sentient beings to exist. But, I also don’t believe any company is actually working towards AGI.
No, the existence of humans inherently disproves that. We just have hardware so advanced many still think it’s magic.
Now, if you said it was a pipe dream within the next decade? I’d agree.
Exactly, but LLMs are preventing further advances in AGI.
Proof?
TFW you realize you’re just a fancy autocomplete engine :P
No, I’m a self-referential pattern recognition machine.
same same?
Why? This is a very real possibility.
AI cannot come even CLOSE to reasoning.
And a submarine can’t even swim.
Proper AI definitely could.
LLMs…? Not a chance, absolute dead end, just a modern Eliza.
if
This word is like Atlas, holding up the world’s shittiest argument that anyone with 3 working braincells can see through.
it isn’t delusional, it is a lie
It’s a big year in robotics, so, the former.
They just mean “steal from the weaker ones” by “create”.
Psychology of advertising a Ponzi scheme.
They say “we are going to rob someone and if you participate, you’ll get a cut”, but change a few things so that people would understand it, yet think that someone else won’t and will be the fool who gets robbed. Then those people, considering themselves smart, find out that, well, they’ve been robbed.
Humans are very eager to participate in that when they think it’s all legal and they won’t get caught.
The idea here is that the “AI” will help some people own others and it’s better to be on the side of companies doing it.
I generally dislike our timeline in the fact that while dishonorable people are weaker than honorable people long term, it really sucks to live near a lot of dishonorable people who want to check this again the most direct way. It sucks even more when that’s the whole world in such a situation.
Let’s not forget this is all driven by people with the right skillset, in the right place at the right time, who are hell-bent on making vast amounts of money.
The “visionary technological change” is a secondary justification.
Permission granted to scrape this comment too, if you like.
AI might be the one to say “solving global warming needs a drastic reduction in car-based infrastructure, plus heavy government regulation and investment in new infrastructure”. They’ll throw out that answer because it isn’t what they wanted to hear.
A point I have been repeating for a while. You can’t out-think every problem. Often the solution is right there and no one wants it.
How do you get in better shape? Diet and exercise. Ok? What exactly was confusing? It’s the same freaken solution that everyone has known forever. Hell, Aristotle talked about the dangers of red meat. They hadn’t even gotten to the point where they thought leeches worked, and they knew that people who ate red meat all the time had medical problems.
There are lots of great solutions to climate change, from stuff that just buys us a little more time (plant a billion trees) to long-term solutions (nuclear and renewables) to hail-mary solutions (climate engineering). And we have tried none of them.
Nah, they’re probably planning to do what Amazon did with their “Just Walk Out” stores… force children into mines and just claim it’s actually AI. As NFTs, cryptocurrency, and so many other hype tech fads have taught us: marketing is cheaper than development.
Just like the industrial revolution!
To be fair, that did improve things for the average person, and by a staggering amount.
The vast majority of people working before the industrial revolution were lowly paid agricultural workers who had enormous instability in employment. Employment was also typically very seasonal, and very hard work.
That’s before we even get into things like stuff being made cheaper, books being widely available, transport being opened up, medical knowledge skyrocketing, famines going from regular occurrence to rare occurrence, etc as a result of the industrial revolution.
We had been on a constant trajectory of everyone getting wealthier up until the late 1970s, after which we saw a sharp rise in inequality, a trend that hasn’t stopped. (Thatcher and her other shithead twin Reagan?)
In the mid 70s, the top 1% owned 19.9% of wealth. Now that figure is around 53%.
Even then it is “only” the west. China was starving only two generations ago. As a whole, humanity just keeps getting richer and richer. No part of what I am saying is meant to excuse the damage neoliberalism did to wealth equality in the developed world.
Well yeah, the industrial revolution only helped the areas it affected. But that kinda goes without saying.
The very first prompt this AGI is given will be “secure as much wealth as possible without breaking any laws that might see us punished”.
Quote from the subtitle of the article
and you can’t stop it.
Don’t ever let life-deprived, perspective-bubble-wearing, uncompassionate, power-hungry manipulators, “News” people, tell you what you can and cannot do. Doesn’t even pass the smell test.
My advice, if a Media Outlet tries to Groom you to think that nothing you do matters, don’t ever read it again.
Closed it as soon as I saw the paywall anyway
god, i love this statement. it’s so true. people have to understand our collective power. even if the only tool we have is a hammer, we can still beat their doors down and crush them with it. all it takes is organization and willingness.
The implication being that this is the deal that the AI boom is offering, it’s not necessarily an endorsement of that philosophy by the writer.
I don’t care what the implication was, I didn’t read past the slight/insult to my character, morality and intelligence. Who is some MSM empty suit to play cognitive narrative shaping with me? Absolutely no one.
Okay.
elias griffin ain’t fucking around. neither am i. weakling pacifists will be crushed under the heel of the coming dystopia.
The Atlantic, huh? Alright then, The Atlantic, I’ll remember your name and that you published a piece concluding people are powerless to effect change.
Now (steelman) can I square this with the sentiment from Propaghandi’s “A People’s History of the World”:
…we’ll have to teach ourselves to analyze and understand
the systems of thought-control.
And share it with each other,
never swayed by brass rings or the threat of penalty.
I’ll promise you- you promise me- not to sell each
other out to murderers, to thieves.
…who’ve manufactured our delusion that you and me
participate meaningfully in the process of running
our own lives. Yeah, you can vote however the fuck
you want, but power still calls all the shots.
And believe it or not, even if
(real) democracy broke loose,
power could/would just “make the economy scream” until we vote responsibly.…
Does this apply here? The song is talking about ballot boxes and corporate exploitation at a nation-state, imperialist level. The topic at hand is corporate exploitation at a worldwide, colonization-of-attention level.
So I think the way I best square this question - do we have the ability to do something about it? - is this:
Yes. You can do something. Not in the way that popular media depicts the French Revolution. Revolution will instead be boring. In fact, it already IS: change minds. Change your own mind about whatever forms of domination you have accepted as just. Demand to know who made OpenAI king. While you’re at it, demand to know why imperialist campaigns by “superpowers” were allowed to justify the Contras. It’s a history lesson we can learn from, believe it or not.
Will you stay down on your knees, or does power still call all the shots?
Any paywall that lets you read that much of an article before showing itself to be a paywall can burn in hell, and would have no hope of getting my business, purely out of spite.
FWIW if you turn off scripts you can see the whole article.
I need a hot key on my android phone to just flip off scripts real quick instead of having to go three pages deep in settings to turn it on or off.
I just use the NoScript extension on Firefox, though it still takes a couple clicks to whitelist or temp-whitelist a site. Apparently uBlock Origin can do the same in Advanced mode, but I never got around to figuring it out.
I want a blacklist instead of a whitelist. On by default, but always off on some sites.
you can do that with noscript, too.
Now we’re talking. I wonder if I can set up the apk I use for Lemmy (Thunder) to use ff instead of chrome. Time to check some stuff.
Just use Reader view or whatever that’s called in your browser. I use Arctic for Lemmy on iOS and it has a ‘default to reader’ for opening links. Can’t remember the last time I saw a paywall. There’s one news site that doesn’t work but it’s pretty obvious straight away.
Yes just ask chatgpt to read it for you and give it the url
Like this
https://chatgpt.com/share/d9010273-9e39-4db0-b05d-0986d7044b7f
Abolish intellectual property, it is a mental illness that has infected our legal system.
“We need you to reconsider… because we already did it and we’re just looking for your stamp of approval after the fact.”
AI has barely started infecting things, it’s still avoidable… Yet even at this early stage it’s obvious these companies have no morality and are willing to break laws and violate social norms.
It’s obvious they’re evil and they’ve barely just begun.
Corporations are as callous and mechanical as they have always been, with an ever expanding range of tools to exploit. They will do anything and everything they can unless it is less profitable to do it.
asking for forgiveness rather than permission sorta just seems to be their policy these days, yeah?
If by “forgiveness” you mean an avoidance of legal liability, sure. :P
our collective time would be better spent destroying capitalism than trying to stop AI. AI is wonderful in the right social system.
On the other hand, assuming the social system isn’t the right one, AI fully realized could hypothetically make it more unreasonable and lock it more tightly in place.
Not to mention, any other, more just social system wouldn’t be fucking decimating the environment, ultimately hurting the poorer nations first, for money. And AI is accelerating our CO2 output when we need to be drastically cutting it back. This is very much a pacifying tool as we barrel toward oblivion.
AI is accelerating our CO2 output
Could you explain that a little bit more please?
https://www.ft.com/content/61bd45d9-2c0f-479a-8b24-605d5e72f1ab
https://www.technologyreview.com/2023/12/05/1084417/ais-carbon-footprint-is-bigger-than-you-think/
https://hai.stanford.edu/news/ais-carbon-footprint-problem
When the world needs to be drastically altering our way of life to avert the worst of climate change, these companies are getting away with accelerating their output and generating tons of investment and revenue because “that’s what the market dictates.” Just like with crypto/blockchain a few years ago, adding “AI” into any business pitch/model is basically printing money. So companies are more inclined to incorporate this machine learning tech into their business, and this is all happening while the energy demand for increased usage and the constant “updates” and advancements in the field are gobbling up way more energy than we can honestly afford—and really even conceive of. Because they’re trying to hide this fact, given, yknow, the world fuckin ending. Basically, the market and the entire system of media is encouraging and fawning over this “leap” in tech, when we can’t realistically afford to continue our habits we had before this market even existed. So they are accelerating co2 output, everyone cheers, and we all ride merrily to the edge of our doom.
It’s capitalism once again destroying us and the planet for profit. And everyone mindlessly jumps on board, ooh’ing and aww’ing at the stupid new shit they’re doing (while they infringe upon the work of all artists without compensation, driving human creativity out of the job market in favor of saving corporations some scratch by firing their artists and using AI instead). I genuinely can’t conceive of how people seem so on board with this concept.
"Cutting-edge technology doesn’t have to harm the planet, and research like this is very important in helping us get concrete numbers about emissions. It will also help people understand that the cloud we think that AI models live on is actually very tangible," says Sasha Luccioni, an AI researcher at Hugging Face who led the work.
"Once we have those numbers, we can start thinking about when using powerful models is actually necessary and when smaller, more nimble models might be more appropriate," she says.
That’s a shame, and I’m not surprised at all to see that corporations are using AI for completely unimportant things.
But one thing to consider is that AI could also lead to solutions that help save the planet, like solving problems with fusion technology. I still believe in science, and I still believe that capitalism is the root of the problem, not the technology itself.
I mean, sure, I agree with you. Capitalism is the problem, no question. I would love a job-replacing tech so people could live lives of leisure and art. But… this system is being built for capitalist ends. It’s built by, funded by, and being put in the hands of the exact people causing the problem.
I agree that in a hypothetical world, machine learning technology could very well help humanity. But the code and money is in the hands of people who aren’t interested in helping humanity.
I’m no fan of forced labor for basic necessities, and I’m not advocating for that system by any means. But this tech, in this world, will drive the cost of labor down, drive people from the jobs they’ve been forced to rely upon, and it’s literally taking over one of the few job fields where people actually got to express their humanity for their wages: art. Creative writing and design/visual art are among the few fields people actually dreamt of working in, because they offered us a living for creating. For being human. And that tiny outlet of humanity in the vast contrivance of capitalism is being devoured by this tech.
That’s just one small part of my distrust of “AI.” But the underlying problem is as I stated first, which is that this tech, existing in this world at this point in time, isn’t going to free us. It’s another tool by the ownership class to cut costs, decimate the environment, and drive profit. While also killing the small little sliver of human creativity that was allowed to exist under capitalism.
So again, hypothetically, yes, the tech could be a force for good and for human liberation from meaningless work. But it’s actually making our work even more meaningless, while sequestering another huge chunk of power for the ruling class. It would be great if it could reach its potential as a force for good. But given everything, that is not how it’s being implemented.
your points are completely valid, which is why we really need to start banding together to dismantle the ownership class
by
any
means
necessary
for the sake of humanity (and all other living things on the planet)
Exactly. I know it’s easy to automatically froth at the mouth with rage when seeing “AI”, and here anything mentioning it gets automatically rejected, but there are genuinely good use cases.
Amazing speech synthesis and recognition is useful for anybody, but especially people with certain disabilities.
Much better translation, spell checking, help with writing. Helping people understand texts that are written in a complicated way (legalese, technical jargon, condensing EULA’s, etc)
Infrastructure planning and traffic control.
Grid energy usage and distribution.
Image recognition: useful for anybody for things like searching a photo library for a specific thing, but also for people with visual impairments who previously had to rely on awful screen reader software that can’t tell you the content of an image unless it was properly tagged (speaking as someone with a blind sister who uses computers - proper tagging is rare!)
Spotting fake reviews, a massive issue online. Flagging bot accounts.
The potential for them to take over some jobs and free up people to pursue other things in life.
This technology, if trained ethically, and not used to siphon more data from people, is amazing. It’s how megacorps are using it that’s the problem.
I mean, that’s just how it has always worked, this isn’t actually special to AI.
Tom Hanks does the voice for Woody in the Toy Story movies, and his brother Jim Hanks has a very similar voice, but since he isn’t Tom Hanks he commands a lower salary.
So many video games and whatnot use Jim’s voice for Woody instead to save a bunch of money, and/or because Tom is typically busy filming movies.
This isn’t an abnormal situation, voice actors constantly have “sound alikes” that impersonate them and get paid literally because they sound similar.
OpenAI clearly did this.
It’s hilarious because normally fans are foaming at the mouth if a studio hires a new actor and they sound even a little bit different than the prior actor, and no one bats an eye at studios efforts to try really hard to find a new actor that sounds as close as possible.
Scarlett declined the offer and now she’s malding that OpenAI went and found some other woman who sounds similar.
Thems the breaks, that’s an incredibly common thing that happens in voice acting across the board in video games, tv shows, movies, you name it.
OpenAI almost certainly would have won the court case if they were able to produce who they actually hired and said person could demo that their voice sounds the same as Gippity’s.
If they did that, Scarlett wouldn’t have a leg to stand on in court, she cant sue someone for having a similar voice to her, lol.
She sure can’t. Sounds like all OpenAI has to do is produce the voice actor they used.
So where is she? …
Right.
Get real. They made it sound like her deliberately - not just somebody “nearly alike”. They even admitted it.
That was the point… Did you reply to the wrong comment?
Removed by mod
Government administrations wanting to know the ingredients of how something is made isn’t exactly new.
Removed by mod
That’s flattering, but I was actually just expecting a press release. So where is it?
Yes but also no - the whole appeal is tied to her brand (her public image crossed with the character in Her), unlike Woody, who is an original creation.
It’s like doing a commercial using a lookalike dressed like the original guy and pretending that’s a completely different actor.
I agreed with OP, then I read your astute response, and now I don’t know which position is correct.
Thinking it through as I type… If you photoshopped an image of Tom Hanks giving a thumbs up to your product, that would clearly be illegal. But if you hired an exact, flawless lookalike impersonator of Tom Hanks and had him pose for a picture giving a thumbs up to your product, would that be illegal? I think it might still be, because you purposely hired a lookalike impersonator to gain the benefit of Tom Hanks’ brand.
I think the law on AI should match what the law says about impersonators. If hiring an indistinguishable celebrity impersonator to use in media is legal, then ai soundalikes should be legal too, and vice versa.
when you get into these nitty gritty copyright/ip arguments you realize it’s all just a house of cards to make capital king and the main ism
I think what it comes down to is intention. Are you intending to mimic someone else’s likeness without that person’s permission? That’s wrong. But if you just like someone’s voice and want to use them, and they happen to have a similar likeness, that’s fine.
Where OpenAI gloriously fucked up was asking Johansson first. If they hadn’t, they would have had plausible deniability that they just liked the voice actor’s voice. If it reminds them of Johansson, that’s even fine. What’s wrong is that they specifically wanted her likeness, even after she turned them down.
I get that she is grappling with identity and it’s not a clear cut case, but if the precedent is set that similar voices (and I didn’t even think it was that similar in this case) are infringement, that would be a pretty big blow to commercial creativity projects.
Maybe it’s more a brand problem than an infringement problem.
That reminded me of Ice Ice Baby and the rip-off of Queen’s Under Pressure bass riff. Queen won, I think.
I don’t think this is the same thing though. They asked her, she said no, they went for her cute cousin instead… typical.
The difference is that apparently they asked ScarJo first and she said no. When they ask Tom Hanks (or really his agent, I assume) the answer is “he’s too busy with movies, try Jim”.
You think celebrities need to consent to someone that sounds similar to them getting work? That’s insane.
Having a talking woman in your phone is not stealing Scarlet Johansson’s likeness, even if they sound somewhat similar. US copyright law is already ridiculous, and you want to make it even more bullshit?
By that logic her role in Her was already stealing the voice actor for Siri’s likeness, and she should have sued for that too.
If you don’t own your image what do you own?
Also, you know, scale. There is a difference between an Elvis impersonator in Vegas vs a huge-ass corporation.
You own the pile of money you earned for the role you played in someone else’s creative project.
This isn’t Back to the Future 2 making a Crispin Glover face mask and putting it on an extra; it’s using a woman for a voice acting role for an AI speaking from your phone. And somehow that’s stealing from a movie with the same concept, but not stealing from the actual phone AIs voiced by women that existed before the movie.
How would you feel if I made wheelbarrows of money off your face or voice without your consent and without paying you a penny? What about your family - got any relatives you care about who would look great in my AI-generated porno?
The world is schizophrenic about this. On one hand we know that data is king and knowing about a person and having access to what they produce is a super important very lucrative field. The biggest companies on earth buy and sell data about people. On the other hand we argue that your image and data has no value and anyone can do what they want with it.
Then I’d have grounds to sue you for stealing my likeness, just like Crispin Glover did in the example I just gave.
Are you under the impression that’s what happened here? It isn’t. The voice is clearly not Scarlet Johansson’s, and she doesn’t have any kind of ownership over the concept of an AI in your phone using an upbeat woman’s voice to speak to you.
Scarlett actually would have a good case if she can show the court that people think it’s her. Tom Waits won a case against Frito Lay for “voice misappropriation” when they had someone imitate his voice for a commercial.
Well, in the “soundalike” situation you describe people were getting paid to voice things. Now it’s just an AI model that’s not getting paid and the people that made the model probably got paid even less than a soundalike voice actor would. It’s just more money going to the top.
Wouldn’t the difference here wrt Tom/Woody be that Tom had already played the role before so there is some expectation that a similar voice would be used for future versions of Woody if Tom wasn’t available?
Serious question, I never thought about the point you made so now I’m curious.
I wish I had enough bandwidth to be angry at a new voice actor being hired to play in a children’s movie franchise.
Removed by mod
“Yeah, let’s go up against the woman who sued Disney and won. What could go wrong!?”
The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
It’s just so fitting that microsoft is the company most fervently wallowing in it.
I hate that I have to keep saying this: no one seems to be talking about the fact that by giving their AI a human-like voice with simulated emotions, they inherently make it seem more trustworthy, which will get more people to believe its hallucinations are true. And then there will be the people convinced it’s really alive. This is fucking dangerous.
Please keep saying it.
I plan to. It really upsets me.
paywall.
It’s still just an LLM, and therefore just autocomplete
Some days I’m just an autocomplete
deleted by creator
Moot point
OpenAI should have given some money to the people who own the movie “Her”. Then they could have claimed they were just mimicking the character.
It doesn’t work like that. It will soon if Disney has their way, with actors selling away their likeness rights in perpetuity in their contracts.
What do you think the actor’s strike was about? And what do you think one of the key agreements the actors wrung out of the studios was? They were not about to allow their likenesses to be sold for all of eternity for pennies on the dollar.
Oh I know, that’s what I was referring to.
That’s very interesting… can you suggest a good article covering this topic?
For example this: https://www.cnbc.com/2023/08/08/disney-reportedly-creates-task-force-to-explore-ai-and-cut-costs.html and this: https://www.rollingstone.com/tv-movies/tv-movie-news/sag-aftra-amptp-studios-contract-artifial-intelligence-details-1234877093/ The second article details how, thanks to the strike, actors can consent (or not) to digital copies of themselves and be reimbursed for their use, but those digital actors are still coming.
Thank you, that was very interesting, especially the second article. It’ll be worth watching the details of how this plays out over the coming years.
I wish Altman would read Accelerando.
Knowing people like him, he would probably take the obvious literary warnings from a book like that and use them as inspiration for how to build an even more dystopian nightmare.
Which this very story proves. The AI voice that they generated was specifically based on Her, a movie about a guy who falls in love with an AI voice assistant. I haven’t seen the movie, but I’m going out on a limb to guess this is another “don’t create the torment nexus” situation.
The movie is actually pretty non-dystopian and kind of sweet. It’s basically a romcom, just one with a very creative premise.