They think if they say the criminally insane part out loud it will protect them.
Um, human history has repeatedly demonstrated that when a new technology emerges, the two highest priorities are:
- How can we kill things with this?
- How can we bone with this?
Right, but a chatbot is light years away from artificial general intelligence. And let’s be honest: if we want to go the pornography route, that’s going to need robots, which would need general intelligence. As would, in fact, killer drones.
So yeah, scepticism is highly warranted
If you’ve ever wondered why porn sites use pictures of cars, buses, stop signs, traffic lights, bicycles and sidewalks in their captchas, it’s because they’re using the data to train car-driving AIs to recognize those patterns.
This is not what an imminent breakthrough in cancer research looks like.
Source?
Google reCAPTCHA? They literally talk about this publicly. It’s in their mission statement or whatever. It’s used to train other kinds of models too.
They were. They haven’t been using reCAPTCHAs to collect training data for years now.
Y’know, it’s bullshit that a) you seem to expect this to be common knowledge, as if everyone is supposed to have an archive of internet minutiae saved in their heads or have read and remembered any such info at all…
And b) you chose to downvote and pretty much just said LMGTFY without even the sarcastically provided results instead of backing up your claim. It’s basic courtesy to provide a source for claims instead of downvoting like it’s some kind of affront to your ego that someone wants info on your claim.
It’s not even my claim you are talking about, jackass. Read the usernames. If you have fallen into the rabbit hole that is Lemmy, you should have been around long enough to know about reCAPTCHA. If not, it’s one DuckDuckGo search away. In fact, you could just click the link on the reCAPTCHA itself that explains how they use the data for training. Hardly arcane knowledge.
Your comment to me read like Sealioning.
Ah, that makes it so much better. My bad for you jumping into an argument randomly? You’re not improving my view of the shitty attitude here when you double down on “you should have known.”
I appreciate that this post is using dark mode
FYI, using OpenAI/ChatGPT is expensive. Programming it to program users into dependency on its “friendship” gets them to pay for more tokens, and then why not blackmail them or coerce/honeypot them into espionage for the empire? If you don’t understand yet that OpenAI is an arm of the Trump/US military, consider that among its pie-in-the-sky promises is $35B for datacenters in Argentina.
What espionage is an AI simp going to be able to conduct?
I’m pretty sure this is just them flailing around not being able to come up with anything meaningful so they’re going this route so they have some profit. I don’t think a conspiracy beyond that is required.
startups hyping shit up to get the investors drooling is one of the most despicable things a man can observe.
The thing that makes it actually bad is that they’re taking advantage of the mentally handicapped (investors). That, and said investors have millions to toss at nonsense while so many people are lucky to have pennies to toss at such luxuries as “food”.
Honestly though I don’t think taking advantage of evil people, who swear they deserve their millions because they’re definitely super smart, is really anything I care that much about.
facts
We are closer to making horny chatbots than a superintelligence figuring out a cure for cancer.
Actually, if the latter wins, would that super AI win a Nobel prize?
what if my kink is curing cancer?
It would probably go to whoever uses it to find the cure… And to none of the authors who wrote the data that it was trained on
That’s how the Nobel prize always works. The prize goes to whoever managed to cross the finish line, not to all the thousands of scientists before them who conducted the preliminary research.
Yeah, fair point
To be fair, a better pattern finder could indeed lead to better ways of curing cancer.
This year has just been a constant stream of examples of why capitalism is stupid. Machine learning has a lot of utility in medical research. Imagine if it were deployed in a way that benefits society instead of maximizing techbros’ profits.
Well, guess I know what I’m using ASI for.
There’s not a single world where LLMs cure cancer, even if we decided to give the entirety of our energy output and water to a massive server using every GPU ever made to crunch away for months.
Not strictly LLMs, but neural nets are really good at protein folding, something that very much directly helps with understanding cancer, among other things. I know an answer doesn’t magically pop out, but it’s important to recognise the use cases where NNs actually work well.
I’m trying to guess what industries might do well if the AI bubble does burst. I imagine there will be huge AI datacenters filled with so-called “GPUs” that can no longer even do graphics. They don’t even do floating point calculations anymore, and I’ve heard their integer matrix calculations are lossy. So, basically useless for almost everything other than AI.
One of the few industries that I think might benefit is pharmaceuticals. I think maybe these GPUs can still do protein folding. If so, the pharma industry might suddenly have access to AI resources at pennies on the dollar.
Integer calculations are only “lossy” because they’re integers; there is nothing extra there. Those GPUs have plenty of uses.
I don’t know too much about it, but from the people who do, these things are ultra-specialized and essentially worthless for anything other than AI-type work:
> Anything post-Volta is literally worse than worthless for any workload that isn’t lossy low-precision matrix bullshit. H200s can’t achieve the claimed 30TF at FP64, which is a less than 5% gain over the H100. FP32 gains are similarly abysmal. The B100 and B200? <30TF FP64.
> Contrast with the AMD Instinct MI200 @ 22TF FP64, and the MI325X at 81.72TF for both FP32 and FP64, but 653.7TF for FP16 lossy matrix. More usable by far, but still BAD numbers. VERY bad.
AI isn’t even the first or the twentieth use case for those operations.
All the “FP” figures are about floating-point precision, which matters more for training and finely detailed models, especially FP64. Integer-based matrix math comes up plenty often in optimized cases, which are becoming more and more the norm, especially with China’s research on shrinking models while retaining accuracy metrics.
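To make “optimized cases” concrete: a lot of that integer matrix math comes from quantization. Here’s a minimal toy sketch (NumPy, made-up matrix, not any production scheme) of squeezing FP32 weights into int8. Note the “loss” happens at the conversion step, not in the integer math itself:

```python
import numpy as np

# Toy post-training quantization of one FP32 weight matrix to int8.
w = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(w).max() / 127.0            # map the largest weight to +/-127
w_int8 = np.round(w / scale).astype(np.int8)

# Inference can then run integer matmuls and rescale the result.
w_restored = w_int8.astype(np.float32) * scale
print(np.abs(w - w_restored).max())        # small rounding error from quantization
```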
But giving all the resources to LLMs slows/prevents those useful applications of AI.
which fucking sucks, because AI was actually getting good: it could detect tumours, it could figure things out fast, it could recognise images as a tool for the visually impaired…
But LLMs are none of those things. All they can do is look like text.
LLMs are an impressive technology, but so far they’re nearly useless and mostly a nuisance.
Down in Ukraine we have a dozen or so image-analysis projects that can’t catch a break, because all investors can think about is either swarm drones (quite understandably) or LLM nothingburgers that burn through money and evaporate every nine months. Meanwhile those image-analysis projects manage to progress on what is basically scraps and leftovers.
The problem is that technical people can understand the value of different AI tools. But try telling an executive with a business major how mind-blowing it is that a program trained on Go and StarCraft can solve protein folding (I studied biology in 2010, and they kept repeating how impossible solving proteins in silico was).
But a chat bot that tells the executive how smart and special it is?
That’s the winner.
yeah, that’s tough to beat
Multimodal LLMs are definitely a thing, though.
Yeah, but it’s better to use the right tool for the job than to throw a suitcase full of tools at the problem.
That’s not…
sigh
Ok, so just real quick top level…
Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).
This happens because the model has to combine a lot of different pieces of information into one coherent internal representation (what’s called the “latent space”).
This process is medium agnostic. If given text it will do it with text, if given photos it will do it with photos, and if given both it will do it with both and specifically fitting the intersection of both together.
The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.
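If it helps to see “medium agnostic” as code, here’s a minimal PyTorch sketch (hypothetical dimensions, nobody’s actual model): each modality just gets its own projection into the same latent space, and one transformer attends over the combined sequence.

```python
import torch
import torch.nn as nn

D_MODEL = 512            # shared latent dimension (made up for illustration)
VOCAB_SIZE = 32000       # text vocabulary size
PATCH_DIM = 16 * 16 * 3  # one flattened 16x16 RGB image patch

# Each modality gets its own projection into the SAME latent space...
text_embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
patch_embed = nn.Linear(PATCH_DIM, D_MODEL)

# ...and a single transformer processes the combined sequence.
layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
transformer = nn.TransformerEncoder(layer, num_layers=6)

tokens = torch.randint(0, VOCAB_SIZE, (1, 20))  # 20 text tokens
patches = torch.randn(1, 49, PATCH_DIM)         # 49 image patches

seq = torch.cat([text_embed(tokens), patch_embed(patches)], dim=1)
out = transformer(seq)  # text and image influence each other in one latent space
```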
There’s a pretty big difference in the UI/UX of code written by multimodal models vs text-only models, for example, or in the utility of sharing a photo and saying what needs to be changed.
The idea that an old school NN would be better at any slightly generalized situation over modern multimodal transformers is… certainly a position. Just not one that seems particularly in touch with reality.
The main breakthrough of LLMs happened when they figured out how to tokenize words… The transformer architecture itself was already being tested on various data types and struggled compared to similarly advanced CNNs.
When they figured out word encoding, it created a buzz because transformers could work well with words. They never quite worked as well on images; for that, Stable Diffusion (a variation on CNNs) has always been better.
It’s only because of the buzz around LLMs that they tried applying them to other data types, mostly because that’s how they could get funding. By throwing in a disproportionate amount of resources, it works… but it would have been so much more efficient to use different architectures.
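(For anyone who hasn’t seen what that tokenization step actually produces, here’s a quick illustration using OpenAI’s tiktoken library, assuming you have it installed; the exact IDs depend on the encoding you pick.)

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

ids = enc.encode("Transformers tokenize text into subword pieces")
print(ids)                              # integer token IDs
print([enc.decode([i]) for i in ids])   # the subword each ID maps back to
```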
What year are you from? Have you not seen Gemini Flash, ChatGPT 4o, Sora 2, Genie 3, etc?
Stable Diffusion hasn’t been SotA for over a year now in a field where every few months a new benchmark is set.
Are you also going to tell me about how we’d be better off using ships for international travel because the Wright brothers seem to be really struggling with their air machine?
Hehe, true! I left the field about 4 years ago when it became obvious that “more GPUs!” was better than any architectural design changes…
Most of the image generation in the products you mention is based on a mix of LLMs (for processing user inputs) and some other modality-specific architecture for other media types. Last time I checked, ChatGPT was capable of handling images only because it offloaded the image processing to a branch of the architecture that was not a transformer, or at least not a classical transformer. They did have to graft CNN parts onto the LLM to make progress.
Maybe in the last 4 years they reorganised it to completely remove CNN blocks, but I think people call these models “LLMs” only as a shorthand for the core of the architecture.
Again, you said that a new benchmark is set every few months, but considering they’re just consuming more power and water, it’s quite boring, and I’d argue it’s not really progress in the academic/theoretical sense. That attitude is exactly why I don’t work with NNs anymore.
go ask chatgpt to fold a protein
Oh, wow, look at that… research from just a few weeks ago on protein folding using general transformers. Huh.
erm, that’s a transformer model, same as AlphaFold.
chatgpt is a transformer as well.
my point is that an LLM can’t do shit like a specialised tool can.
that example is a specific tool with a specific use.
And it’s clear we’re nowhere near achieving true AI, because those chasing it have made no moves to define the rights of an artificial intelligence.
Which means that either they know they’ll never achieve one by following the current path, or that they’re evil sociopaths who are comfortable enslaving a sentient being for profit.
> they’re evil sociopaths who are comfortable enslaving a sentient being for profit.
i mean, look what is happening in the united states. that would be completely unsurprising to happen here.
It’s DEFINITELY both.
They sure do cure horny though.
There are tons of AIs besides the chat bots. There are definitely cancer hunter seekers.
Good thing I said “LLM” not “AI”.
ehm… cure cancer? I thought “cute cancer”. Sorry bout that
Either you are (or genuinely believe you are) 18 months away from curing cancer (24 or 36, doesn’t matter) or you’re not.
What would we as outsiders observe if they had told their investors two years ago that they were 18 months away, and now the cash is running out in 3 months?
Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.
The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.
Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.
¹ Logistically, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
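(To spell out that footnote: logistic growth with rate $k$ and ceiling $L$,

$$\frac{dx}{dt} = kx\left(1 - \frac{x}{L}\right),$$

reduces to plain exponential growth $\frac{dx}{dt} \approx kx$, i.e. $x(t) \approx x_0 e^{kt}$, whenever $x \ll L$. So the two curves are indistinguishable until you get near the ceiling.)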
why is it very likely to do that? we have no evidence at all to believe this is true, and several decades of slow, plodding AI research suggest that real improvement comes incrementally, like in other research areas
to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums
Because AI can write programs? As it gets better at doing that, it can make AIs that are even better, etc. etc. Positive feedback loops increase exponentially.
The problem with that is they can’t actually point to a metric where, once the number goes beyond some point, we’ll have ASI. I’ve seen graphs with a dotted line that says “ape intelligence” and, a bit higher up, a dotted line that says “human intelligence”. But there’s no meaningful way they could actually have placed human intelligence on a graph of AI complexity, because brains are not AI, so they shouldn’t even be on the graph.
So even if things increase exponentially there’s no way they can possibly know how long until we get AGI.
Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets; LLMs will not get there.
Oh, that’s why they are restricting “organic” porn: to sell AI porn. Damn.
No money in curing cancer with an LLM. Heaps of money in taking advantage of increasingly alienated and repressed people.
There’s loads of money in curing cancer. For one you can sell the cure for cancer to people with cancer.
That’s a weird take! It makes much more ~~money~~ sense to sell long-term ~~subscriptions~~ treatments rather than a one-time cure. /s of course
You could sell the cure for a fortune. Imagine something that can reliably cure late stage cancers. You could charge a million for the treatment, easily.
Yes, selling the actual cure would be profitable… but an LLM would only ever provide the text for synthesizing it, and none of the extensive testing, licensing, or manufacturing. An existing pharmaceutical company would have to believe the LLM and then front the costs of development, testing, and manufacture, which constitute a large proportion of the cost of bringing a treatment to market. Burning compute time on that is a waste of resources, especially when fleecing horny losers is available right now. It is just business.
and LLMs hallucinate a lot of shit they “know” nothing about. a big pharma company spending millions of dollars on an LLM hallucination would crack me the fuck up were it not such a serious disease.
Right, that is why I originally said there is no money in a cancer cure invented by LLM. It’s just not a serious possibility.
What a weird take; researchers use AI already! Some researchers even research things that, gasp, aren’t monetisable right away!
I used to work in academic physics, and I currently work in data science. I am deeply familiar with both ends of the subject in question. LLMs are useful research tools because they speed up the reference finding and literature review process, not because they synthesize new information that does not need to be independently verified.
In the context of medical research, they could absolutely use LLMs to facilitate a literature search. What LLMs cannot do is hand researchers a proposed cure that they could sell to people. You still need to do the leg work of synthesizing the molecules, standardizing the process, industrializing it, patenting it, multiple rounds of testing on increasingly complex animals and eventually people, and then going through the drug approval process with the FDA and others. LLMs speed up the CHEAPEST and EASIEST part of the research process. That is why LLMs will not be handing us the cure for cancer.
But how else would it find the hard lump on your testicles?
What about an AI naughty nurse that does both?
You can use AI to fulfill your fantasies! For example: having healthcare (if you’re not American, this joke does not apply).
i’d rather lucid dream i have healthcare my friend. then i can use my care bear stare laser beam to apply vengeance to incompetent healthcare providers and administrators such that they will never know what it is like to satiate their hunger again. then ride a giant flying tardigrade named Hairy Terry off into the sunset.
LLMs only let me imagine it, not (from my perception) experience it. and remember, no crimes without Hairy Terry on lookout