

“AI is just like smartphones” yes thank you for this statement that we definitely haven’t heard dozens of times before
gay blue dog
ah, yes, i’m certain the reason the slop generator is generating slop is because we haven’t gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i’m certain this model, unlike literally every other model from the past several years, will definitely overcome the basic and obvious structural flaws in trying to build a knowledge engine on top of a stochastic text prediction algorithm
no worries – i am in the unfortunate position of very often needing to assume the worst in others and maybe my reading of you was harsher than it should have been, and for that i am sorry. but…
“generative AI” is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.
LLMs are inherently unfit for every purpose. they might be “useful”, in the sense that a rock is useful for driving a nail through a board, but they are not tools in the same way hammers are. the only exception to this is when you need a lot of text in a hurry and don’t care about the quality or accuracy of the text – in other words, spam and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.
so when ostensibly-smart people, and especially ones who are running public information systems, propose using LLMs for things LLMs are unable to do, such as explaining species identification procedures, it means either 1) they’ve been suckered into believing they’re capable of doing those things, or 2) they’re being paid to propose those things. sometimes it is a mix of both. either way, it very much indicates those people should not be trusted.
furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the technology industry is openly hostile to the idea of “consent”, actively trying to undermine it at every turn. that hostility even made it into the supposedly reassuring statement on that forum post about the mystery demo LLMs – note the use of the phrase “making it opt-out”. why not “opt-in”? why not “with consent”?
it’s no wonder that people are leaving – the writing is more or less on the wall.
“emotional”
let me just slip the shades on real quick
“womanly”
checks out
don’t post slop, nobody wants to read any of that
this one is a joke, i think. he is definitely on the fashy bullshit though
i can admit it’s possible i’m being overly cynical here and it is just sloppy journalism on Raffaele Huang/his editor/the WSJ’s part. but i still think it’s a little suspect, on the grounds that we have no idea how many times they had to restart training due to the model borking, or what other experiments and hidden costs there were – and that’s before things like the necessary capex (which goes unmentioned in the original paper, though they note using a 2048-GPU cluster of H800s, which would put them down around $40m). i’m thinking in the mode of “the whitepaper exists to serve the company’s bottom line”
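(for the curious, that ~$40m is just back-of-envelope multiplication – a rough sketch in python, where the per-GPU price is my own assumption rather than anything from the paper:)

```python
# back-of-envelope capex sketch, NOT from the paper: the cluster size is the figure
# DeepSeek reported; the per-GPU price is my own ballpark guess, not a sourced number
num_gpus = 2048                 # 2048 H800s, per the DeepSeek paper
price_per_gpu_usd = 20_000      # assumed street price per H800 (my assumption)
capex_usd = num_gpus * price_per_gpu_usd
print(f"~${capex_usd / 1e6:.0f}M in GPUs alone")  # prints "~$41M" – hardware only, no power/staff/networking
```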
btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn
consider this paragraph from the Wall Street Journal:
DeepSeek said training one of its latest models cost $5.6 million, compared with the $100 million to $1 billion range cited last year by Dario Amodei, chief executive of the AI developer Anthropic, as the cost of building a model.
you’re arguing to me that they technically didn’t lie – but it’s pretty clear that some people walked away with a false impression of the cost of their product relative to their competitors’ products, and they financially benefitted from people believing in this false impression.
i think you’re missing the point that “Deepseek was made for only $6M” has been the trending headline for the past while, with the specific point of comparison being the massive costs of developing ChatGPT, Copilot, Gemini, et al.
to stretch your metaphor, it’s like someone rolling up with their car, claiming it only costs $20 (unlike all the other cars that cost $20,000), only to find out that number is just how much it costs to fill the gas tank up once
“blame the person, not the tools” doesn’t work when the tool’s marketing team is explicitly touting said tool as a panacea for all problems. on the micro scale, sure, the wedding planner is at fault, but if you zoom out even a tiny bit it’s pretty obvious what enabled them to fuck up for as long and as hard as they did
syncthing is an extremely valuable piece of software in my eyes, yeah. i’ve been using a single synced folder as my google drive replacement and it works nearly flawlessly. i have a separate system for off-site backups, but as a first line of defense it’s quite good.
“i reflexively identify with the openly-fascist right-wing base that has found its home on elon’s twitter, and since i’m a reasonable person, the evidence that they’re flagrantly conspiracy-minded and/or are CSAM posters simply must be fabricated”
you have to scroll through the person’s comments to find it, but it does look like they did author the body of the text and uploaded it as a docx into ChatGPT. so points for actually creating something, unlike the AI bros
it looks like they tried to use ChatGPT to improve narration. to what degree the token smusher has decided to rewrite their work in the smooth, recycled plastic feel we’ve all come to know and despise remains unknown
they did say they are trying to get it to generate illustrations for all 700 pages, and moreover appear[ed] to believe it can “work in the background” on individual chapters with no prompting. they do seem to have been educated on the folly of expecting this to work, but as blakestacey’s other reply pointed out, they appear to now be just manually prompting one page at a time. godspeed