I use a DIY keyboard called the Dactyl Manuform and I love it to bits
Sometimes they’re less of a short king and more like a small jester
Someone forgot a debounce
Hexarei@beehaw.org to Programmer Humor@programming.dev • The official Introduction to Github page included an AI-generated graphic with the phrase "continvoucly morged" on it, among other mistakes.
1 · 11 days ago
And then made no effort to proofread it.
Yeah they make no sense for most devices. For high end gaming laptops I can understand it though - My laptop has a 300W brick that is as slim as I would bet they could get it at the time.
Hexarei@beehaw.org to Linux@lemmy.ml • Cursed screenshot: XFCE desktop from remote machine launched over KDE Plasma of local machine
1 · 12 days ago
I’ve used Xpra for similar.
Hexarei@beehaw.org to Asklemmy@lemmy.ml • I think Lemmy in general is very against AI. I'm rather new here, is it like a fediverse group thing or is this even based on reality?
3 · 12 days ago
I’ve had good success on similar hardware (5070 + more RAM) with GLM-4.7-Flash, using llama.cpp’s --cpu-moe flag. I can get up to 150k context with it at 20-ish tok/sec. I’ve also found it a lot better for agentic use than GPT-OSS: it puts in a much more in-depth reasoning effort, so while it spends more tokens, it seems worth it for the end result.
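For reference, a llama.cpp server invocation along those lines might look like the sketch below. --cpu-moe is a real llama.cpp flag that keeps the MoE expert tensors in CPU RAM while dense layers go to the GPU; the model filename, -ngl value, and exact context size here are illustrative assumptions, not taken from the comment.

```shell
# Sketch: serve a MoE GGUF with llama.cpp, expert weights offloaded to CPU.
# Filename, layer count, and context length are illustrative guesses.
llama-server \
  -m GLM-4.7-Flash-Q4_K_M.gguf \
  --cpu-moe \
  -ngl 99 \
  -c 150000
```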
Hmmm. Anal Cobalt? Anal Grand Cherokee? Anal R8?
Neovim has been my primary IDE for years, welcome to the vimclub :-)
Hexarei@beehaw.org to Programmer Humor@programming.dev • o(1) statistical prime approximation
5 · 13 days ago
Because only 5% of those numbers are prime.
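The 5% figure tracks the prime number theorem: prime density near n is roughly 1/ln(n), and for nine-digit numbers that's about 1/ln(10^9) ≈ 4.8%. A quick sieve (my sketch, not from the post) shows the density at a smaller scale:

```python
def count_primes(limit):
    """Sieve of Eratosthenes: count primes strictly below `limit`."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # Knock out every multiple of i starting at i*i
            sieve[i * i::i] = b"\x00" * len(range(i * i, limit, i))
    return sum(sieve)

density = count_primes(10**6) / 10**6
print(f"primes below 1e6: {count_primes(10**6)} (density ~{density:.3f})")
# → primes below 1e6: 78498 (density ~0.078)
```

So a constant-time "not prime" already gets ~92% accuracy below a million, and the density keeps falling as numbers get bigger.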
Hexarei@beehaw.org to TechTakes@awful.systems • AI vibe-generates the same ‘random’ passwords over and over (English)
0 · 15 days ago
I gave my agents a skill that has them cat from /dev/urandom (which is then corralled into text characters) any time they need to generate passwords for something. Even then, only one of them has ever needed it, like twice.
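The /dev/urandom-to-text trick can be sketched as a one-liner like this (the exact skill wiring isn't shown in the comment; the character set and length are my assumptions):

```shell
# Pull random bytes, keep only printable password characters, truncate.
# Oversample (512 bytes) so enough characters survive the filter.
head -c 512 /dev/urandom | tr -dc 'A-Za-z0-9!@#%^&*_-' | head -c 24
```

Unlike a model "generating" a password, every character here comes from the kernel's CSPRNG.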
A perfect image to post to catbox
Grand Poolbah?
Those AI detectors don’t work, btw.
Interestingly, none of the official sources for the model weights clickwrap the download in a way that forces the user to read or agree to those terms before downloading. There is precedent for such terms being unenforceable when the user isn’t forced to agree to the terms.
There are lots of open source models you can download from Hugging Face, Ollama, and even GitHub without signing any contracts or terms of use: Gemma3, Llama, Ministral, GLM, OLMo, and a bajillion others. GLM-4.7-Flash is a very capable agentic model that runs at very usable speeds on commodity hardware, and none of what it generates is dictated by any agreements or policies agreed to anywhere.
Not mine, I run my own 😜
They’re still going, surprisingly
This take is weird: for one, none of the companies that do inference claim ownership of the generated content in their contracts; for two, anyone can download open source models and generate code without entering into any ToS.





Only if they’re contributing through GitHub and not through local AI coding apps like Opencode or Claude CLI