• 1 Post
  • 189 Comments
Joined 1 year ago
cake
Cake day: July 4th, 2023

help-circle
  • Lots of people want adjacent room lights or beyond to be on.

    I turn all the lights in my house on at night, despite the savings loss, because I just prefer being able to see into other rooms. (I also use 100w-equivalent bulbs, to really boost the brightness).

    Some people have fears, rational or irrational, about the dark. Children, people paranoid about someone breaking in, etc.

    Some people feel pets should be able to see where they’re going.





  • I do agree it’s not realistic, but it can be done.

    I have to assume the people that allow the AI to generate 10,000 answers expect that to be useful in some way, and am extrapolating on what basis they might have for that.

    Unit tests would be it. QA can have a big back and forth with programming, usually. Unlike that, QA can just throw away a failed solution in this case, with no need to iterate on that case.

    I mean, consider the quality of AI-generated answers. Most will fail with the most basic QA tools, reducing 10,000 to hundreds, maybe even just dozens of potential successes. While the QA phase becomes more extensive afterwards, its feasible.

    All we need is… Oh right, several dedicated nuclear reactors.

    The overall plan is ridiculous, overengineered, and solved by just hiring a developer or 2, but someone testing a bunch of submissions that are all wrong in different ways is in fact already in the skill set of people teaching computer science in college.


  • Khanzarate@lemmy.worldtoTechnology@lemmy.worldThe GPT Era Is Already Ending
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    10
    ·
    11 days ago

    Well actually there’s ways to automate quality assurance.

    If a programmer reasonably knew that one of these 10,000 files was the “correct” code, they could pull out quality assurance tests and find that code pretty dang easily, all things considered.

    Those tests would eliminate most of the 9,999 wrong ones, and then the QA person could look through the remaining ones by hand. Like a capcha for programming code.

    The power usage still makes this a ridiculous solution.


  • Most that would die in the street would have an underlying condition, like ague or bleeding or even old age, since most people that starve would try to do something about it.

    If you’re sick you might not be able to. If you find a job or charity successfully you’ve averted the death. If you tried to steal and fail you’ll get on the executed list, or if you got wounded but got away, you’ll be on the bleeding list, or if you succeed then you dont die on the street.

    I imagine those six would have the “died of unknown causes” phrase attached to them in modern times.








  • Companies collect a bunch of telemetry about everyone they can, that’s the basis of their ad revenue. The data is used to identify you, your devices, and your preferences, and is called a digital fingerprint.

    They also use this fingerprint to detect people doing things like making an account to avoid a ban.

    Your fingerprint, when you made a reddit account at work, will have virtually identical devices attached as anyone else using reddit at work. Lots of people have alt accounts for normal reasons, so Reddit decided yours and someone else’s belonged to the same fingerprint, probably since you made the account.

    But now they got banned. Maybe even got caught actually using a second account to circumvent it, and reddit is cracking down on the whole digital fingerprint because that’s “you”.



  • Countries willing to pass on a US patent to China stop getting the chips (or, in this case, chip-making jobs, realistically, but that still hurts)

    Also Taiwan doesn’t wanna help China and even if a US sanction was just an excuse to hurt China and get away with it they’d probably do it.

    Edit: in this case, this chip is “foreign-produced items […] that are the direct product of U.S. technology or software”, according to the article. I feel it was implied but clarity is always good. US technology, used with permission in a Taiwanese good, and that permission could be retracted.


  • Khanzarate@lemmy.worldtoScience Memes@mander.xyzClever, clever
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    1
    ·
    2 months ago

    I doubt it.

    For the same reasons, really. People who already intend to thoroughly go over the input and output to use AI as a tool to help them write a paper would always have had a chance to spot this. People who are in a rush or don’t care about the assignment, it’s easier to overlook.

    Also, given the plagiarism punishments out there that also apply to AI, knowing there’s traps at all is a deterrent. Plenty of people would rather get a 0 rather than get expelled in the worst case.

    If this went viral enough that it could be considered common knowledge, it would reduce the effectiveness of the trap a bit, sure, but most of these techniques are talked about intentionally, anyway. A teacher would much rather scare would-be cheaters into honesty than get their students expelled for some petty thing. Less paperwork, even if they truly didn’t care about the students.


  • Khanzarate@lemmy.worldtoScience Memes@mander.xyzClever, clever
    link
    fedilink
    English
    arrow-up
    164
    arrow-down
    5
    ·
    2 months ago

    Right, but the whitespace between instructions wasn’t whitespace at all but white text on white background instructions to poison the copy-paste.

    Also the people who are using chatGPT to write the whole paper are probably not double-checking the pasted prompt. Some will, sure, but this isnt supposed to find all of them its supposed to catch some with a basically-0% false positive rate.