• 0 Posts
  • 480 Comments
Joined 1 year ago
Cake day: July 4th, 2023


  • SomeGuy69@lemmy.world to 196@lemmy.blahaj.zone · Rule · 1 month ago (↑1 / ↓3)

    The underfunded American healthcare system struggles to adequately address psychological issues. Limited funding results in insufficient mental health resources, such as a shortage of qualified therapists, long wait times for treatment, and high costs for care. Many individuals lack insurance coverage for mental health services, leading to unmet needs and worsening conditions. Public stigma around mental illness compounds the problem, discouraging people from seeking help. Without systemic reform and investment, millions face barriers to essential psychological care, exacerbating societal issues like homelessness, addiction, and workplace productivity loss.

  • Google has destroyed their own ad revenue by adding more and more ads. Imagine they had stopped with a simple side banner; people wouldn't even have bothered to use an adblocker. That tiny banner would have been worth as much as the multi-second ads are now, and companies would have paid just as much, since there'd be no alternative.



  • SomeGuy69@lemmy.world to 196@lemmy.blahaj.zone · Rule · 1 month ago (↑2 / ↓12)

    I’m not your servant to entertain you. If my words haven’t reached you yet, nothing ever will. Stop wasting your time chasing the unachievable. The only ones who need a witty comeback are clowns entertaining the crowd. I have no need for that.

  • Yes, replies do degenerate the longer a conversation goes. Maybe this student hit the jackpot by triggering a fiction-writer reply hidden in the dataset. It's reproducible in a similar way to what the student did: ask many questions, and at a certain point you'll notice that even simple facts come out wrong. I've personally observed this with ChatGPT multiple times. It's easier to trigger with multiple similar but unrelated questions, as if the AI tries to push the wider context and chat history down the same LLM training "paths" but burns them out, blocks them that way, and then tries to find a different direction, similar to the path electricity from a lightning strike can take.
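    A toy sketch of one mechanism behind that effect (my own illustration, not how any specific model actually works): if the chat history outgrows a fixed context window, the oldest turns are silently dropped, so facts stated early in the conversation can no longer influence answers. All names here (`toy_model_answer`, `CONTEXT_WINDOW`) are hypothetical.

    ```python
    CONTEXT_WINDOW = 6  # max turns the toy model can "see"; real limits are measured in tokens

    def visible_history(history):
        """Return only the most recent turns that still fit in the window."""
        return history[-CONTEXT_WINDOW:]

    def toy_model_answer(history, question):
        """Stand-in for an LLM: it can only answer from turns left in the window."""
        for turn in visible_history(history):
            if question in turn:
                return turn.split(": ", 1)[1]
        return "I don't recall."

    history = ["capital of France: Paris"]  # fact stated at the start of the chat

    # Early on, the fact is still inside the window.
    print(toy_model_answer(history, "capital of France"))  # Paris

    # Pad the chat with many unrelated questions, as the comment describes.
    for i in range(20):
        history.append(f"unrelated question {i}: some answer")

    # The original fact has now been pushed out of the window.
    print(toy_model_answer(history, "capital of France"))  # I don't recall.
    ```

    This only models forgetting by truncation; the "burned-out paths" behavior the comment describes would be a property of the model's attention over a long in-window context, which a sketch this small can't capture.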