I make computers

  • 7 Posts
  • 116 Comments
Joined 1 year ago
Cake day: October 30th, 2023

  • The image-to-text model is impressive. I could see it being useful for smart search of your library, allowing users to find photos with a high-level description.

    I’m not sure why it’s being reported on as though the technology is a privacy or security threat, though. If you’ve given a storage provider access to your photos anyway, using a vision model isn’t going to give them anything extra.

    That said, I do love self-hosted photo solutions like Immich and Ente. Hope they continue to grow.

  • In general I agree with the sentiment of the article, but I think the broader issue is media literacy. When the Internet came about, people had similar reservations about the quality of information, and most of us were taught in school how to find trustworthy information online.

    LLMs are a tool, and people need to learn how to use them correctly and responsibly. I’ve been using Perplexity.AI as a search engine for a while now, and I think they’re taking the right approach. It employs LLMs at different stages to parse your query, perform web searches on your behalf, and summarize the findings. It also provides in-text citations, which give a media-literate person the opportunity to verify anything important.