TBH, even though I don’t like this specific idea, and don’t use Firefox directly, I do like the use of local inference versus sending your data to a third party to do AI.
They just needed to make it OPT IN, not OPT OUT.
It is though.
Then why the fuck is this newsworthy? Ugh. Why is there such a huge hateboner for Firefox lately?
I really don’t get it either.
It’s not like it’s a paid product either.
Because they keep betraying their supposed values for short-term gains.
What is the gain? What is a single gain you think they have milked from their users?
Money for their executives
The pathological need to find something to use LLMs for is so bizarre.
It’s like the opposite of classic ML: relatively tiny, special-purpose models trained for something critical, out of desperation, because it just can’t be done well conventionally.
But this:
AI-enhanced tab groups. Powered by a local AI model, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
Take out the word AI.
Enhanced tab groups. Powered by a local algorithm, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
If this feature took, say, a gigabyte of RAM and a bunch of CPU, it would be laughed out of the room. But somehow it ships because it has the word AI in it? That makes no sense.
I am a massive local LLM advocate. I like “generative” ML, within reason and ethics. But this is just stupid.
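For what it’s worth, grouping tabs locally doesn’t need anything exotic. I have no idea what Mozilla’s actual model or pipeline looks like, but the generic recipe is “embed the titles, cluster the vectors”. A rough sketch, with sentence-transformers and scikit-learn as stand-ins and made-up tab titles:

```python
# Hypothetical sketch of local tab grouping: embed tab titles with a small
# local model, then cluster similar ones. Not Firefox's actual code.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

tab_titles = [
    "Rust ownership explained",
    "Borrow checker errors - Stack Overflow",
    "Cheap flights to Lisbon",
    "Lisbon 3-day itinerary",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs fine on CPU
embeddings = model.encode(tab_titles, normalize_embeddings=True)

# The distance threshold is a guess; it tunes how aggressively tabs get grouped.
clusters = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.6, metric="cosine", linkage="average"
).fit_predict(embeddings)

for title, cluster in zip(tab_titles, clusters):
    print(cluster, title)
```

Call that AI or call it an algorithm; either way it’s a small embedding model plus clustering.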
I agree with you on almost everything.
It’s like the opposite of classic ML: relatively tiny, special-purpose models trained for something critical, out of desperation, because it just can’t be done well conventionally.
Here I disagree. ML uses high-dimensional statistics, and there are many problems which are, by their nature, problems of high-dimensional statistics.
If you have, for example, an engineering problem, it can make sense to use an ML approach to find patterns in the relationship between input conditions and output results. Based on these patterns you have an idea of where you need to focus in the physical theory for understanding and optimizing it.
Another example of “generative AI” I have seen is creating models of hearts. By feeding it the MRI scans of hundreds of real hearts, millions of models of probable heart shapes can be created, and the interaction with medical equipment can be studied on them. This isn’t a “desperate” approach. It is a smart approach.
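I don’t know the specific heart project, so treat this as a loose illustration: the textbook way to go from hundreds of example shapes to millions of plausible variants is a statistical shape model, i.e. PCA over aligned shape coordinates and then sampling new coefficient vectors. Everything below (landmark counts, data) is synthetic:

```python
# Toy statistical shape model: learn shape variation from example shapes,
# then sample new plausible ones. Real cardiac models are far more involved;
# this only shows the "hundreds in, endless plausible variants out" idea.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Pretend we have 300 aligned "hearts", each described by 500 3D landmarks,
# flattened to a 1500-dimensional vector per example (synthetic data here).
n_examples, n_coords = 300, 1500
mean_shape = rng.normal(size=n_coords)
training_shapes = mean_shape + 0.1 * rng.normal(size=(n_examples, n_coords))

pca = PCA(n_components=20).fit(training_shapes)

# Sample new shapes by drawing PCA coefficients from the learned variances.
coeffs = rng.normal(size=(5, 20)) * np.sqrt(pca.explained_variance_)
new_shapes = pca.inverse_transform(coeffs)
print(new_shapes.shape)  # (5, 1500): five new plausible shape vectors
```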
Based on these patterns you have an idea of where you need to focus in the physical theory for understanding and optimizing it.
How do you tell what the patterns are, or how to interpret them?
Recognizing the pattern is done by the machine learning itself. That is the core concept of machine learning.
For the interpretation you need your domain knowledge. Machine learning together with knowledge of the domain being analyzed can be a very powerful combination.
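To make that concrete with an entirely made-up engineering example: fit a model on input conditions versus output, then look at which inputs it actually leans on. The model surfaces the pattern; whether “temperature matters, pressure barely does” is physically meaningful is still for the domain expert to judge:

```python
# Made-up engineering example: which input conditions drive the output?
# The model finds the pattern; interpreting it needs domain knowledge.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
temperature = rng.uniform(200, 400, n)
pressure = rng.uniform(1, 10, n)
flow_rate = rng.uniform(0.1, 2.0, n)

# Hidden ground truth: output depends strongly on temperature, weakly on flow,
# and not at all on pressure.
output = 0.05 * temperature + 0.5 * flow_rate + rng.normal(0, 0.5, n)

X = np.column_stack([temperature, pressure, flow_rate])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, output)

for name, importance in zip(["temperature", "pressure", "flow_rate"],
                            model.feature_importances_):
    print(f"{name:12s} {importance:.3f}")
```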
Another example in research I have heard about recently is the detection of brain tumors before they occur. MRIs of people who later developed brain tumors are analyzed to see whether patterns can be found that are absent in people who didn’t develop tumors. This knowledge of a correlation between certain patterns and later tumor development could help specialists further their understanding of how tumors develop, as they can analyze these specific patterns.
What we see with ChatGPT and other LLMs is kind of the opposite: the algorithm is detached from any specific knowledge. As a result it can make predictions about anything, and they are worth nothing.
Where is this AI bloat, exactly? I use Firefox every day and see no difference.
There is none; this is all “AI = bad” knee-jerk reaction. From what I can tell, Firefox has 3 ML-based systems implemented so far:
- Site / text translation - fully local, small model, requires manual action from user
- Tab grouping suggestions - fully local, small model, requires manual action from user
- Image alt text generation (when adding images to a PDF) - fully local, small model, looks like it’s enabled by default but can be turned off directly in the modal that appears when adding alt text
All of these models are small enough to be quickly run locally on mobile devices with minimal wait time. The CPU spikes appear to be a bug in the inference module implementation - not an intended behavior.
Firefox also provides UI for connecting to cloud-based chatbots on a sidebar, but they need to be manually enabled to be used. The sidebar is also customizable so anyone who doesn’t want this button there can just remove it. There’s also a setting in about:config that removes it harder.
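For anyone wondering what “fully local, small model” means in practice, it’s roughly the shape of the snippet below. This is Hugging Face transformers in Python as a stand-in, not Firefox’s actual in-browser runtime, and the model name is just one example of a small translation model; once it’s downloaded, nothing leaves the machine:

```python
# Stand-in for fully local translation: after the one-time model download,
# inference runs entirely on-device. Not Firefox's actual implementation.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
result = translator("Ihr Paket liegt zur Abholung in der Filiale bereit.")
print(result[0]["translation_text"])
```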
I actually really like the way Mozilla is introducing these features. I recently had to visit another country’s post office site and having the ability to just instantly translate it directly on my device is great.
You mean to tell me the general public has knee-jerk reactions and doesn’t know how computers work?
What a shock that Lemmy bashes Mozilla for doing their job.







