

It’s not called Meta data by accident 🤣
Shut that section down and ground the wires. Not really that dangerous. It’s only dangerous if you don’t follow protocol.
That’s not why JS is a big pile of crap. It’s because the language wasn’t thought through at the beginning (I don’t blame the inventors for that), and because of the web it spread like wildfire, so only backwards-compatible changes could be made. Even with all your points in mind, the language could be way nicer. My guess is that once wasm/wasi is integrated enough to run websites without JS (DOM access, etc.), JS will be like Fortran, Cobol and telefax: not going away any time soon, but practically obsolete.
“Amazingly” fast for biochemistry, but insanely slow compared to electrical signals, chips and computers. To be fair, though, the energy usage really is almost magic.
But by that definition, passing the Turing test might be the same as superhuman intelligence. There are things that humans can do but computers can’t, yet there is nothing a computer can do at all that it does more slowly than a human. That’s because our biological brains are insanely slow compared to computers. So once a computer is as accurate as a human at a task, it’s almost instantly superhuman at that task because of its speed. So if we have something that’s as smart as humans (which is practically implied because it’s indistinguishable), we would have superhuman intelligence: it’s as smart as humans but (numbers made up) can do 10 days of cognitive human work in just 10 minutes.
AI isn’t even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would score high in their training process. We don’t even know what their goals are (it’s likely not even expressible in language), but (anthropomorphized) they are probably more like “answer something that the humans who designed and oversaw the training process would approve of”.
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist, you probe them differently. I’m pretty sure that even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say these systems have passed the test at least since that point.
We don’t know how to train them to be “truthful” or make that part of their goal(s). Almost every AI we train is trained by example, so we often don’t even know what the goal is, because it’s implied in the training. In a way, AI “goals” are pretty fuzzy because of the complexity; a tiny bit like in real nervous systems, where you can’t just state in language what the “goals” of a person or animal are.
Approval voting and more parties could theoretically improve things quite a bit. But switching to approval voting is probably not simple: why should the biggest parties support something that hurts them?
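The tallying rule behind approval voting is simple enough to sketch in a few lines (candidate names and ballots below are made up for illustration): each voter approves any subset of candidates, and the candidate with the most approvals wins.

```python
from collections import Counter

# Hypothetical ballots: each voter approves any number of candidates.
ballots = [
    {"A", "B"},
    {"B"},
    {"A", "C"},
    {"B", "C"},
    {"B"},
]

# Tally: a candidate scores one point per ballot that approves them.
tally = Counter(c for ballot in ballots for c in ballot)

winner, votes = tally.most_common(1)[0]
print(winner, votes)  # B wins with 4 approvals
```

Because approving a second candidate never hurts your first choice, smaller parties can collect honest support without acting as spoilers, which is exactly why the big parties have little incentive to adopt it.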
Best thing is, they introduced settings to turn that auto-conversion off, and they don’t work 🤣 Can’t make that stuff up.
I have no idea how small your toilet is or how large your penis is, but what do you do with your penis when you have a bowel movement?
How about just sitting down on the toilet? Don’t get me wrong, it’s great you got it checked out, but sometimes there are pretty simple solutions.
Look at his arm. Unless all the videos of that guy are fake (even from a time when making convincing video fakes was really hard), that arm is not going down even if he wanted it to.
But there is a difference between making a claim about not drinking water and literally holding your hand up in a way you can’t fake.
I personally use a self-hosted instance of KitchenOwl (https://kitchenowl.org/). I really like its simplicity, use it quite a lot, and nobody has access to the data but me.
Hetzner ❤️
JPEG does not support lossless compression. There was an extension to the standard in 1993, but most de/encoders don’t implement it, and it never took off. With JPEG XL you get more bang for your buck: the same visual quality gets you a smaller file. And there would be no more need for thumbnails, thanks to improved progressive decoding.
I think I know what you mean, and it’s true if you look at world class. But that’s the reason why we sometimes use those properties to cluster people into groups to make it more fair. A lot of fighting sports are split into weight classes, we have the Paralympics, and we split many sports into male and female. Even in chess, where almost everybody agrees that there is very likely no performance difference between men and women, there are so many more men playing chess that statistics alone tells you that most outliers (best and worst players) are also men.
But with “fairness” you have to draw the line somewhere, or the groups that are able to play against each other would be too small. In the extreme it would mean that it’s only fair to compete against yourself, like trying to break your own personal records.
Even the Paralympics aren’t fair. It makes a difference when shooting with a bow whether you are missing a leg, an eye or an arm. Getting more specific would certainly make it fairer, but would result in more groups and fewer people allowed to compete against each other.
I’m personally not interested in watching any form of sports, so I couldn’t care less about who’s allowed to compete against whom. But I think it’s only understandable if people who went through puberty without testosterone complain when they have to compete against more and more people who went through a testosterone puberty.
I’m from Europe and I always assumed that America does that, because it’s the cheapest option by far.