Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions, and data from wearable devices, along with text and patterns of computer use, to detect and predict how someone is feeling. It is already being deployed in the workplace, in hiring, and elsewhere. Loss of privacy is just the beginning: workers worry about biased AI and about having to perform the ‘right’ expressions and body language for the algorithms.
Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.
A quick rundown of what it entails and why it might matter in the US:
What is it?
Key Takeaways:
Why Does This Matter in the US?
Emotion-tracking AI is covered:
Sources:
Definitely a good start. Surveillance (or “tracking”) is one of those areas where “AI” is actually dangerous, unlike some of the more overblown topics in the media.
Did you use an LLM to write this? Kinda ironic, don’t you think?
I spent the better part of 45 minutes writing and revising my comment. So thank you sincerely for the praise, since English is not my first language.
If you wrote this yourself, that’s even more ironic, because you used the same format that ChatGPT likes to spit out. Humans influence ChatGPT -> ChatGPT influences humans. Everything’s come full circle.
I ask, though, because on your profile you’ve used ChatGPT to write comments before.