There should be a federal agency that monitors emerging technologies, similar to how the FCC regulates satellites, radio, and other forms of mass comms, to prevent this kind of misuse.
Top fucking KEK.
Gr8 joke, m8. I rate 8/8.
It will be corrupt from inception. You know this. Still, your point is valid. There absolutely needs to be governmental oversight to keep private corporations from going full-blown Brave New World and 1984 on steroids.
The problem, of course, is that corporate agents will infiltrate these federal agencies to influence policy decisions in favor of their corporate masters. It's happening right now, in real time, with the FDA in the US. Look into that, if you're interested.
This is not only a breach of privacy, but something that will only get progressively worse if nothing is done about it, and it would also be a nightmare for people with ADHD, autism, or neurotic thinking styles. I work remotely, so I'm not worried about this, but I can see it being implemented on a wide scale once it becomes readily available to the public. What makes it so dangerous is that those brainwaves (EEG) can be decoded by AI to infer your emotional state, concentration level, and pre-conscious responses, and this data could be sold to other companies for research or stolen by black-hat hackers if there are no security protocols. Those earpods, for instance, could mistakenly flag a programmer as off their central task while they're writing a work-related email to a colleague in good faith.
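To make the misflagging concrete, here's a toy sketch of how a naive EEG "focus monitor" could misjudge someone. Everything here is hypothetical: the band-power features, the cutoff, the function names — none of it is any real product's algorithm.

```python
# Toy sketch of a naive EEG "focus monitor". All feature values and the
# threshold are invented for illustration; no real vendor's algorithm.

def classify_focus(beta_power, alpha_power, task_context):
    """Flag 'off-task' when the beta/alpha power ratio drops below a
    fixed cutoff -- blind to what the person is actually doing."""
    ratio = beta_power / alpha_power
    if ratio < 1.2:  # arbitrary cutoff; note task_context is never consulted
        return "flagged: off-task"
    return "ok: on-task"

# A programmer deep in code shows relatively high beta (concentration):
print(classify_focus(beta_power=18.0, alpha_power=9.0, task_context="coding"))
# -> ok: on-task

# The same programmer calmly writing a work email shows more alpha --
# and gets misflagged even though the email is legitimate work:
print(classify_focus(beta_power=10.0, alpha_power=11.0, task_context="email"))
# -> flagged: off-task
```

The point of the sketch: the classifier never looks at the `task_context` argument at all, which is exactly how a good-faith work email gets flagged as slacking.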
Of course it's a breach of privacy, and of course it's going to get progressively worse. That's how these things start. They're presented as benign projects and initiatives to warm people up to the prototypes and proofs of concept; then, once the technology is implemented on a mass scale and fully integrated into the culture and workforce, the truth rears its ugly head. You and I can see where this will go. The average John and Mary who are paying bills and raising kids won't.
These globalist agents, presenting these technologies with their real agenda behind a thick curtain, don't open with, "hey y'all! Kk so, my evil bosses liek totes want to rule the wOOOOrld and liek make you all their slAAAAves...." They get you to nibble on something so that, once you start taking real bites, you realize too late that it was poisonous all along.
This can also serve as a model for other algorithms that claim to predict occurrences with 80 to 90 percent accuracy, like future crimes and criminality based on facial recognition.
We can already do this with specific human behaviors. It's called targeted advertising. We can create tailor-made programs based on these ML algorithms to anticipate other human behaviors, but that will require a whole lot of data that isn't currently being collected and stored for those specific purposes (behaviors), whatever they may be.
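For the flavor of it, here's a deliberately tiny sketch of "predicting" behavior from collected data, in the spirit of targeted advertising. Real systems use vastly more data and far heavier ML than this frequency count, and every event in the example is made up.

```python
from collections import Counter

# Deliberately tiny behavior predictor: guess the next interest as the
# most frequent past one. Real ad-targeting pipelines are far more
# complex; this only illustrates the data -> prediction loop.

def predict_next_interest(event_log):
    """Return the most common item in a user's (hypothetical) click log."""
    counts = Counter(event_log)
    interest, _ = counts.most_common(1)[0]
    return interest

clicks = ["running shoes", "protein powder", "running shoes",
          "headphones", "running shoes"]
print(predict_next_interest(clicks))  # -> running shoes
```

Even this crude counting only works because the click data was collected and stored in the first place, which is the commenter's point: predicting other behaviors would need data pipelines that don't yet exist for those behaviors.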
You have to question what the intention behind it is. An atomic bomb is inanimate and has no will of its own, but its inherent purpose was destructive, and it would have been preferable if nuclear weapons had never been conceptualized and brought into existence. Brain-computer interfaces may purport to help the paralyzed and provide real-time information about what's happening inside your brain, but like any other promising technology, they will be exploited by the wealthy (investors, businesses, the top 1%) for financial gain first and foremost unless enough people voice their opposition.
Yes, you're right. The intent behind the design and development of any tool should always be a primary concern. With some technologies the intent is overt and not left to anticipatory guesswork and conjecture, like the gun and the bomb. Other, more multi-faceted tools can surreptitiously hide insidious primary design functions. There's also the possibility that bad actors get destructively creative with technologies that had good or benign design intent.
It's our job to understand the technology and how people, societies, and governments use it in order to be able to anticipate hypothetical scenarios that, over time, start increasing in probability and could soon cease being hypothetical.