AI Assassins: Pentagon to Unleash Deadly Autonomous Weapons

The United States government, along with China and Israel, is developing new artificial intelligence (AI) weapons that can make decisions on whether to kill human targets. The deployment of these AI weapons has sparked concerns from critics who fear that machines making life-and-death decisions without human oversight could be dangerous. It’s like something out of the movie “Terminator”! Several countries, including Austria, are calling for the United Nations to pass a legally binding resolution to restrict or outlaw the use of AI killer drones. However, countries like the US, Russia, Australia, and Israel are opposed to this and prefer a non-binding resolution. Austria’s chief negotiator on the issue, Alexander Kmentt, believes that the role of human beings in the use of force is a significant security, legal, and ethical issue.

The US government is working on deploying swarms of AI-enabled drones, which would help counter China’s military superiority. Deputy Secretary of Defense Kathleen Hicks stated that these AI-controlled drone swarms would be harder to plan for, hit, and defeat than China’s forces. The Pentagon is reportedly developing a network of AI-enhanced autonomous drones that could be deployed near China in the event of conflict. These drones would be used to weaken China’s missile systems and could potentially be a game-changer in military strategy. However, the US Air Force secretary emphasized that AI drones should retain human supervision over lethal decisions, since individual decisions can mean the difference between winning and losing.

The use of AI-controlled drones in conflicts is not new; Ukraine used them against Russia in October, though it is unclear whether those drones caused human casualties. The Campaign to Stop Killer Robots warns about the dangers of allowing machines to make life-and-death decisions, stating that machines view humans as just another piece of code to process and sort. The fear is that autonomous weapons could be mass-produced and fall into the wrong hands, hence the call for a treaty banning them. Stuart Russell, an AI scientist, believes that creating and deploying autonomous weapons would have disastrous consequences for human security.

AI technology is not limited to military applications. It is already being utilized in law enforcement, such as the recent use of a robot dog by the Los Angeles Police Department to end an armed standoff, and AI-equipped drones can provide law enforcement with actionable intelligence. Still, there are concerns about the ethical implications of machines deciding to kill humans. To prevent this technological dehumanization and protect human lives, the campaign urges the prohibition of autonomous weapons systems and the development of professional codes of ethics that disallow the creation of machines that can decide to kill humans. It’s a slippery slope, and we must proceed with caution when it comes to AI technology.

Written by Staff Reports
