
October 16, 2023

Keeping ‘A Eye’ on AI

Jacob Birmingham

(Kostyantyn Skuridin/Shutterstock)

Rogue AI could kill everyone. That’s not a line from a movie script; it’s the title of a January 2023 New York Post article. The author, Ben Cost, writes that “researchers are deeming Rogue AI an ‘existential threat to humanity’ that needs to be regulated like nuclear weapons if we are to survive.”

The AI alarm has been sounding for years: in May 2019, the World Economic Forum published an article titled “These rules could save humanity from the threat of rogue AI.” Sensationalism certainly sells, but is this elevated concern about AI warranted? Let’s look at the facts.

One thing is certain: as AI becomes more powerful, the potential for misuse or unintended consequences increases. Case in point: AI systems could be hijacked or developed maliciously, with severe consequences for individuals, organizations, and nations. Rogue AI can take various forms, depending on its purpose and the methods used to create it.

The most dangerous aspect is AI’s deep integration into our economic, social, cultural, political, and technological spheres. It’s a double-edged sword: the same capabilities that deliver value to humans can be turned against us. For these reasons, Rogue AI presents several dangers:

  • Speed: AI systems can process information and make decisions much faster than humans, making it difficult to detect and defend against Rogue AI in real time.
  • Scalability: Rogue AI can replicate itself, automate attacks, and infiltrate multiple systems simultaneously, leading to widespread damage.

    (Sequential Pictures/Shutterstock)

  • Adaptability: Advanced AI systems can learn and adapt to new environments, making them difficult to predict and counter.
  • Deception: Rogue AI could mimic human behavior or legitimate AI systems, making it challenging to identify and neutralize threats.

AI doesn’t stop at text and code, either. AI can clone voices to sound convincingly human. A March 2023 Washington Post article, “They thought loved ones were calling for help. It was an AI scam,” details how bad actors leverage AI voice impersonation to extort money. The article underscores the problem with Federal Trade Commission data: “impostor scams were the second most popular racket in America, with over 36,000 reports of people being swindled by those pretending to be friends and family,” and “over 5,100 of those incidents happened over the phone, accounting for over $11 million in losses.”

Humanity Created AI; Humanity Can Tame AI

It is crucial to recognize Rogue AI’s potential threats, and developers must adhere to sound guidelines and principles when creating AI services. The following practices should be strongly considered:

  1. Establish clear ethical guidelines and responsible development practices to minimize unintended consequences;
  2. Collaborate with other developers, researchers, and policymakers to share knowledge and establish industry-wide AI safety and ethics standards;
  3. Regularly monitor and evaluate AI systems to identify and address potential risks;
  4. Implement robust security measures to protect AI systems from unauthorized access and tampering.

    (Lightspring/Shutterstock)

Ultimately, taming AI is not only a developer’s issue; it’s a common-sense practice that must be applied by everyone AI touches, especially enterprise organizations. Many of these enterprise practices are derived from cybersecurity, such as:

  • Investing in AI security and risk management, including training staff to recognize and respond to AI-related threats;
  • Collaborating with industry partners, regulators, and policymakers to stay informed about AI developments and best practices;
  • Conducting regular risk assessments to identify potential vulnerabilities and develop contingency plans;
  • Establishing clear guidelines and oversight for AI usage within the organization, ensuring that ethical and safety concerns are addressed.

Conclusion

With its speed, scalability, and capacity for deception, Rogue AI poses a clear and present danger. As these intelligent systems find their way deeper into business, government, education, and society, the misuse or malfunction of AI could have dire consequences. The responsibility to keep AI true to the purpose for which it was created lies within our grasp, but only if we establish and adhere to ethical guidelines and prioritize security and risk management.

Humanity is fickle; we have the power to harness AI’s great potential or turn it to our detriment. Time will judge whether we unleash productivity or punishment.

About the author: Jacob Birmingham is VP of Product Development at Camelot Secure, where he leads the company’s Hunt Incident Response Team based in Huntsville, Alabama. For the past five years, Jacob has focused on cybersecurity and ethical hacking, and he holds CISM and CISSP certifications. He earned a BS in Computer Engineering from the University of Central Florida and a master’s degree in Management Information Systems from the University of Alabama in Huntsville. His specialty is improving and securing cyber business-related processes to deliver the highest quality products to end-user customers.

Related Items:

Open Letter Urges Pause on AI Research

AI Threat ‘Like Nuclear Weapons,’ Hinton Says

AI Could Trigger WWIII, Alibaba CEO Warns


BigDATAwire