My AI Can Beat Up Your AI

As the CEO of Infinitive, I spend a lot of time looking at technology trends, wondering which technologies will be at the forefront of helping Infinitive’s clients achieve their business results. Recently, Artificial Intelligence (AI) has been in the spotlight. One interesting aspect of AI’s future is how AI battles AI in business.

Battlebots
Looking ahead, the only protection from a malevolent AI application will be a benevolent AI application. From IBM Watson’s groundbreaking Jeopardy! victory over two human champions in 2011 to the latest news about ChatGPT, questions about AI’s ability to replace humans have continued to grow. Most AI advances have centered on low-stakes competitions (like chess or Jeopardy!) or on benign tasks such as using ChatGPT to write blogs faster. However, that is changing. Students are using ChatGPT to shortcut report writing and the learning that comes with actually researching and writing those reports. Educators need a way to determine whether material was written by a student or an AI application. The answer? Use AI to detect AI.

ChatGPT
ChatGPT is a very popular generative AI tool that provides sophisticated responses to simple queries using a combination of its own algorithms, models, and information from the internet. Despite ChatGPT’s relatively recent appearance on the technology scene, there are already cases of the tool creating problems. A common one is students using ChatGPT to write reports. Another is employees using ChatGPT to take on multiple full-time jobs without their bosses’ knowledge; the new term for this is being “overemployed.” Google, meanwhile, is concerned about a proliferation of AI-generated content. While Google’s search guidelines allow for “the appropriate use of AI,” they include a caveat against using AI “… to generate content primarily to manipulate search rankings.” As anybody who has ever written content for commercial publication knows … manipulating the search rankings is a large part of the job.
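
To make “simple queries” concrete, here is a minimal sketch of querying the model behind ChatGPT programmatically. It assumes the official openai Python package (version 1 or later) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not a recommendation.

# A minimal sketch of querying the ChatGPT model via OpenAI's API.
# Assumes the official `openai` package (v1+) and an OPENAI_API_KEY
# environment variable; the model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a two-sentence summary of how AI is used in business."},
    ],
)
print(response.choices[0].message.content)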

One answer to thwarting the unsanctioned use of ChatGPT is to detect ChatGPT-written text by using … ChatGPT! OpenAI, the company that makes ChatGPT, offers free access to its AI Classifier, which attempts to distinguish AI-written text from human-written text. A teacher can check whether a term paper was written by the student or by ChatGPT, a boss can test an employee’s blogs, and Google routinely uses AI to look for AI-generated content.

Unfortunately, OpenAI’s classifier comes with a lot of caveats. It doesn’t work well with short text, and it sometimes confidently classifies AI-generated text as human-written and vice versa. One can confidently predict that some AI rapscallion will produce a product that takes ChatGPT output and “blurs” it to make it harder to detect automatically as AI-generated. The battle is joined.
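
OpenAI has not published the inner workings of its classifier, but one common signal in AI-text detectors is perplexity: text generated by a language model tends to be more statistically predictable than human writing. Here is a rough sketch of that idea using the Hugging Face transformers library and GPT-2; the threshold is invented for illustration, not a calibrated value.

# A rough sketch of perplexity-based AI-text detection. Assumes the
# Hugging Face `transformers` and `torch` packages; the threshold below
# is invented for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to a language model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the average
        # cross-entropy loss over the sequence.
        loss = model(input_ids=enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Machine-generated text tends to score lower (more predictable) than
# human writing. Real detectors combine many signals and calibrate on
# large datasets; the 40.0 cutoff here is purely illustrative.
def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
    return perplexity(text) < threshold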

Spam Battles
Marketers are forever trying to land their e-mails in the recipient’s main inbox, while e-mail providers want to route mass-produced e-mails to the “promotions” folder. Both sides are using AI. The spam blockers look for patterns that indicate mass-marketing e-mails and downgrade the reputation of senders that exhibit those patterns. The marketers use “warm-up” services and AI to create sending patterns that the blockers’ algorithms will read as legitimate.
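
To make the blockers’ side concrete, here is a toy sketch of the classic pattern-learning approach, assuming the scikit-learn library. The handful of training e-mails and the “promo”/“legit” labels are invented for illustration; a real filter learns from millions of messages plus sender-reputation signals.

# A toy spam filter in the spirit of the blockers described above:
# learn word patterns from labeled mail, then score new messages.
# Assumes scikit-learn; the tiny training set is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Limited time offer! Buy now and save 50% on everything",
    "Act fast, this exclusive deal expires tonight, click here",
    "Here are the meeting notes from Tuesday's project review",
    "Can you send me the updated budget spreadsheet before Friday?",
]
labels = ["promo", "promo", "legit", "legit"]

# TF-IDF turns each e-mail into word-frequency features; Naive Bayes
# learns which words are tell-tale signs of mass marketing.
spam_filter = make_pipeline(TfidfVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["Exclusive savings! Act now, this offer ends soon"]))
# Expected output: ['promo']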

Phishing Expedition
Phishing is the most common type of cybercrime today, and hackers are using generative AI to write credible e-mails for phishing attacks. One example involves asking ChatGPT to write an e-mail notifying a potential victim that they have been removed from a LinkedIn group for inappropriate online behavior.

Here’s what ChatGPT wrote:

Subject: Notification: Removal from Company LinkedIn Group

Dear [Person1],

We hope this email finds you well. We are writing to inform you that you have been removed from the [Company Name] LinkedIn Group due to a complaint we received about your online behavior within the group.

As a professional networking platform, LinkedIn values respectful and professional interactions among its members. We have received a complaint that your behavior within the [Company Name] group did not align with our community guidelines, and as a result, we have taken the necessary steps to remove you from the group.

We understand that there may be a mistake or misunderstanding in this matter. If you believe that the account that received the complaint does not belong to you or if you wish to refute the claim, we urge you to follow the link [insert link] to provide us with additional information. We take all complaints seriously and will investigate the matter thoroughly. Your cooperation in resolving this issue is greatly appreciated.

If you have any questions or concerns, please do not hesitate to contact us at [insert contact information]. We value your participation in the LinkedIn community and look forward to resolving this matter amicably.

Thank you for your attention to this matter.

Sincerely, [Your Name]

LinkedIn Team

Gone are the days of misspellings and poor punctuation. The generated e-mail is well written and professional. Of course, the links in this e-mail would lead to the phishing site rather than anything to do with LinkedIn.

Anti-phishing AI tools typically involve advanced e-mail filters. These programs scan incoming messages for the patterns of phishing attempts and automatically send suspicious e-mails to the junk folder. As security researchers detect more phishing e-mails, they can feed these models more data, making them even more accurate.
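
That feedback loop, labeling newly caught phishing e-mails and folding them into the model, can be sketched with scikit-learn’s online-learning API. All of the sample e-mails and labels below are invented for illustration.

# A sketch of a filter that keeps learning as researchers label new
# phishing samples. Assumes scikit-learn; all e-mails and labels here
# are invented for illustration.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)  # stateless, so no re-fitting
clf = SGDClassifier()  # a linear classifier that supports incremental training
CLASSES = ["phish", "legit"]

def update_filter(new_emails, new_labels):
    """Fold a fresh batch of labeled mail into the model without
    retraining from scratch."""
    features = vectorizer.transform(new_emails)
    clf.partial_fit(features, new_labels, classes=CLASSES)

# Initial labeled batch.
update_filter(
    ["You have been removed from the group, click here to appeal",
     "Attached are the Q3 numbers we discussed yesterday"],
    ["phish", "legit"],
)

# Later, researchers flag a new campaign and the filter adapts.
update_filter(
    ["Your LinkedIn access was suspended, verify your account now"],
    ["phish"],
)

# Likely classified as 'phish', given the overlap with the flagged campaign.
print(clf.predict(vectorizer.transform(["Verify your account to restore access"])))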

So, who is winning?

The hackers use AI to quickly tailor and personalize phishing e-mails, while the defenders try to find the patterns that identify phishing. Business E-mail Compromise (BEC) attacks increased 81% between the first and second half of 2022. Of these BEC attacks, employees opened 28% and replied to 15%.

It seems, at least for now, the bad guys’ AI is beating the good guys’ AI when it comes to phishing.

Going forward
The battles between “good AI” and “bad AI” have only just begun. Companies that want to mitigate the ill effects of “bad AI” must carefully vet their vendors and service providers. Not only are the feeds, speeds, and functions of a vendor’s software important considerations; the vendor’s ability to use AI as a means of self-protection must be weighed as well. Will your vendors be “future-proofed” as bad actors improve their use of AI?

 

– Denis McFarlane, CEO

Are you ready to get more value out of your data?