Cybercriminals are supercharging their attacks with the help of large language models such as ChatGPT, and security experts warn that attackers have only scratched the surface of artificial intelligence's threat-acceleration potential.

At last month's RSA Conference, cybersecurity expert Mikko Hyppönen sounded the alarm that AI tools, long used to help bolster corporate security defenses, are now capable of doing real harm. "We are now actually starting to see attacks using large language models," he said.

In an interview with Information Security Media Group, Hyppönen recounted an email he received from a malware writer boasting that he'd created a "completely new virus" using OpenAI's GPT, which can generate computer code from instructions written in English.