Is ChatGPT the newest cybersecurity threat?
The hot topic at almost any workplace meeting or social gathering since its release in November 2022 has been ChatGPT, the revolutionary AI bot that can, it seems, write almost anything.
For those not yet familiar with this new technology, ChatGPT is fundamentally a chatbot that uses AI to answer questions or prompts by way of text. Its capabilities range from answering questions and drafting documents to generating computer code, producing something as simple as a meal plan with a shopping list or as complex as a working program.
As industries around the world deliberate on just how AI is going to influence the way they work in the years to come, cybersecurity experts are ringing alarm bells over the risks that this type of technology might soon pose.
From cybercriminals using ChatGPT to write malware code to a new generation of scam emails that sound totally legitimate, the risk horizon around artificial intelligence is escalating. Here are some things for business owners to look out for as the machines march nearer.
Bots can write now – and some of them are writing very bad things
Like school teachers and college professors around the world who have been astonished by ChatGPT’s ability to write essays that seem legitimate on the surface, cybersecurity managers are waking up to the risk that AI could also write dangerous scripts and code for viruses, malware, and ransomware.
- OpenAI, the company behind ChatGPT, has tried to reassure the public that the product will not create dangerous or harmful text or computer code, but hackers could trick it into doing just that by varying the input they use and the questions they ask it.
- One risk of ChatGPT’s writing ability is that the bot can produce fraudulent emails, including phishing scams, in near-perfect English.
- Errors in punctuation and grammar used to be tell-tale signs of fraudulent emails, especially those written by non-native English speakers, but with advanced language AI this detection method has gone out the window.
It’s important to note that while ChatGPT itself is not malicious, it has the potential to be used in a sinister manner to create malicious code.
The sheer volume of output that an AI text bot can produce is astounding, and hackers can now rely on automated text generators to create thousands of words in a short period of time, potentially flooding the internet with fake emails that are almost impossible to filter.
Can cybersecurity software stop AI in its tracks?
Cybersecurity providers will need to develop superior detection tools that can tell whether an email has been written by AI, and Google has already announced that its spam policies penalise auto-generated content designed to manipulate search rankings.
Until these applications are reliable and available to the public, the risk of falling victim to ransomware and email scams may increase exponentially – and that calls for a top-level cybersecurity setup in every business.
Stay ahead of the bots with secure cloud storage
At a time when potentially dangerous communications and computer code are flooding the internet, and as cybersecurity experts explore the possibilities of using this revolutionary technology to overcome malicious threats, what can you do to feel secure?
To begin with, familiarise yourself and your employees with ChatGPT and turn its language skills to your own advantage. For example, if you suspect an email is spam or phishing, paste it into the chatbot and ask, “Is this email safe, or is it a scam?” The chatbot can often flag unusual or manipulative language and will likely advise you not to respond if it detects signs of a scam, although its verdict should support, not replace, your usual security checks.
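For businesses comfortable with a little scripting, the same check can be automated. The sketch below is illustrative only: the OpenAI client setup and model name are assumptions rather than anything specified in this article, and the classifier accepts any callable so it can be tried out (or tested) without an API key.

```python
# Illustrative sketch: asking a language model whether an email looks like a scam.
# The OpenAI backend and model name below are assumptions for illustration only.

from typing import Callable

PROMPT_TEMPLATE = (
    "You are helping screen incoming email for phishing and scams.\n"
    "Reply with exactly one word, SCAM or SAFE, for the email below.\n\n"
    "{email}"
)

def classify_email(email_text: str, ask_model: Callable[[str], str]) -> str:
    """Return 'SCAM' or 'SAFE' based on the model's one-word verdict."""
    reply = ask_model(PROMPT_TEMPLATE.format(email=email_text))
    return "SCAM" if "SCAM" in reply.upper() else "SAFE"

def openai_ask(prompt: str) -> str:
    """One possible backend (assumed setup; needs `pip install openai` and an API key)."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    suspicious = "URGENT: your account is locked. Send your password to unlock."
    # Stub backend so the example runs without network access:
    print(classify_email(suspicious, ask_model=lambda p: "SCAM"))
```

Because the model call is injected as a parameter, the same wrapper works with any chatbot backend, and a human should still review anything the model flags.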
You can also stay one step ahead of the hackers and their AI sidekicks by browsing our range of affordable online cloud backup packages for businesses and households today. Ensure peace of mind for yourself and your business. No matter what happens to your data – if it’s backed up online, it’s never lost.