This year, artificial intelligence has become a far more common topic among people who do not necessarily work in technology. The arrival of intelligent chatbots such as ChatGPT, Bard and others has put the issue on the table. Since the launch of these large language models (LLMs), hundreds of popular AI applications have emerged and millions of people already use them daily.
But what about AI when it comes to cybersecurity? First, it's important to know that AI is already being used to deliver new cybersecurity solutions to existing threats. According to a CBInsights research paper titled “Old School vs. New School: How is Artificial Intelligence Transforming Cybersecurity”, AI in cybersecurity can be used to monitor activity in systems and networks in real time, identify patterns and anomalies from internal and external data flows, accelerate detection, free up resources, enable faster remediation and generally help improve ongoing cyber resilience.
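To give a sense of what this kind of anomaly detection can look like in practice, below is a minimal sketch in Python using scikit-learn's IsolationForest, an unsupervised model that flags observations that deviate from normal behavior. The features (megabytes sent, connections per minute, failed logins) and the sample values are purely hypothetical illustrations, not a description of how any particular CyberSOC pipeline works.

# Minimal sketch of AI-assisted anomaly detection on network telemetry.
# The features and sample values below are hypothetical examples only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent_MB, connections_per_min, failed_logins] for one host (hypothetical data)
normal_activity = np.array([
    [12.0, 30, 0],
    [ 9.5, 25, 1],
    [11.2, 28, 0],
    [10.8, 32, 0],
    [13.1, 27, 1],
])

# Train an unsupervised model on what "normal" activity looks like
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# Score new observations: 1 means it looks normal, -1 means it is flagged as an anomaly
new_activity = np.array([
    [11.0, 29, 0],     # typical behavior
    [480.0, 300, 25],  # sudden spike in traffic and failed logins
])
for row, label in zip(new_activity, model.predict(new_activity)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{row} -> {status}")

In a real deployment, a model like this would be trained on an organization's own telemetry and combined with other detection layers; the sketch only illustrates the general idea of learning a baseline and flagging deviations.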
At CyberLat, we have dedicated ourselves to integrating AI modules into our CyberSOC, and today we have a platform that detects new attack vectors effectively.
However, we also understand that AI in cybersecurity is not a generic or definitive solution. Malware-free attacks, which require no software downloads and disguise malicious activity within legitimate cloud services, are on the rise, and AI is not yet capable of stopping those types of network breaches.
A clear trend is that these exposed gaps will have an ever greater impact. The recent case of Microsoft researchers who exposed access to terabytes of data through a misconfigured storage access link shared on GitHub confirms that, as organizations handle ever larger volumes of Big Data, these breaches will grow in scale every day.
On the other hand, it is necessary to ensure that AI is used for activities with a positive impact, and not the opposite. Although it is a powerful tool, it is already being used by cybercriminals.
In 2018, a report titled "The Malicious Use of Artificial Intelligence" was published, highlighting three areas of the threat landscape that we have already seen change:
"Expansion of existing threats. AI can enable bigger, faster, and broader attacks using AI-enabled techniques.
New attack vectors. AI systems could be used maliciously to complete tasks that are impractical for humans. Additionally, malicious actors could exploit vulnerabilities in AI-assisted cybersecurity platforms.
Change in the typical mode of operation of threats. AI-assisted attacks would be "especially effective, well-targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."
It has become common to hear of cyberattacks facilitated by deepfakes or AI-enabled bots, among others. There have also been documented cases in which chatbots are used to generate large volumes of phishing emails in different languages.
Cybersecurity and artificial intelligence will have to go hand in hand, and it is essential that neither is left behind. The great challenge is how to use, and adapt for good, this tool that we as humanity continue to develop. At CyberLat we can help you with cybersecurity as a service, assisted by AI. Write to us at contact@cyber.lat and we will create an action plan.