Monday, July 1, 2024

AI and dataset poisoning — are organizations prepared for the latest cyberthreats? [Q&A]

Although governments are issuing new guidelines to help businesses toughen up their cyber protection, cyberattacks remain a major risk, and they are only growing in sophistication as AI advances.

As AI is integrated into ever more systems, the threat that dataset poisoning presents is also an emerging concern. We spoke to Andy Swift, cyber security assurance technical director at Six Degrees, to discuss the latest threats and how businesses can respond.

BN: How can security teams leverage AI?

AS: Security teams are usually made up of two halves, offensive and defensive security, so it is probably no surprise that there are both defensive and offensive advantages to using AI.

Often the advantages for the defensive side are the ones that come to mind first when discussing cyber security. They play to the obvious strengths of AI around common structured practices: extracting and analysing log files, matching evidence against known attack patterns, disk and network forensics, and so on, all in the hope of finding that all-important smoking gun. Searching through mountains of data manually is an incredibly laborious task and can lead to problems like alert fatigue; this is where AI becomes incredibly helpful. The technology is perfectly poised to search through and categorise very large data sets at a rapid rate. As such, security teams can use AI to spot more complex trends and identify signs of certain attacks much faster, leaving them more time to address incidents earlier in the kill chain.
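
As a concrete illustration of that defensive use case, here is a minimal sketch of unsupervised anomaly detection over log data that has already been reduced to numeric features. The feature choices, the numbers, and the use of scikit-learn's IsolationForest are all illustrative assumptions, not anything specific to Six Degrees' tooling.

```python
# A rough sketch: flag anomalous log entries after they have been parsed
# into numeric features per source IP. All numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed_logins, distinct_usernames_tried, megabytes_out]
baseline = np.array([
    [1, 1, 0.2],
    [0, 1, 0.1],
    [2, 1, 0.3],
    [1, 2, 0.2],
    [0, 1, 0.4],
])

suspect = np.array([
    [120, 40, 0.1],   # burst of failures across many usernames
    [1, 1, 950.0],    # unusually large data egress
])

model = IsolationForest(random_state=0).fit(baseline)
print(model.predict(suspect))  # -1 flags a row as anomalous, 1 as normal
```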

The second way AI is helpful for security teams is on the offensive side of the fence. Offensive cyber security can involve using the same tactics and techniques as real attackers to find vulnerabilities, although offensive teams often work from somewhat privileged positions, which allows far more aggressive and data-heavy techniques to be employed when stealth is not a primary concern.

Typically, offensive teams will throw huge amounts of data at an organisation and look for signs of vulnerability: for example, vulnerability scanning numerous (sometimes thousands of) networked endpoints, or fuzzing specific applications and functions to identify potential flaws. Both involve throwing a lot of data at something and waiting for large sets of results to come back for review. AI, which as previously mentioned is adept at handling large quantities of structured data, not only speeds up drawing conclusions from these data sets but also helps analyse the responses, making the entire process much faster and more efficient.
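
To make the fuzzing example concrete, below is a minimal sketch of bulk response triage: send a batch of payloads, then flag responses whose size deviates sharply from the norm for their status code. The target URL and payload list are hypothetical, and the simple outlier rule is a deliberate stand-in for the smarter analysis AI can bring; only test systems you are authorised to probe.

```python
# A rough sketch of triaging fuzzing output: group responses by status
# code and flag size outliers for human review.
from collections import defaultdict
from statistics import mean, stdev

import requests

TARGET = "http://testapp.local/search"   # hypothetical in-scope test target
payloads = ["'", '"', "<script>alert(1)</script>", "A" * 4096, "../../etc/passwd"]

by_status = defaultdict(list)
for payload in payloads:
    resp = requests.get(TARGET, params={"q": payload}, timeout=5)
    by_status[resp.status_code].append((payload, len(resp.content)))

for status, hits in by_status.items():
    sizes = [size for _, size in hits]
    spread = stdev(sizes) if len(sizes) > 1 else 0.0
    for payload, size in hits:
        # A response much larger or smaller than its peers is worth a look.
        if spread and abs(size - mean(sizes)) > 2 * spread:
            print(f"review: status={status} len={size} payload={payload!r}")
```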

BN: What new challenges do you foresee AI bringing for security teams?

AS: AI is improving capabilities across the board for both attackers and IT teams. For attackers, one of the main ways it helps is by making it easier to build exploit code or malware. But AI isn't just letting attackers launch attacks at an incredible pace; it's also lowering the barrier to entry, allowing more individuals to take their first steps into the world of cybercrime.

From a generative AI standpoint, things like phishing templates are also becoming much more convincing. This isn't just a matter of generating legitimate-looking content, but also of analysing people. Given enough publicly available data on a target, generative AI allows attackers to recreate emails in the style of the person they're looking to impersonate, mimicking traits such as speech patterns, writing styles, nicknames, and so on.

The technology is also making it far easier to create malware, build exploits, and discover vulnerabilities. In fact, even ChatGPT can help with certain nefarious endeavors if you ask in the right way. Ask it outright to build a macro that opens a payload via a native Windows DLL and it'll say no. But if you break the task down and ask ChatGPT things in isolation, wrapped in a vaguely innocent soup of context, you can eventually get an answer… if you're patient.

In short, security teams aren't seeing anything they've not seen before (yet), but the volume, speed, and complexity are increasing.

BN: What is dataset poisoning — is it something that should be a top priority for organizations?

AS: Every AI model has to be trained on a data set, whether that be Wikipedia, PDF books, or whatever. The AI will then use these sources to answer any questions posed to it.

When it comes to poisoning this data set, there are two forms it can take. The first affects the initial data being fed into the model. If that's poisoned, you've got a big problem, because the 'truth' the AI then knows is essentially incorrect. The second is to train the model with incorrect answers. AIs are trained on common questions, common answers, and how to handle them. If you can poison the AI at this stage and train it to answer questions in a way it shouldn't, that can also lead to issues further down the road.

Both of these forms of poisoning happen in the very early stages of setting up a new AI model. The key to combating them is to ensure that the model is fed with large amounts of data. That way, should any issues arise, the model should have enough information to weed out the incorrect data.
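
A minimal sketch of the second form, label poisoning, on purely synthetic data: flip a growing fraction of the training labels and watch test accuracy fall away. It also hints at the mitigation just described, in that a model trained on plenty of data largely shrugs off low poisoning rates. The dataset, model, and rates are all illustrative assumptions.

```python
# A rough sketch of label poisoning: corrupt a fraction of training labels
# and measure the effect on held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

for rate in (0.0, 0.1, 0.3, 0.45):
    y_poisoned = y_tr.copy()
    flip = rng.choice(len(y_poisoned), int(rate * len(y_poisoned)), replace=False)
    y_poisoned[flip] = 1 - y_poisoned[flip]   # the poisoning step

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    print(f"poisoned {rate:.0%} of labels -> test accuracy {clf.score(X_te, y_te):.3f}")
```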

Although it's certainly a concerning idea, many organizations are not in a position to start building their own models, whether through lack of resources or lack of understanding of the technology. Even when (or if) they do, for most it will simply be safer and cheaper, for now, to use one or more well-established vendors. In the shorter term the focus should probably be on training people to use existing models safely, and on understanding which technology in their current stack is starting to integrate AI and what that means for their data (and its whereabouts!).

BN: What should be the key investments for security teams looking to do more with less?

AS: This is a very complex question, as there is no silver bullet in cybersecurity. A lot of organizations I have spoken to during incident response over the last year have been all too focused on acquiring the 'shiniest toy' to fix all their cybersecurity woes. Many cybersecurity vendors have incredible marketing teams, and it's easy to get wrapped up in it all and spend huge amounts of money on something your company doesn't necessarily need, or that your team isn't trained to use or get the most out of.

So, in my opinion, the main investment should be people. Good people can make very secure networks out of very little. At the end of the day, at the heart of a lot of preventable breaches are simple misconfigurations; if you know how to configure and harden services correctly before applying the safety blankets such as firewalls, WAFs, and other upstream or local protections, it not only gives massive peace of mind but also makes it easier to identify anomalies over the longer term.
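
To put that misconfiguration point in concrete terms, below is a minimal sketch of the kind of basic exposure check good people run before reaching for new tooling: confirming that services which should never face the internet are not reachable. The host list and port policy are hypothetical examples; only scan infrastructure you are authorised to test.

```python
# A rough sketch of a basic exposure check against a port policy.
import socket

SHOULD_BE_CLOSED = {23: "telnet", 445: "smb", 3389: "rdp"}
hosts = ["203.0.113.10", "203.0.113.11"]   # placeholder documentation addresses

for host in hosts:
    for port, service in SHOULD_BE_CLOSED.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(2)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                print(f"misconfiguration: {service} ({port}) reachable on {host}")
```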

Something I see time and again in my job is clients with poor configuration at the borders, or security misconfigurations, even though they have a solution in place that could help. The problem is that the tool isn't being used to its full potential, or hasn't been configured correctly.

There is a place and time to buy the latest and greatest, but without first reviewing your infrastructure and making sure everything is configured as well as it can be, there's no point jumping straight to the newest toy, because the chances are the internal people who are meant to be managing it might not be qualified for it or even understand it.

BN: Any other advice you would like to give IT teams to help them combat upcoming threats?

AS: Firstly, in terms of up-and-coming threats, there's a lot of security research being done at the moment into gateway devices, VPNs, and so on, largely because they sit on the borders of the network; an exploit here can give you access to everything underneath via a nice, clean access route.

Secondly, I would warn IT teams to be wary of vendor and version lock-in. From an incident response point of view, it can be very damaging. Recently I've seen numerous issues stem from out-of-date products that can't be upgraded due to compatibility issues with modern operating systems, leaving the organization stuck engineering its way around unsupported systems from 15 to 20 years ago just to support one business-critical product, which can be detrimental to the security of the business. As a shocking heads up: if you're running Windows Server 2008 anywhere, that's now 16 years old, and I have seen enough of that OS over the last few weeks alone to make me want to cry.

I recommend having something like a CMDB to track assets, among other things, but if that is too scary, even something as simple as a spreadsheet of end-of-life dates for the hosts in your project/department/whatever, reviewed quarterly, can help make sure there's always a plan to move and migrate systems well ahead of time. Be proactive, know what's coming, and plan an exit strategy well in advance.
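
As a minimal sketch of that quarterly review, assuming nothing more than a CSV named eol.csv with host, product, and eol_date columns (an invented layout for illustration), a few lines of script can flag what is already unsupported and what needs a migration plan:

```python
# A rough sketch of the quarterly end-of-life review: read a CSV of assets
# and flag anything unsupported now or going end-of-life within the horizon.
import csv
from datetime import date, timedelta

HORIZON = timedelta(days=365)   # start planning a year out

with open("eol.csv", newline="") as f:
    for row in csv.DictReader(f):
        eol = date.fromisoformat(row["eol_date"])
        if eol <= date.today():
            print(f"UNSUPPORTED: {row['host']} ({row['product']}) since {eol}")
        elif eol - date.today() <= HORIZON:
            print(f"plan migration: {row['host']} ({row['product']}) by {eol}")
```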

Image credit: NewAfrica/depositphotos.com
