Blog: Cybersecurity

To mark the 20th Cybersecurity Awareness Month this October, America’s Cybersecurity and Infrastructure Security Agency (CISA) announced a new program – which they’ve coined “Secure Our World” – focused on four “easy” ways to stay safe online: 

  • Use strong passwords 
  • Turn on multifactor authentication (MFA) 
  • Recognise and report phishing 
  • Update software 
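Of the four, turning on MFA is perhaps the most mechanical. Most authenticator apps implement time-based one-time passwords (TOTP, as specified in RFC 6238), which can be sketched in a few lines of Python using only the standard library – the shared secret below is a made-up example:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, now=None, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(now if now is not None else time.time()) // interval  # 30-second window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a hypothetical shared secret; the code changes every 30 seconds.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server holds the same secret and computes the same six-digit code for the current time window, which is why an intercepted password alone is no longer enough for an attacker.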

In principle at least, these do all sound easy, but when dealing with human behaviour – as at least two of these areas do – it’s rarely quite so simple. A cynic might say that this helps to feed the narrative that security breaches are more often than not down to human error (which is, of course, a factor). 

If the tech and the people interacting with/operating the tech are not in perfect harmony, then something can and will go wrong. 

So, with that overarching theme in mind, and some of the main security conferences of the year now firmly in the rear view, we felt it was time we took stock of what 2023 held for cybersecurity, as well as what might be in store during 2024. 

Breaches, breaches, breaches… 

In amongst all of the noise surrounding generative AI (gen AI) – which we’ll get into later – it did feel as though some significant breaches disappeared from mainstream media as quickly as they arrived. 

The UK public sector in particular seemed to take something of a pummelling during 2023, with the Police Service of Northern Ireland (PSNI), Greater Manchester Police (GMP), and the UK’s Electoral Commission all suffering a breach of some description. 

And while that supports our opening gambit of people and technology being crucial, rather than one or the other, it perhaps points to another key factor…and it’s an old favourite – budgets. Without going too far down the rabbit hole of public sector funding, it does highlight the importance of spending in the right areas, and cybersecurity should certainly be considered among those. 

A common view is that throwing money at the problem after the event brings only downside; a sentiment shared by a Vanson Bourne CommunITy member in a recent in-depth interview (IDI).

That’s not to say that money is the sole answer, but it can help to level the playing field. Nation state actors, cyber criminal gangs, and hacktivists, among others, will be spending much of their “hard earned” cash on trying to add to their hitlist.

It’s also worth making the point that private sector firms have been far from immune to breaches this year – despite their budget ceiling typically being higher. Take Boeing, for example – a huge global brand falling foul of the LockBit 3.0 ransomware gang due to a vulnerability in its software supply chain.

LockBit – who operate on a Ransomware-as-a-Service (RaaS) model – have been prolific in recent years. And this breach of Boeing, along with others such as that on the US arm of the Industrial and Commercial Bank of China (ICBC), feels like their way of reminding the world that while we’re all looking at gen AI, they’ll be going about their business of taking names and cashing cheques. 

Say what you like about threat actors, but there is a certain brilliance in the way they execute their missions and continuously evolve their tactics, techniques and procedures (TTPs). Take this approach for example: 

  1. BlackCat ransomware gang exfiltrate data from MeridianLink 
  2. MeridianLink decides not to fully engage in negotiations with BlackCat 
  3. BlackCat gets annoyed and reports MeridianLink to the US Securities and Exchange Commission (SEC) 

“Good guy ransomware gang” – this, of course, isn’t designed to glorify these hacker groups in any way, but it does highlight what organisations and the authorities are dealing with. Highly aggressive and innovative approaches, driven, in general, by greed. A dangerous combination. 

So, what does this mean for organisations, how can they combat these threats, and, ultimately, how do they go about increasing the efficacy of their security stack? 

Expansion or consolidation? 

The attack surface that organisations are trying to monitor and defend is growing – no great epiphanies there. 

But with cloud sprawl being a genuine concern, incidents resulting from zero-day vulnerabilities seemingly increasing in prevalence, and the potential rise of shadow AI, among many other factors, it’s apparent that IT security teams must find a way to bolster their cyber defences through the utilisation of a technology stack that suits the specific requirements of their organisation. 

And this leads us nicely into one of the main topics of discussion from security events such as RSA and BlackHat USA this year: Should companies be pursuing a “best of breed” / point solution strategy, or a “consolidation” / platform-based approach? 

Over the years, it seems as though organisations have gravitated towards the former – searching out and implementing the best solutions for specific security needs, regardless of vendor. While this sounds sensible, the approach does have its drawbacks – more tools equates to a more complex security stack, and more potential points of failure that hackers could exploit. 

Not only that, but it creates an integration headache for even the most seasoned of IT security professionals.

And for that reason, it would appear that attitudes are showing signs of shifting as we head into 2024, with security leaders appreciating that a comprehensive cybersecurity platform – meaning fewer tools and vendors in their stack – is likely to give them the best chance of protecting their organisation, from both external threats and insider risk. 

We posed the question of point solutions vs. consolidation to our community of IT and IT security decision makers. A third (33%) believe that in 2024 their organisation will utilise/invest in a consolidation approach, so that as many of their tools as possible come from the same vendor, while the majority (59%) say they will utilise/invest in point solutions that solve specific problems, regardless of vendor. 

It won’t be as simple as ripping off the band-aid when it comes to migrating towards a new-look cybersecurity approach, and it will take careful planning and execution to do it properly (and securely), but the pros do seem to outweigh the cons. 

The ability to ingest data from a range of different sources, investigate and analyse threat levels, and then prioritise and respond to those threats/events, all within a unified platform, is surely going to simplify the lives of (typically) under-resourced security teams. These are the same teams who are monitoring significant numbers of alerts, across a host of security solutions.
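As a toy illustration of that ingest–analyse–prioritise loop, a consolidated platform might score alerts from different tools on a common scale before queueing them for analysts. The sources, weights, and scoring formula below are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical per-source weights: how much we prioritise each tool's alerts.
SOURCE_WEIGHT = {"edr": 1.0, "email_gateway": 0.8, "cloud_audit": 0.6}

@dataclass
class Alert:
    source: str           # which tool raised the alert
    severity: int         # 1 (low) to 5 (critical), as reported by the tool
    asset_critical: bool  # does the alert touch a business-critical asset?

    @property
    def score(self) -> float:
        # Boost alerts that involve critical assets, on top of the source weight.
        boost = 1.5 if self.asset_critical else 1.0
        return self.severity * SOURCE_WEIGHT.get(self.source, 0.5) * boost

def triage(alerts: list) -> list:
    """Ingest alerts from all sources and return them highest priority first."""
    return sorted(alerts, key=lambda a: a.score, reverse=True)
```

The value of the unified approach is less the arithmetic than the fact that every tool’s output lands in one queue, scored on one scale, rather than in half a dozen separate consoles.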

Generative AI – risk or opportunity? 

So, here we are…gen AI and large language models (LLMs). What can we say that hasn’t already been said this year…on multiple occasions? Well, in all honesty, probably not an awful lot… 

  • …has it been fear-inducing? Yes 
  • …has it been disruptive? Absolutely 
  • …will it transform how we live and work? Without a doubt 

We live and breathe B2B tech, so despite the recent carnage at OpenAI, in our minds, it is indisputable that this rapidly evolving technology – the explosion of which has been driven by ChatGPT – will provide significant benefits across all industries, and the world economy. 

We’ve already referenced the phenomenon of shadow AI. This feels like something of an inevitability considering the wide-ranging use cases across software development, marketing, data modelling and many others. But, in the long term, it will probably be viewed as a growing pain – “a necessary evil” – as functions from across the business rush towards gen AI, to ensure that they aren’t seen as the department causing their company to be left behind. 

It is, though, worth sparing a thought for IT security teams during this settling-in phase, as, ultimately, they will still be held accountable if a breach occurs due to a gen AI tool that they might not have approved or had visibility over. To that end, it’s crucial that all areas of the business not only consider how they can best utilise gen AI to support their own objectives, but also how they can work with the IT / IT security department to embed the tools they need in a responsible way. 

When we asked 81 of our community members what they believed would be the biggest challenge and/or transformation in cybersecurity during 2024 (in a verbatim format), just under 60% mentioned AI in some way, shape, or form – with many of them highlighting the potential associated risks, or benefits for cyber criminals. 

Nonetheless, we’re talking about a technology that can be used for good as well as evil. Unified security platforms – extended detection and response (XDR) solutions among them – already lean upon AI, so that the data ingestion, threat analysis, and decisioning phases can be expedited. The reduction/removal of these hugely time-consuming tasks will help to ease the burden on IT security teams, as well as benefit the IT security posture of organisations able to implement such a platform. 

We’ve already noted that cyber criminals are just as innovative, if not more so, than the organisations that they’re targeting. And the security community understands that it can often be the simplest attacks that are the most effective. 

Therefore, at this stage, it’s most likely that gen AI will be used by attackers to improve the success levels of their social engineering attacks, primarily through phishing scams, which can now be executed more effectively and on a larger scale.

Which brings us full circle to one of CISA’s core themes – recognise and report phishing. The other themes, of course, cannot be disregarded, but it feels like this one in particular stands out. This seemingly straightforward task will be made all the more difficult now that cyber criminals have gen AI at their disposal. 
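To see why, consider the kind of rule-of-thumb checks that phishing awareness training has traditionally leaned on – sketched below with hypothetical sender, wording, and link values. A well-crafted gen AI lure can avoid tripping every one of these rules:

```python
import re

# A few classic phishing "tells" – an illustrative, far-from-complete list.
URGENCY = re.compile(r"\b(urgent|immediately|verify your account|password expires)\b", re.I)

def phishing_indicators(sender: str, body: str, links: list) -> list:
    """Return the list of simple heuristics a message trips."""
    hits = []
    if URGENCY.search(body):
        hits.append("urgent language")
    if any(link.startswith("http://") for link in links):
        hits.append("unencrypted link")
    domain = sender.rsplit("@", 1)[-1].lower()
    if any(domain not in link for link in links):
        hits.append("link domain differs from sender domain")
    return hits
```

A fluently written message from a plausible-looking domain, over HTTPS, with no urgency keywords, sails straight past checks like these – which is why training has to evolve beyond spotting bad grammar and suspicious links.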

As such, organisations must invest in proper training for their employees to reduce the risk of them succumbing to increasingly convincing messages. That, in tandem with settling on a security approach and technology stack that suits their business requirements, will give them as good a chance as they can hope for against the flood of rapidly evolving threats coming their way during 2024. 

Cybersecurity for 2024: people, technology, and…?! 

A year is a long time in cybersecurity, and with the developments witnessed in 2023, it begs the question: what on earth will 2024 have in store? The probable answer…more of the same, but on steroids. 

In 2024, we, as the pilots of technology, cannot afford to let it outpace our ability to keep up. People must be at the heart of technology and security transformation to ensure that if something does go wrong, we are able to fix it. Tackling cybersecurity is not just down to the IT / IT security team and the technology; it has to be a wider effort. And this is why CISA set out their guidelines in the way that they did. For a company’s threat mitigation efforts to be a success, everyone in the workforce must hold themselves accountable as well.

From the ground level up, it’s incumbent upon everyone within the organisation to know what the latest threats are – whether it be teenage hackers, nation state attackers or RaaS gangs – as well as the key trends that are on the rise, such as gen AI, and what this means for them in their day-to-day roles. 

We live in a world that’s driven by technology, regardless of industry or organisation size. Sharing knowledge as we all head into 2024 will enable organisations to tackle their people and technology problems, with their people and technology. 

The survey findings are based on quantitative interviews conducted in November 2023. As a member of the Vanson Bourne Community, you’ll gain exclusive access to a variety of insights reports just like this one, based on research with our members.

Blog: Data security perceptions

  • Over half (56%) of IT decision makers surveyed feel that their personal data is less secure now than 5 years ago 
  • Almost 9 in 10 (87%) feel forced to share an increasing amount of personal data 
  • 94% feel that increased regulation is needed to control what voice assistants, such as Google, Siri, and Alexa, are allowed to listen to and collect 

While the Internet of Things (IoT) is currently the fastest-growing data segment, social networks are close on its heels. But where is all the data stored? Who has access to it? And how is it protected? With so many questions, it’s easy to get a little overwhelmed, and even paranoid, about how our data is being used. 

Each time we browse the internet, we (perhaps unwittingly) leave behind a unique digital trail that organisations might store and use to make more effective decisions. Or we may consciously be creating and sharing our digital identities; each social media account we create, discussion thread we participate in, application we fill out electronically, and even the latest gadgets we might browse online, all add to our digital footprint. 

We thought we’d take the opportunity to reach out to our Vanson Bourne Community of IT professionals to get their thoughts on data, from both a consumer and ‘insider’ point of view, and whether the aforementioned concerns might be justified. 

Do IT decision makers feel their data is safe, and do they care? 

Perhaps unsurprisingly, IT decision makers feel their data is most secure with their employer, and least secure with social media platforms (such as Facebook and Instagram) as well as websites (such as news, streaming and shopping). With the overwhelmingly lengthy regulations and protocols in place to protect our personal and professional data held by our employers, this is a reassuring finding, and surely one that makes reading all those policies and documents worthwhile! 

Professional networking sites, however, didn’t go unscathed. Almost 1 in 10 (9%) of the ITDMs we interviewed feel that LinkedIn as a platform needs a complete overhaul of its security process. Given that LinkedIn is one of the top recruitment resources in the UK today, it’s surprising that so many feel their data is at risk with the platform but not with their employer, who could easily obtain their CVs and other personal details from it. 

The latest developments in artificial intelligence are looking set to create a platform shift like that of the cloud, or even the internet itself. There’s a plethora of information about each and every one of us being collected and stored digitally. While most entities that store data have some form of data security procedures in place, the sophistication and level of protection can vary significantly across organisations and, even more so, across borders.

These measures are, in part, aimed at improving our confidence in the privacy and security of our data, yet their impact appears somewhat muted with just over half (56%) of the ITDMs we interviewed feeling that their personal data is less secure now than 5 years ago, while 39% felt this was not the case and 5% didn’t really know. And although only a few data breaches make the headlines – such as Yahoo, LinkedIn and Marriott International which impacted billions of accounts globally – they do sadly remain more commonplace than we might dare to think. 

So, what if the worst were to happen? 

Unsurprisingly, all the ITDMs we interviewed state they’d be concerned if their data were to be leaked. This makes sense from a consumer point of view – after all, these same ITDMs are consumers themselves. But aren’t they also those responsible for securing our data in the first place? So, who is to be held accountable when data is leaked? 

Evidently, the blame shouldn’t be laid solely at the door of said ITDMs, themselves responsible for securing our data. At least not in their eyes. While ITDMs acknowledge their responsibility in protecting the data, and admit to being at fault when data is leaked, short of blaming the attackers themselves, 86% of respondents feel the company (i.e., Instagram or Facebook) is at least partially to blame for social media data attacks. Employers fared slightly better, with 79% blamed for a related data leak.

More research might be needed to delve deeper into blame, but is that really the point here? If most of us feel that data breaches are out of our control, either personally or professionally – yet we need to give our data out to survive in the world – do we have any autonomy left? With almost 9 in 10 (87%) of those interviewed feeling forced to share an increasing amount of personal data, we may not like the answer to that question! 

Are humanoid or autonomous robots any safer? 

Like Frankenstein’s monster – assembled in a scientific experiment from a variety of body parts to resemble a god-like human – humanoid robots are robots that look to mirror human behaviour, and can sometimes also have human-like facial features and expressions. Are they the modern-era monster? Typically, these robots can perform human-like activities such as running, jumping, and carrying objects. An extension of this is autonomous robots, which operate independently of human operators, using sensors to perceive the environment around them – cleaning bots or hospitality bots, for example. Both have seen growing interest in the last few years, with 89% of our ITDMs surveyed reporting an experience with one, and 19% having interacted with or used one. However, only 1% of decision makers feel the information these robots use is stored in a secure location. Additionally, just under half (47%) are hopeful the data is securely protected, but the majority (49%) don’t trust that it’s protected safely. 

This begs the question: from an insider’s perspective, why are ITDMs – a tech-savvy group of professionals – interacting with, or exposing themselves to, potentially unsafe technology? Perhaps it’s because many (62%) feel these robots are the future. And the advancement doesn’t stop with robots: through the growing sophistication of AI, ML, and robotics, our world is gaining in independence and complexity as many roles move away from humans. Three-quarters of ITDMs surveyed feel AI technology, such as Copilot by Microsoft, or ChatGPT, will disrupt the administrative job market by replacing human employees. Is the future of jobs no longer human? 

Perhaps the financial penalties imposed on organisations for data breaches should be paid to the subjects of the data itself. Indeed, 91% of the ITDMs we spoke with agree that organisations should be legally forced to financially compensate individuals for breaches involving their personal data. Almost all (94%) feel that increased regulation is needed to control what voice assistants, such as Google, Siri, and Alexa, are allowed to listen to and collect. 

So why do we need so much data? 

The importance of data to an organisation’s success cannot be overstated – but we would say that, wouldn’t we? Well, research conducted by McKinsey suggests that companies that strategically use data, such as consumer behavioural insights, to inform their business decisions outperform their peers in sales growth by 85%. 

It’s clear that concern around the data organisations hold is at the forefront for professionals and consumers alike. But as we’ve seen from the vast majority of the ITDMs we’ve interviewed, themselves consumers too, there’s increasing pressure to share more personal data with organisations. With that in mind, organisations would be wise to heed these concerns and regularly review the security policies and procedures implemented to safeguard this data – what is collected, why is it collected, how is it stored, how long for, how is it protected, etc. Such measures would ensure that they are best placed to avoid a data leak, and any financial or legal implications that may bring.

Perhaps more importantly however, is that in doing so organisations might earn the trust of those whose data they are responsible for safeguarding, and in turn increase the amount of data people are willing to share with them.  Organisations need to be accountable and commit to understanding their audience. 

Authentic messaging that aligns with the values of their audience can play a significant part in building trust in a brand. Vanson Bourne has a wealth of experience helping brands refine their messaging through understanding the impact that key individual elements of a message have on audience appeal. In a separate survey of 300 B2B marketers, strategists, and insight professionals, from the US and UK, 95% told us their organisation is conducting (or has plans to conduct) message testing. 

The survey findings are based on quantitative interviews conducted in October 2023. As a member of the Vanson Bourne Community, you’ll gain exclusive access to a variety of insights reports just like this one, based on research with our members.

What do our members think?

“Vanson Bourne Community sends me interesting IT related surveys. The rewards I receive are generous and I love that I can send them to a selection of charities.”
Operations Manager, Financial Services
"I have been part of the Vanson Bourne Community for many many years now, and the surveys are always interesting and do make me think about my understanding of topics, its great being part of the group."
Head of Technology, Media
"Vanson Bourne Community surveys are relevant to my job role. The surveys are well designed and not repetitive. The survey incentives are variable, extremely fair and delivered quickly. This is by far my favourite panel."
IT Manager, Financial Services
Interested?
Come and be part of our great community!