Blog: Data security perceptions

  • Over half (56%) of IT decision makers surveyed feel that their personal data is less secure now than 5 years ago 
  • Almost 9 in 10 (87%) feel forced to share an increasing amount of personal data 
  • 94% feel that increased regulation is needed to control what voice assistants, such as Google Assistant, Siri, and Alexa, are allowed to listen to and collect 

While the Internet of Things (IoT) is currently the fastest-growing data segment, social networks are close on its heels. But where is all this data stored? Who has access to it? And how is it protected? With so many questions, it's easy to get a little overwhelmed, and even paranoid, about how our data is being used. 

Each time we browse the internet, we (perhaps unwittingly) leave behind a unique digital trail that organisations might store and use to make more effective decisions. Or we may consciously be creating and sharing our digital identities; each social media account we create, discussion thread we participate in, application we fill out electronically, and even the latest gadgets we might browse online, all add to our digital footprint. 

We thought we'd take the opportunity to reach out to our Vanson Bourne Community of IT professionals to get their thoughts on data, from both a consumer and an 'insider' point of view, and to see whether the aforementioned concerns might be justified. 

Do IT decision makers feel their data is safe, and do they care? 

Perhaps unsurprisingly, IT decision makers (ITDMs) feel their data is most secure with their employer, and least secure with social media platforms (such as Facebook and Instagram) and websites (such as news, streaming and shopping sites). Given the extensive regulations and protocols in place to protect the personal and professional data held by our employers, this is a reassuring finding, and surely one that makes reading all those policies and documents worthwhile! 

Professional networking sites, however, didn't go unscathed. Almost 1 in 10 (9%) of the ITDMs we interviewed feel that LinkedIn needs a complete overhaul of its security processes. Given that LinkedIn is one of the top recruitment resources in the UK today, it's surprising that so many feel their data is at risk with the platform but not with their employer, who could easily obtain their CVs and other personal details from it. 

The latest developments in artificial intelligence look set to create a platform shift like that of the cloud, or even the internet itself. A plethora of information about each and every one of us is being collected and stored digitally. While most entities that store data have some form of data security procedures in place, the sophistication and level of protection can vary significantly across organisations and, even more so, across borders.

These measures are, in part, aimed at improving our confidence in the privacy and security of our data, yet their impact appears somewhat muted: just over half (56%) of the ITDMs we interviewed feel that their personal data is less secure now than 5 years ago, while 39% feel this is not the case and 5% don't really know. And although only a few data breaches make the headlines (such as those at Yahoo, LinkedIn and Marriott International, which impacted billions of accounts globally), they sadly remain more commonplace than we might dare to think. 

So, what if the worst were to happen? 

Unsurprisingly, all the ITDMs we interviewed said they'd be concerned if their data were to be leaked. This makes sense from a consumer point of view; after all, these same ITDMs are consumers themselves. But aren't they also the people responsible for securing our data in the first place? So, who should be held accountable when data is leaked? 

Evidently, the blame shouldn't be laid solely at the door of the ITDMs themselves, despite their responsibility for securing our data. At least not in their eyes. While ITDMs acknowledge their responsibility for protecting data, and admit to being at fault when it is leaked, short of blaming the attackers themselves, 86% of respondents feel the company (e.g., Instagram or Facebook) is at least partially to blame for social media data attacks. Employers fared only slightly better, with 79% blaming them for a related data leak. 

More research might be needed to delve deeper into blame, but is that really the point here? If most of us feel that data breaches are out of our control, either personally or professionally, yet we need to give out our data to survive in the world, do we have any autonomy left? With almost 9 in 10 (87%) of those interviewed feeling forced to share an increasing amount of personal data, we may not like the answer to that question! 

Are humanoid or autonomous robots any safer? 

Like Frankenstein's monster, assembled through scientific experiment from a variety of body parts to resemble a god-like human, humanoid robots are robots designed to mirror human behaviour, and can sometimes have human-like facial features and expressions. Are they the modern-era Frankenstein's monster? Typically, these robots can perform human-like activities such as running, jumping, and carrying objects. An extension of this is autonomous robots, which operate independently of human operators, using sensors to perceive the environment around them, such as cleaning bots or hospitality bots. 

Both have seen growing interest in the last few years, with 89% of the ITDMs we surveyed reporting an experience with one, 19% of whom have interacted with or used one. However, only 1% of decision makers feel the information these robots use is stored in a secure location. Just under half (47%) are hopeful the data is securely protected, but slightly more (49%) don't trust that it is. 

This raises the question: from an insider's perspective, why are ITDMs, a tech-savvy group of professionals, interacting with or exposing themselves to potentially unsafe technology? Perhaps it's because many (62%) feel these robots are the future. And the technological advancement doesn't stop with robots: through the growing sophistication of AI, ML and robotics, our world is gaining in independence and complexity as many roles move away from humans. Three quarters of the ITDMs surveyed feel AI technology, such as Microsoft Copilot or ChatGPT, will disrupt the administrative job market by replacing human employees. Is the future of jobs no longer human? 

Perhaps the financial penalties imposed on organisations for data breaches should be paid out to the subjects of the data itself: 91% of the ITDMs we spoke with agree that organisations should be legally required to compensate individuals financially for breaches involving their personal data. Almost all (94%) feel that increased regulation is needed to control what voice assistants, such as Google Assistant, Siri, and Alexa, are allowed to listen to and collect. 

So why do we need so much data? 

The importance of data to an organisation's success cannot be overstated, but we would say that, wouldn't we? Well, research conducted by McKinsey suggests that companies that strategically use data, such as consumer behavioural insights, to inform their business decisions outperform their peers in sales growth by 85%. 

It's clear that concern around the data organisations hold is at the forefront for professionals and consumers alike. But, as we've seen from the vast majority of the ITDMs we've interviewed, themselves consumers too, there's increasing pressure to share more personal data with organisations. With that in mind, organisations would be wise to heed these concerns and regularly review the security policies and procedures implemented to safeguard this data: what is collected, why it is collected, how it is stored, for how long, how it is protected, and so on. Such measures would ensure that they are best placed to avoid a data leak, and any financial or legal implications that may bring.

Perhaps more importantly, however, in doing so organisations might earn the trust of those whose data they are responsible for safeguarding, and in turn increase the amount of data people are willing to share with them. Organisations need to be accountable and commit to understanding their audience. 

Authentic messaging that aligns with the values of their audience can play a significant part in building trust in a brand. Vanson Bourne has a wealth of experience helping brands refine their messaging by understanding the impact that key individual elements of a message have on audience appeal. In a separate survey of 300 B2B marketers, strategists and insight professionals from the US and UK, 95% told us their organisation is conducting (or has plans to conduct) message testing. 

The survey findings are based on quantitative interviews conducted in October 2023. As a member of the Vanson Bourne Community, you'll gain exclusive access to a variety of insight reports just like this one, based on research with our members.

What do our members think?

“Vanson Bourne Community sends me interesting IT related surveys. The rewards I receive are generous and I love that I can send them to a selection of charities.”
Operations Manager, Financial Services
"I have been part of the Vanson Bourne Community for many many years now, and the surveys are always interesting and do make me think about my understanding of topics, its great being part of the group."
Head of Technology, Media
"Vanson Bourne Community surveys are relevant to my job role. The surveys are well designed and not repetitive. The survey incentives are variable, extremely fair and delivered quickly. This is by far my favourite panel."
IT Manager, Financial Services
Interested?
Come and be part of our great community!