
Personalised security tech: are we willing to use it?

Imagine that your computer knows your mental weaknesses better than you do. Creepy? Perhaps. But what if it helps prevent you from falling for scams? What if it stops you before you do something harmful to yourself or to someone else? Would you use this technology?



To some extent, big online advertising companies have already figured out when to target whom with what content. The systems behind this are called recommender systems, and they thrive on heaps of data about what you browse, buy and search for online. An often-cited case from the US showed that the retailer Target knew a woman was pregnant before her own family did, based on the products she bought. If a company can accurately figure out such things without necessarily informing us, imagine the possibilities for marketeers to make us buy things we do not really need.


Of course, this is old news to you if you have read my previous blog posts on data and privacy.


In this blog post, I want to probe your thoughts about using those very same techniques to improve people's security. There is a striking parallel here with recent developments in personalised medicine. People already monitor their sleep patterns, their periods, workouts, weight, heart rate and whatnot through wearable and mobile technologies. The beauty of these applications is that they give people control over their own data and insight into their own habits. It is not too much of a leap to then start thinking of applications in the cybersecurity domain.


My research aims to bridge social science and cybersecurity. Why do people fall for scams? Why do people believe fake news? How can we make them better at detecting online deception through smart technologies? To do so, we need to understand people, ourselves, even better, through data. And guess what? Platforms like Amazon, Apple and Google already have that data. It is only a matter of time before these data processors can figure out more abstract knowledge about us. What subjects drive us crazy? What biases do we have? What do we have a soft spot for? How do our emotions fluctuate over time?


When you know what people are sensitive to, it is easy to use that knowledge against them. But equally well, we could hand these insights to people themselves and teach them to avoid the scams they are likely to be targeted with, as sketched below.
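
To make that a little more concrete, here is a minimal sketch of what such a protective feature could look like. Everything in it is a hypothetical assumption on my part: the sensitivity profile, the cue labels and the threshold are illustrative placeholders, not an existing system.

```python
# Hypothetical sketch: warn a user when a message plays on persuasion
# cues they are known to be sensitive to. The profile, cue labels and
# threshold below are illustrative assumptions, not a real product.

# A user's (self-managed) sensitivity profile: how strongly each
# persuasion cue tends to work on them, on a 0..1 scale.
profile = {
    "urgency": 0.9,     # "act now or lose your account"
    "authority": 0.6,   # "a message from your bank or your boss"
    "scarcity": 0.3,    # "only 2 left in stock"
}

def risk_score(message_cues, profile):
    """Combine the cues detected in a message with the user's
    personal sensitivities into a single risk score."""
    return sum(profile.get(cue, 0.0) for cue in message_cues)

def maybe_warn(message_cues, profile, threshold=1.0):
    """Show a personalised warning only when the combined score
    crosses the (arbitrary) threshold."""
    score = risk_score(message_cues, profile)
    if score >= threshold:
        return f"Careful: this message uses tactics you are sensitive to (score {score:.1f})."
    return None

# Example: a message detected to combine urgency and authority cues.
print(maybe_warn(["urgency", "authority"], profile))
```

The point of the design is that the same profile a scammer would love to have is kept on the user's side and works in their favour.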


Another parallel we can draw is with recent developments in autonomous vehicles (i.e., self-driving cars). Scientists are looking at how to infer people's mental states from their behaviour in the car. When people seem drowsy or distracted, subtle changes in seating or lighting can alert them to risky traffic situations, so that they are ready to take over from the autonomous driver if needed. The same goes for our online interfaces. We could adjust user interfaces based on how you are feeling, or delay showing enraging social media posts based on your likely state of mind, to prevent you from falling for manipulative content or to help you deal with it better.
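
As a thought experiment, the "delay enraging posts" idea could be as simple as the following sketch. Both inputs, the inferred user arousal level and the per-post "anger" score, are stand-ins I made up for whatever signals such a system would actually derive.

```python
from dataclasses import dataclass

# Hypothetical sketch of mood-aware content pacing. The arousal level
# and anger score are placeholder signals, not an existing API.

@dataclass
class Post:
    text: str
    anger_score: float  # 0..1: how emotionally charged the post is

def schedule(posts, user_arousal, delay_threshold=0.7):
    """Show calm posts immediately; hold back emotionally charged
    posts while the user's inferred arousal is high."""
    show_now, hold = [], []
    for post in posts:
        if user_arousal > delay_threshold and post.anger_score > 0.5:
            hold.append(post)   # resurface later, when the user is calmer
        else:
            show_now.append(post)
    return show_now, hold

feed = [Post("Cute otter pictures", 0.1), Post("Outrage-bait headline", 0.9)]
now, later = schedule(feed, user_arousal=0.8)
print([p.text for p in now], "| delayed:", [p.text for p in later])
```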


This would be the start of personalised security technology: taking human data to the next level and making people aware of their sensitivity to targeted (malicious) content. It may be only a matter of time before someone in the world attempts to build it.
