The Security Interviews: Protecting your digital self

Our digital self – the virtual presence of who we are online – has a pervasive influence in the real world. People make judgements based on these digital depictions, so what can be done to ensure positive representation?

By Peter Ray Allison

Published: 16 Jan 2023 14:30

With our lives becoming ever more interconnected, and both real-world and online activities being recorded, we are generating an increasingly sophisticated record of what we are like. This digital self is a virtual representation of our lives, which people use as a basis for judgements in lieu of meeting in person, and which algorithms use to inform their responses to us.

“Nobody’s life is solely offline these days,” says Ben Graville, founder of Visible. “When we go about our daily lives, whether we like it or not, the side-effect of our conscious use of technology is an unconscious data trail that leaves a digital shadow – a detailed representation in data of who we are, how we think and the things we do. It’s a manifestation of us, but one we didn’t know we were leaving.”

Our digital self is a virtual footprint that acts as a digital trail of our online lives, existing long after we have logged off. It is comparable to physical body language, which is often estimated to account for more than half of how we communicate.

A digital self is generated through social media and online activities, using content (what we post, such as social media, blog posts and playlists) and the associated metadata (where, when and how we post, as well as the frequency). In many cases, when creating a virtual profile, the metadata can be just as powerful as the content itself.
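To illustrate how metadata alone can be revealing, here is a minimal sketch using hypothetical post timestamps. The data and the inference are invented for illustration; the point is that posting times, with no content at all, can expose a behavioural pattern such as a late-night posting habit.

```python
from datetime import datetime
from collections import Counter

# Hypothetical post timestamps -- metadata only, no post content at all
timestamps = [
    "2023-01-09 23:41", "2023-01-10 00:12", "2023-01-10 23:55",
    "2023-01-11 23:30", "2023-01-12 00:05", "2023-01-13 23:48",
]

# Bucket posts by hour of day to surface a behavioural pattern
hours = Counter(
    datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in timestamps
)

# The posts cluster around midnight -- a habit inferred without
# reading a single word of what was posted
most_common_hour, count = hours.most_common(1)[0]
print(most_common_hour, count)
```

Profiling systems apply the same idea at scale, combining timing, location and device signals into a picture of the person behind the account.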

“Social media is the tip of the iceberg,” says Graville. “It’s the most obvious, because that’s where your human interactions are. There are probably more decisions being made about you by algorithms than there are by people.”

The internet never forgets, and it is this permanence that makes it so powerful in generating a complex representation of our lives. Even after a website or service has ended, internet archives ensure nothing is ever truly gone.

Our digital self may not be a true representation of who we are. Due to the anonymising nature of the internet, there can be a temptation to share exaggerated or extreme posts, which may be intended in jest or to generate discussion. However, as these posts remain in perpetuity, the original context may be unclear and posts may not be taken as originally intended.

The permanent nature of the internet means that, over time, we generate vast amounts of online data from which our digital self can be formed. As this virtual footprint is freely shared and distributed, it can be used by companies to assess job applicants for their suitability. Of course, digital selves can also be exploited by criminals, such as for identifying times when people are away from home. It may also be used for research purposes, for example investigative journalism.


Organisations are also using machine learning algorithms to automate decision-making using publicly available data. For example, the Department for Work and Pensions (DWP) is starting to use machine learning as a tool to help identify fraudulent applicants for Universal Credit.
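The kind of automated decision-making described above can be sketched in a few lines. This is a hypothetical illustration only, not the DWP's actual system, whose workings are not public: each application is given a risk score from simple signals, and high-scoring cases are queued for human review rather than decided automatically.

```python
# Hypothetical applications -- all fields and thresholds are invented
applications = [
    {"id": 1, "declared_income": 0,    "bank_inflows": 2400, "prior_claims": 1},
    {"id": 2, "declared_income": 1200, "bank_inflows": 1250, "prior_claims": 0},
]

def risk_score(app):
    """Combine simple signals into a fraud-risk score between 0 and 1."""
    score = 0.0
    # A large gap between declared income and observed inflows is a red flag
    if app["bank_inflows"] - app["declared_income"] > 1000:
        score += 0.6
    # Prior claims add a smaller weight
    score += 0.2 * app["prior_claims"]
    return min(score, 1.0)

# Applications above the threshold are flagged for human review
flagged = [a["id"] for a in applications if risk_score(a) >= 0.5]
print(flagged)
```

Real systems replace the hand-written rules with trained models, but the structure is the same: public and transactional data in, a score out, and a decision influenced by the applicant's data trail.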

Social media algorithms also use the digital selves of their users to curate content, by identifying the posts, articles and advertisements they are most likely to engage with. This effectively creates echo chambers, reinforcing users' world views and political biases. As a result, the content served may not be what they need to see to maintain their mental health or to make rational judgements.

“We know algorithms are built to increase engagement and capture your attention and their bias towards confrontation,” says Graville. “One of the main reasons why the centre of politics has disappeared online is because you’ve essentially got algorithms deciding how to tailor your world view, based on your digital self, thinking they’ve been helpful. But actually, for society as a whole, it is probably not that helpful.”

Own your digital self

By taking ownership of our digital selves, people can ensure they are not inadvertently misrepresenting themselves online. Learning how they are represented in data enables them to understand the data driving the decisions made about them. From there, people can manage how they appear in order to portray their qualities and enhance their online profile.

“Understanding how your digital body language is being interpreted is critical to one’s well-being and success in the real world,” says Graville. “We’ve got more than one sense to be able to judge people, we’ve got sight and sound, but when you’re doing most of your business online, you don’t have the luxury of those other senses to make these decisions in the human context.”

To properly manage a digital self, there first needs to be an understanding of how a person’s virtual representation is currently viewed. This is ideally undertaken by an independent external viewer who does not have any existing preconceptions that may colour their perspective.


There are already tools, such as Visible, being developed to provide an overview of a digital self. Because these applications draw on the same publicly accessible data that profiling algorithms use, they can offer an unbiased representation.

“We see ourselves as a deep tech firm, in that it is a completely decentralised and federated AI [artificial intelligence] approach,” explains Graville. “The data on your devices doesn’t leave your devices, other than to talk to the service you’re trying to talk to. It doesn’t come to us. Visible runs locally on your machine; there’s no cloud infrastructure or sharing of personal information.”

An awareness of their digital self, and how it is perceived, enables people to recognise the drivers behind this perception, making them better able to modify their online behaviour to present a truer image of who they are. Online behaviour encompasses not just the content posted to social media, but also the times and frequency of interactions and the devices used.

“Seeing the underlying data that exists – which might be something around your demographic, where you live, your age, your online activity, the way you speak, the way you share things, what you say, the things other people say around you (guilt by association) and those concepts – enables people to understand how their digital self will have formed.”

There is also the option of taking steps to mitigate distortions in the digital self by deleting outlying historical social media posts. While the internet never forgets, the impact of these posts can nonetheless be lessened. All of this will modify the way algorithms perceive people and, in turn, present how they appear online.

One technique that can be useful in managing our virtual representations is comparing a digital self with how others present themselves online. This is an inverse of peer pressure: comparing a digital self against those of one's peers establishes a baseline of what is typical, as well as showing how to stand out for the right reasons.

The UK’s data protection regulations are currently being reviewed to allow a greater use of user data and thereby enable the country to become a hub for AI and machine learning research. “As we’ve left the EU, the government is taking the chance to relook at our data protection law,” says Graville. “They’re thinking about removing some of the safeguards around data protection and machine decision-making, which would make it easier for AI to flourish within the UK.”


This will enable a greater sharing of personal data and, in turn, mean people’s digital shadows become an increasingly complex array of data networks.

The net neutrality principle is a powerful foundation of the internet. However, the internet has evolved to become biased in favour of businesses: personal data can be freely shared, but also exploited. Just as we cannot avoid giving away body language in a physical setting, it is impossible not to share our digital self. However, psychological ownership, and knowing how their data is being used and monetised, allows people to change their online behaviour to avoid exploitation of their data.

“To stop us going into a dystopian world in the future, people need to feel empowered to own their identity and digital self, and use that to make the internet a fairer place, where people have as much to benefit from their digital self as businesses do,” says Graville.

The free and open nature of the internet means we cannot avoid sharing our data and still be online. As such, our digital selves will continue to offer a reflection of who we are, regardless of how inaccurate that image may be. By understanding how they are perceived online, people can take control of their digital selves to ensure that their virtual representation is a true reflection of who they are, which promotes the qualities they most want to exhibit.
