Despite data protection being high on everyone’s minds, the valuable information about all of us generated by and stored on our phones has never been more vulnerable. How is our data being used and abused, and can users see some of that value themselves?
California gun owners, Marriott Hotel guests, TikTok teens – no-one is safe from having personal data leaked, and as more and more vital (or otherwise) services demand increasing amounts of personal info, we are more exposed to breaches than ever before.
We named personal data control one of our telecoms trends for 2022, and later revisited the progress made so far, noting that as devices and applications extract ever greater quantities of data, and as that data informs more big decisions, controlling this valuable resource must become a high priority for all digital enterprises going forward.
Thanks to purposefully byzantine terms and conditions, many companies have ridden roughshod over users’ personal data, as newer technologies and services extract staggeringly vast data sets, and machine learning wrings ever more information about us from existing data, identifying trends and refining messaging.
By 2025, 463 billion gigabytes of data will be created every single day. To put that amount into perspective, it’s estimated that all the words ever spoken by humankind equal just five billion gigabytes. By the time a digital native child reaches the age of 13, online advertising companies will have generated approximately 72 million data points about them based on their browsing habits.
Massive breaches, malicious or otherwise, jeopardise the integrity of data held by digital service providers; a stark reminder came earlier this year, when the personal details of over one billion Chinese citizens were stolen from a Shanghai police database, hosted on an Alibaba private cloud, by a hacker demanding 10 bitcoin (approx. $200,000 at the time of writing).
Over 50 million current, former and even prospective T-Mobile customers in the US had their personal details stolen last year, including dates of birth, driver’s licence information, and Social Security numbers. These customers “now face a substantial, imminent, and ongoing threat of identity theft, scams, and resulting harm,” according to their class action against the telco.
Aside from massive, malicious attacks, smaller instances of data mishandling can be just as damaging to consumer confidence – consider dating app Grindr, which was found to be sharing users’ HIV status with two US-based companies over an unencrypted connection.
As the number of high-profile data leaks continues to grow, consumers are becoming increasingly wary of giving up personal data, even to access personalised services.
Already this year, Google has banned dozens of data-harvesting mobile apps from its Play Store – a seemingly unrelated set, including a speed trap alert app and a QR code reader, all of which were sending user data to a company registered in Panama with shady links to a US defence contractor.
The US Supreme Court’s recent ruling on abortion rights – itself first revealed to the public via a leak of confidential information – has led Google to automatically delete location data for visits to abortion clinics, lest a user’s digital footprint place them in legal trouble.
One app, Flo, went one step further and introduced an anonymous mode, stripping personal identifiers from accounts so that even data requests from law enforcement wouldn’t be able to link accounts to a particular user.
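As a rough illustration of what such an anonymous mode involves (this is a toy sketch under assumed field names, not Flo’s actual implementation), stripping identifiers means dropping any field that can name a person and re-keying the record with a random token that has no link back to the account:

```python
import secrets

# Hypothetical PII fields; a real app would maintain a vetted list.
PII_FIELDS = {"name", "email", "phone", "device_id"}

def anonymise(record: dict) -> dict:
    """Strip personal identifiers and key the record by a random token."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # A random token: even the service cannot map it back to a user,
    # so a law-enforcement request for "user X" finds nothing to match.
    cleaned["account_token"] = secrets.token_hex(16)
    return cleaned

profile = {"name": "Jane Doe", "email": "jane@example.com",
           "cycle_length_days": 28, "last_entry": "2022-07-01"}
anon = anonymise(profile)
```

The health data itself survives for the app’s functionality; only the mapping from data to person is destroyed.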
This move may be too little too late for some, given the FCC’s recent revelation that major US providers have been tracking customer location data and passing on information stored for months or years to law enforcement.
Meanwhile, in a “landmark victory,” the UK High Court ruled that the security and intelligence services must acquire “prior independent authorisation” to obtain private communications data from telcos during investigations. But can mere legislation stop the latest, most advanced government-backed spyware?
Dubbed “Europe’s Watergate,” the nefarious Pegasus spyware, developed by NSO Group, exploits zero-day, zero-click vulnerabilities to extract any data from a mobile phone without alerting even the most safety-conscious of users. Beginning life, ironically, as a means for support workers to legitimately access customer devices, it has been used by governments to target the devices of over 450 individuals – most notably dissident Saudi journalist Jamal Khashoggi and (allegedly) Jeff Bezos – as well as diplomats, activists and journalists across the world.
Legislation covering how this vast amount of personal data is used nonetheless remains out of step with consumer needs and, increasingly, social mores, forcing many service providers and internet giants to step in.
But with governments behind much snooping, is there much hope for the law to protect user data when it doesn’t benefit the powers that be?
Method in the Adness
User data is the lifeblood of internet-based advertising (IBA), an industry that generates hundreds of billions of dollars every year.
Meta has recently blamed Apple’s Privacy Relay – a new feature exempting iPhone users from tracking, which 62% of them are reportedly using – for costing the company $10 billion in lost revenue.
Was this a benevolent act, or a swipe at Facebook’s ad revenues?
When Google deactivates advertising cookies in Chrome – the most popular web and mobile browser in the world – the entire advertising business model will be upended, bringing the internet’s main revenue-generating engine grinding to a halt (provided they don’t keep delaying the switch-off ad infinitum).
The previously proposed replacement, Federated Learning of Cohorts (FLoC), placed users into massive “cohorts” to deliver “interest-based advertising.” Its successor, the Topics API, works by generating a list of five interests based on the websites a user visits over a given week, stored in their browser for three weeks before being deleted. When the user visits a website, the API shares three of their interests from the previous three weeks. These categories, of which there are currently around 350, are selected locally on the user’s device, without storing any data on external servers.
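The mechanism described above can be sketched in a few lines. This is a toy simulation of the behaviour as described – per-week top-five interests, kept for three weeks, one topic surfaced per past week – not Chrome’s actual implementation, and the tiny site-to-topic taxonomy is made up for illustration:

```python
import random
from collections import Counter, deque

# Toy site -> topic taxonomy (the real one has ~350 categories).
TOPICS = {"news.example": "News", "kicks.example": "Football",
          "fry.example": "Cooking"}

class TopicsSketch:
    """Toy model: weekly top-5 interests, computed entirely on-device."""
    def __init__(self):
        self.weeks = deque(maxlen=3)  # only the last three weeks are kept
        self.current = Counter()

    def visit(self, site):
        if site in TOPICS:
            self.current[TOPICS[site]] += 1

    def end_week(self):
        # Distil the week's browsing into (up to) five top interests.
        self.weeks.append([t for t, _ in self.current.most_common(5)])
        self.current = Counter()

    def topics_for_site(self):
        # One topic per past week: at most three topics shared with a site.
        return [random.choice(week) for week in self.weeks if week]
```

The key privacy property is in `topics_for_site`: a site never sees the full interest list, only a small per-week sample, and everything older than three weeks has already been discarded.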
That’s the way the Supercookie crumbles
Meanwhile, Vodafone and Deutsche Telekom are trialling their solution to the cookie’s crumbling: TrustPid, which assigns a pseudo-anonymous digital token – a so-called “supercookie,” stored in the HTTP header rather than the browser – created by combining a mobile number and IP address, that tracks the user’s browsing habits, generating a profile from which it can serve them relevant ads.
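To see why such a token counts as a “supercookie,” consider a minimal sketch of how an operator might derive one. The names, key and scheme here are illustrative assumptions, not TrustPid’s actual design:

```python
import hashlib
import hmac

# Hypothetical secret held by the carrier, never shared with ad partners.
OPERATOR_KEY = b"operator-secret"

def make_token(msisdn: str, ip: str) -> str:
    """Derive a stable pseudonymous token from mobile number + IP."""
    msg = f"{msisdn}|{ip}".encode()
    return hmac.new(OPERATOR_KEY, msg, hashlib.sha256).hexdigest()[:24]

# The carrier injects the token as an HTTP header on outgoing requests
# (header name is hypothetical), so it rides alongside the traffic itself.
headers = {"X-Network-Token": make_token("+491701234567", "203.0.113.7")}
```

Because the token is deterministic for a given subscriber, every site receiving the header sees the same identifier and can build a profile – and unlike a browser cookie, the user can’t simply clear it.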
Over in the US, Verizon previously got into trouble for targeting oblivious users with supercookies, paying a $1.35 million fine back in 2016 for its troubles; of major concern was that, since users could not dissociate themselves from supercookies, websites could permanently track visitors.
Undeterred by its record on data handling, T-Mobile launched App Insights earlier this year, gathering anonymised usage data about the apps installed on phones, Wi-Fi networks and visited websites, and selling that info on. Customers can only opt out by installing a separate app – one lacking any T-Mobile branding whatsoever – to manage their data.
The Privacy Paradox
Although protecting their personal info is important to users, most rarely make any concerted effort to do so, and many are more than willing to give it away. Consumers have pretty much given up on security; half of those surveyed share data with so many different companies that they can’t vouch for the security of each one.
Why? We don’t really know – hence the “privacy paradox.” Even now, almost a decade after revelations of warrantless mass surveillance and amid the ongoing effects of the Cambridge Analytica scandal, we still choose to prioritise convenience over safety when handing over data for goods and services.
Of those who share their “dirty little secrets” on messaging services, 79% admit to discussing topics that could get them in trouble if revealed, including 47% saying it would ruin relationships with friends or family, 14% making sexist, racist or homophobic comments, 12% discussing substance abuse and 10% admitting to infidelity.
In fact, 20% of us don’t care how much data we share online, and 26% believe it’s simply “inevitable” that it will be leaked. A DMA report found that between 2012 and 2017 the proportion of “data pragmatist” and “data fundamentalist” users fell, while the number of “data unconcerned” users increased, despite a “maturing data economy and a more data literate public.”
As ever more granular personal data is extracted through our phones and shared with ever more third parties, enterprises must ensure that it’s used more wisely, lest users become unwilling to give up personal data, even for personalised services.
Making Data Pay for Everyone
Telcos put a great deal of time, money, and effort into gathering and monetising the hoard of customer data they sit atop, purely for their own enrichment.
Collecting and securing all this data is pointless if it isn’t being put to good use. Using it, telcos can create customer profiles and segments based on granular preferences, identify the most valuable customers, and address their wants and needs with personalised offerings – provided they can free this data from its silos and into the cloud. For now, however, customers see little benefit themselves from handing over their intimate details to digital enterprises.
Maybe it’s time they did start being rewarded for how their data is used, under a data licensing model? In exchange for health and lifestyle data, services such as Vitality give users all manner of benefits and rewards, including money back on car insurance for good driving through gamification, with bigger discounts and rewards at higher membership levels.
Rather than giving up their personal data for the benefit of their service providers, it’s important for customers to have digital preference controls for managing what they share and who they share it with.
What would consumers value from their CSP in return for sharing their data? Perhaps users could have their data used and analysed in exchange for early access to new services and features. Would they even pay a premium for their data not to be sold on? Or would customers give up a fraction of their personal data for tailored alerts and offers? How much of the estimated £84.50 that an email address is worth should go to its owner? It might take more than a coffee and a pastry to convince them.
There are now several enterprises dedicated to getting users paid for sharing their data, such as the blockchain-powered services Datawallet and Permission.io, or the Data Dividend Project, spearheaded by former US Presidential hopeful Andrew Yang – though quite how he intends to accomplish that remains unclear.
CSPs already have the information in their existing systems, but little of it is collected in a format that permits easy retrieval and analysis, and the more that’s collected, the greater the complexity of the infrastructure needed to keep it safe.
Telcos have a tightrope to walk: protecting customer data while making use of it, and getting customers to see the value in doing so. Properly balanced, though, data can create a wide array of growth opportunities for customers and services, provided customers have the right controls for managing their data, rather than it being taken exclusively for profit.