Tuesday, 30th April 2024
The European Commission has today designated Apple with respect to iPadOS, as a gatekeeper under the Digital Markets Act (DMA). iPadOS, despite not meeting the quantitative thresholds laid down in the DMA, constitutes an important gateway for business users to reach end users and therefore should be designated as a gatekeeper.
The Commission’s investigation found that Apple presents the features of a gatekeeper in relation to iPadOS, as among others:
– Apple’s business user numbers exceeded the quantitative threshold elevenfold, while its end user numbers were close to the threshold and are predicted to rise in the near future.
– End users are locked-in to iPadOS. Apple leverages its large ecosystem to disincentivise end users from switching to other operating systems for tablets.
– Business users are locked-in to iPadOS because of its large and commercially attractive user base, and its importance for certain use cases, such as gaming apps.
On the basis of the findings of the investigation, the Commission concluded that iPadOS constitutes an important gateway for business users to reach end users, and that Apple enjoys an entrenched and durable position with respect to iPadOS. Apple has now six months to ensure full compliance with the DMA obligations as applied to iPadOS.
—
The UK Product Security and Telecommunications Infrastructure (Product Security) regime (PSTI) comes into effect today, on the 29th April 2024. This means that manufacturers of phones, TVs and smart doorbells are now legally required to protect internet-connected devices against access by cybercriminals by banning easy to guess passwords like ‘admin’ or ‘12345’.
Brands also have to publish contact details so that bugs and issues can be reported immediately, and must be transparent about timings of security updates.
It is hoped the new measures will help give customers confidence in buying and using products at a time when consumers and businesses have come under attack from hackers, that aim to steal personal data, at a soaring rate.
—
Icelandic SA: Municipality of Reykjavík fined ISK 2,000,000 for the use of Google Workspace for Education
After 2022, the Icelandic SA decided to investigate the use of cloud services in elementary schools. The investigation was limited to the use of Google Workspace for Education, in the five largest municipalities in Iceland.
The Icelandic SA’s investigation revealed that students’ personal data were not only processed on the instructions of the municipality of Reykjavík, but also for Google’s own purposes. The municipality failed to demonstrate how further processing by Google was compatible with the purpose for which students’ personal data were initially collected.
The municipality of Reykjavík infringed multiple Articles of the GDPR with its use of Google’s educational system:
– Failure to ensure and to be able to demonstrate that processing is performed in accordance with the Regulation (Articles 5, 24(1) & 28(1) GDPR)
– Data processing agreement did not meet the minimum requirements (Article 28(3)(a) GDPR)
– Failure to ensure that data is not further processed in a manner that is incompatible with the initial purpose (Articles 5(1)(b) & 6(4) GDPR)
– Failure to ensure data minimisation (Articles 5(1)(c) & 25 GDPR)
– Data protection impact assessment did not meet the minimum requirements (Article 35(7) GDPR)
– Data transferred to the United States without appropriate safeguards (Articles 44 & 46 GDPR)
The municipality of Reykjavik was fined EUR 13,270 (ISK 2,000,000) and ordered to bring their processing in compliance with regulations.
—
Friday, 26th April 2024
The Advocate General of the CJEU issued an opinion claiming that any personal information revealed on social media is, under GDPR law, Article 9(2)(e), ‘manifestly made public,’ and therefore does not receive the same standard of protection as other personal data. The manifestly made public personal data, may not, however, be used by social media companies for targeted ads. The clarification follow an incident whereby Max Schrems filed a complaint against Meta for providing targeted ads based on his sexuality.
—
The Information Commissioner’s Office (ICO) has fined Cardiff-based Outsource Strategies Ltd (OSL) £240,000 and London-based Dr Telemarketing Ltd (DRT) £100,000 after the companies made a total of almost 1.43 million calls to people on the UK’s “do not call” register.
People who filed complaints said the callers were aggressive and used high-pressure sales tactics to persuade them to sign up for products. The ICO investigation also found evidence that both companies were specifically targeting elderly and vulnerable people.
Read More
—
In an effort to made health data more accessible and easy to transfer for people who move within the EU area, the European Parliament has approved an EU-wide health data bank. Patients’ data will be uploaded to an electronic health record, with their medical history and any medical imagery on the portal. Some of the information will be anonymised to aid in health research, and the EU reassures citizens that robust data protection mechanisms are in place.
—
When Kylie Gilroy hit the send button on her rock festival refund application, her screen began to display the names, email addresses, phone numbers and bank account details of more than 100 people. Just an hour later, there were details of over 500 people, including her own.
Another individual, Jenny, received a disturbed email from a stranger who claimed that he had found her personal and bank details online. She quickly removed all funds from that account.
It would take the organisers of Pandemonium Rocks music festival more than 90 minutes to realise what had happened. Later they would post a Facebook message to publicly announce that a data breach had taken place. According to sources, the organisers, One World Entertainment, have not reported the breach to the Office of the Australian Information Commissioner (OAIC) yet.
—
The EDPB has released its Annual Report today for 2023. Here are some of the significant milestones:
—
The US Senate voted 60-34 in favour of extending the Section 702 of the Foreign Intelligence Surveillance Act (FISA) after it was due to expire of the 19th of April 2024. The Act gives power to the US government to monitor the behaviour and communication of non-Americans outside of the USA. It also gives permission to survey the communication happening between these individuals and Americans without a warrant.
The Act was criticised heavily by civil liberty advocates and privacy enthusiasts for a long time. During the discussions about the renewal of this Act, parties from both sides argued that the spying powers granted by this legislation are too broad and demanded protections for Americans’ civil liberties and privacy. FISA supporters maintained that even a brief lapse could have a detrimental effect on preventing terrorism and ensuring national security. The six amendments to the Act, aimed to insure more privacy for individuals, failed to come through. Now the legislation has to be signed by Joe Biden to become law.
Many privacy advocates are concerned that this law, aimed at combating terrorism and various national threats, would be used to spy on Americans and violate their human right to privacy.
—
Friday, 19th April 2024
As the technological landscape becomes more and more innovative, consumer neurotechnologies are becoming increasingly prevalent. Examples include headsets that scan users’ brain signals to provide personal meditation services or to provide users with better matches while they are scrolling through dating apps. Companies have access to the records of people’s brain activity by monitoring the electrical signals that give insight into the thoughts, feelings and intentions of users.
Republican Senator Mark Baisley sponsored a bill that aims to classify neural information as ‘sensitive personal information’, and thus to be covered by the Colorado Privacy Act. The law is aimed at consumer brain technologies – sensitive patient brain data gathered in clinical settings are protected by federal health law already. Under the new law, neural data will be treated the same way as fingerprints, facial images and other biometric data.
There is ample worry about neural tech companies having unregulated access to users’ brain data, being allowed to store it indefinitely and sell it on to third parties. “We’ve never seen anything with this power before — to identify, codify people and bias against people based on their brain waves and other neural information,” said Sean Pauzauskie, a member of the board of directors of the Colorado Medical Society.
The neurotechnology industry is poised to expand rapidly as companies like Meta, Apple and Snapchat become more involved. Investments in this technology are substantial – up to $30 billion in 2021 – and we can only expect them to grow. This expanse calls for quick legislative action to protect the right of privacy for individuals personal brain activity.
—
On the 17th April, the EDPB issued a long-awaited opinion on Meta’s Pay-or-Okay model. Since the binding decision imposed on Meta in late 2023, they were forced to change how they collect data. Meta proposed to install a model whereby people consent to their data being used, or if they do not, they pay a monthly subscription fee.
The Pay-or-Okay model has been widely critiqued by privacy experts, most notably by Max Schrems. The EDPB stated that ‘large platforms’ that present users with a binary choice between having their data processed illegally or paying a fee do not comply with GDPR, ‘in most cases’.
There has been some critique of the released statement, mainly due to vagueness of the expert opinion.
—
Cerebral provides online mental health related services on an automatic basis, meaning consumers are automatically charged until they cancel. Despite promising to “cancel anytime,” this time-consuming procedure, often taking several days of negotiation with company staff, and charging customers for the cancellation, often left users frustrated. In April 2020, the company created an easy cancellation button, but this was removed after only two weeks, under the former CEO Kyle Robertson, after cancellations began to rise.
Additionally, Cerebral failed to disclose that it would be sharing consumers’ sensitive data with third parties for advertising and hid disclaimers about its data sharing practices in dense privacy policies. The company claimed in many instances that it would not share users’ data for marketing purposes without obtaining consumers’ consent first. Allegedly, these practices originated under the direction of Mr Robertson and continued after his tenure.
Cerebral provided sensitive information of nearly 3.2 million consumers to third parties such as LinkedIn, Snapchat and TikTok by integrating various tracking tools. Through the use of these, Cerebral gave third parties personal data about its users including names, medical and prescription histories, home and email addresses, phone numbers, birthdates, demographic information, IP addresses, and pharmacy and health insurance information.
Promotional postcards were also sent out to over 6,000 patients, revealing names, diagnoses and treatment to anyone who saw the postcards. The company failed to restrict access of former employees to medical records, and failed to implement adequate policies and training for staff on how to handle user data.
In response to all this, the FTC has imposed a fine of $7 million on Cerebral, and issued a series of corrections that the company has to implement, which includes orders to stop all of the illegal activities mentioned above. This proposed order must still be approved by a federal court before it can go into effect.
—
On the 15th April 2024, the ICO published a new guide for health and social care organisations. The guide details how these entities can be more transparent with people about how their personal data is being used.
Under the GDPR, people have the right to be informed – to know what information about them is being collected and how it will be used. Since organisations in the health and social care sector routinely collect highly intimate and sensitive information, “being transparent is essential to building public trust in health and social care services,” says Anne Russell, Head of Regulatory Policy Projects.
This guide will help organisations understand the definition of transparency as well as provide concrete steps to develop effective transparency information. The very first Principle of GDPR is Transparency, so this topic is extremely important for organisations to master.
—
Friday, 12th April 2024
The Post Office IT scandal has been main news all over the UK for a while now and particularly, over the last two days.
For over 20 years, the postal company had utilised an accounting system called Horizon, developed by a Japanese company Fujitsu. Since the very beginning of using Horizon, employees complained over bugs and inconsistencies in the accounting – the Post Office, however, ignored and squashed all such comments.
When the system began to show that thousands of dollars were mysteriously disappearing from sub-postmasters (independent individuals who run different Post Office branches) accounting balances, the company began to prosecute, and people were imprisoned for alleged fraud. Essentially, money was simply disappearing from the system, and sub-postmasters had no way of proving the fact that they did not, in fact, steal the money. Many lives were ruined, marriages collapsed and numerous financial destruction ensued as sub-postmasters tried to repay the ‘stolen’ money from their own pockets.
This case proves the importance of complying with Security Principles and Individuals’ Fundamental Rights. (Right to Privacy and Right to Data Protection). Article 28 of the GDPR requires that controllers (in this case the Post Office) should carry out due diligence on processors (in this case Horizon) to make sure that their technical and organisational security measure in place are appropriate. The Post Office’ technology, produced unreliable and incorrect records, thus lacked #integrity, which is a fundamental Security Principle.
Furthermore, the Post Office failed in respecting the Rights and Freedoms of the sub-postmasters, as the way in which the investigations were carried out was unfair and biased.
—
According to the Advocate General Pikamäe, the scope and substance of a corrective measure undertaken by a Supervisory Authority needs to be made on an individual basis, and decided case by case. A data subject cannot demand that the SA issue a decision or a specific fine.
This follows from a complaint made by a data subject in Germany, whose data was accessed unlawfully by an employee at a savings bank. The SA recognised that this was a violation of GDPR, but concluded that there were no grounds for action against the savings bank, which had already taken corrective measures against the employee. The data subject challenged this decision in a German court, after which it was referred to the CJEU.
The opinion of the CJEU representative is that the data subject does not have “the right to require the adoption of a particular measure,” which would include any fines. While this opinion expressed by the Advocate General is legitimate and we stand in agreement with it, the final decision in the case of the German data subject has not been announced yet. We eagerly await the outcome of this case.
—
The US Consumer Financial Protection Bureau (CFPB) released a report recently that raised the issue of the lack of privacy safeguards in virtual reality games. Most gaming companies track the IP address and location data of their users, allowing them to monitor the daily routines of players to form a more accurate profile of their players.
The gaming platforms also allow for users to buy in-game currencies or other virtual tokens/items with real money. Because of lax regulation and the rising value of in-game assets, this area has become a hub for scams, phishing attempts and account thefts. Hackers can break into accounts to access the virtual money, and gaming companies have surprisingly low numbers of complaints being resolved in this area, as the financial wealth in a game is the player’s own responsibility. Apart from financial losses, the data gathered by these companies – and then resold to other companies – includes information about the interactions with other players and how users respond to personalized incentives. The risk that this poses – especially when the players are children – is vast, leading not only to financial losses, online surveillance and monitoring, but also “risks of addiction,” says the CFPB Director Rohit Chopra.
Let us hope that the belated but steady progress the US is making in its privacy laws will positively impact and protect virtual game players too.
—
A report released by NordVPN on Monday states that they have found billions of ad-tracking cookies that were leaked on the dark web. Out of a total of 54 billion, more than 2.5 billion were from Google, 692 million from Youtube and approximately 500 million from Microsoft. The US ranked in fourth place in terms of the number of leaked cookies, outnumbered by Brazil, Indonesia and Vietnam.
The danger of this is the fact that more than 10 billion cookies had assigned IDs – used by websites to identify specific users to provide tailored services. Included were an additional 154 million authentication and 37 million login cookies.
The ramifications of this are vast – with names, email addresses, cities, passwords and addresses leaked the risks posed to individuals are huge. If a cookie is stolen and “is still active, an attacker can potentially login into your account without having your password,” says Adrianus Warmenhoven, security advisor board member at NordVPN.
Friday, 5th April 2024
Microsoft is rolling out “prompt shields” to curate what its AI system, Copilot, can see when a request is put in by a user. This is done for the purpose of preventing harmful content and the AI performing any potentially dangerous tasks that users might request.
These “prompt shields” are designed “to spot suspicious inputs and block them in real time,” says Sarah Bird, Microsoft’s chief product officer. Examples include hackers requesting the AI system to do something malicious, such as stealing user information or hijacking a system. These shield also aim to prevent AI generating weird or harmful responses, as in the case where a suggestion was made that a user commit suicide. Reports of bizarre interactions between users and these generative chatbots have been frequent and one current solution is the instillment of filtering prompt shields by Microsoft.
We can expect more of these technologies to surface, and various companies to follow in Microsoft’s footsteps, as the question about AI safety and security becomes more pertinent.
—
The UK was admitted as an associate member by the Global Cooperation Arrangement for Privacy Enforcement (Global CAPE) on June 2023, and today, 4th of April 2024, they signed a new international agreement with each other. The aim is to cooperate on data protection and enforcement during cross-border data transfers.
To ensure that the personal information of UK residents remains safe as it travels across different countries, the ICO will provide assistance with investigations as part of the Global CAPE. The member countries in this agreement include Singapore, the Philippines, Mexico, Japan, Australia, the Republic of Korea, Chinese Taipei and the USA. The arrangement will allow the ICO to share information with these countries without signing separate privacy agreements with each state.
“The ICO’s association with the Global CAPE is an important step in strengthening our relationship with other countries so we can work together to tackle global data protection and privacy issues.”
—
In a statement released today, the 3rd of April 2024, the Information Commissioner’s Office (ICO) calls on social media companies to take into account, and improve upon, their data protection practices that relate to children.
The new Children’s Code of Practice will have a few focus points for the 2024-25 period, some of which include:
– Default privacy and location settings: these have to be turned off by default, since tracking children’s location data carries additional risks
– Profiling children for advertisements: these setting should also be off by default, as seeing targeted ads may lead kids to make unguarded and unwise financial decisions which they are not equipped to handle by themselves
– Using recommender systems: utilising children’s data to create behavioural profiles and curate content feeds has numerous risks as children then spend more and more time online and become encouraged to share personal information about themselves in excessive manner
– Using data from children under 13 years old: this is forbidden, and companies need to ensure that parental consent is given instead.
The ICO will work closely with UK regulators like Ofcom and social media platforms to ensure that children’s data is protected as much as possible.
—
In a ruling made on the 27th of March, the Court of Justice of the European Union (CJEU) ruled that Amazon must publish details about all its advertisements in Europe.
For context, the EU’s Digital Services Act (DSA), introduced in 2022, aims to make the digital market more transparent and competitive by forcing tech giants, designated as “gatekeepers”, which include Amazon, Apple, Meta, Microsoft, ByteDance and Google, to comply with new rules.
For Amazon, this means sharing details of the advertisements, the campaigns content and profits made from individual ads they host, via a public library. A platform such as this would offer invaluable insights for other companies into how the ad campaigns are structured and what monetisation structure Amazon implements. Amazon contested this and asked for a pau se of the DSA requirement in producing this ads library. While this request was initially granted by the General Court, this permission was suspended by the CJEU on the 27th of March 2024.
“The interests defended by the EU legislature prevail, in the present case, over Amazon’s material interests, with the result that the balancing of interests weighs in favour of rejecting the request for suspension,” said a spokesman of the CJEU.
Amazon will now have to comply in publishing all the ads details and following all the other requirements set forth by the DSA.