FTC fines Amazon $25 million for violating child privacy laws
The U.S. Federal Trade Commission has fined Amazon $25 million for violating the Children’s Online Privacy Protection Act (COPPA). The FTC stated that Amazon’s Alexa service unlawfully retained children’s data, such as voice recordings. In addition, parents were misled into believing that Alexa had deleted their children’s data upon request when, in reality, the data was not completely deleted. The FTC has also ordered Amazon to adopt more stringent privacy practices. Amazon has denied the accusations of unlawful practices, stating that its products are built with privacy in mind.
Kuwait aims to collect biometric data from all citizens, residents and visitors
Kuwait will collect biometric data, including face and fingerprint scans, from all citizens over the age of 18. The country also aims to collect biometric data from all visitors entering the country, through devices installed at points of entry. The data will be shared with the Criminal Evidence Department and with specified countries to assess security threats. The government aims to implement these changes within a year.
20 NHS Trust websites found to share personal data with Facebook
An investigation carried out by the Observer discovered that NHS Trusts are sharing personal medical data with Facebook. The data shared included medical conditions and treatments that could be used by Facebook for targeted advertising. The report found that 20 NHS Trust websites collected browsing information and shared it with the social media platform. As a result of the investigation, 17 of these Trusts have stopped tracking user information. Meta has stated that it has informed the NHS Trusts of its policy against businesses sending it health data.
The UK cracks down on AI companies and their use of personal data
The UK’s Information Commissioner’s Office has warned that AI platforms will be fined if they are found to process user data without proper consent. The warning comes amid growing concern that AI platforms are scraping large amounts of data without user consent. Regulators are looking to crack down on AI companies and enforce data protection laws more stringently so that artificial intelligence can be used safely. Ofcom is joining these efforts to ensure that new technology is trustworthy and commands public confidence.
Tesla employee and customer data revealed in massive data leak
Tesla has been accused of inadequately protecting the data of customers and employees. Handelsblatt, a German news outlet, reported that a whistle-blower leaked 100 gigabytes of personal data, including phone numbers, email addresses, and employee names. If the leaked documents are verified, the scale of the breach could see Tesla face a fine of up to 4% of its annual turnover. A legal representative for Tesla has stated that the company plans to pursue legal action against the ex-employee responsible for the leak.
UK’s ICO has released new Subject Access Request guidance for employers
The UK’s Information Commissioner’s Office has released new subject access request (SAR) guidance to help employers avoid common mistakes when handling SARs. Many employers do not understand what is required of them when a request is made, or how important such requests are. Last month the ICO reprimanded Plymouth City Council and Norfolk County Council for failing to respond to SARs within the deadline. The guidance explains the significance of SARs, how long a company has to respond, and the fact that requests can be submitted either formally or informally.
Google sued by Dutch consumer group for violating user privacy
The Dutch consumer groups Consumentenbond and Stichting Bescherming Privacybelangen are filing a class-action lawsuit against Google. The groups have accused Google of collecting and monetizing user location and online activity data even when users had opted out. The platform was further accused of illegally surveilling Dutch citizens by selling the acquired data to third parties outside of Europe. The groups are demanding financial settlements for the victims of the privacy violation and have urged Dutch citizens to sign up for the lawsuit.
TikTok sues Montana to lift the state TikTok ban
TikTok will sue Montana in an attempt to reverse the state-wide TikTok ban. From January 2024, it will be illegal for app stores to offer the popular app to citizens of Montana, although users who already have the app will still be able to use it. TikTok argues that the ban is unconstitutional because it violates citizens’ right to freedom of speech. The company also argues that the ban is an example of the government over-involving itself with user data. The state maintains that the measures are necessary because of fears of Chinese government surveillance on the app.
Meta fined €1.2bn for unlawful data transfer
Meta, the parent company of Facebook, has been fined €1.2bn by the Irish Data Protection Commission for unlawfully transferring European Union citizens’ data to the United States. Facebook has been given five months to stop data transfers from the EU to the U.S. However, Meta has stated that without the use of standard contractual clauses (SCCs) it will not be able to legally provide its platforms to EU users. Facebook will be able to transfer data legally once the new Trans-Atlantic Data Privacy Framework is agreed. Meta has said it will appeal the decision, arguing that it is being unfairly singled out when many other companies use SCCs to transfer data.
Google forced to pay $40m for unlawfully collecting user location data
Google has been ordered to pay $39.9 million to Washington State for continuing to collect user location data from users who had opted out. It was found that users were led to believe they had control over what data Google could collect; in reality, the company continued to collect, store, and profit from user location data against their wishes. The company has also been ordered to introduce reforms that increase transparency about how it collects location data.
Meta likely to face massive privacy fine
Politico has reported that Meta is due to face a record-breaking fine from the Irish Data Protection Commission. Meta’s platform Facebook was found to have breached privacy laws when transferring European users’ data to the U.S. Facebook will likely be instructed to stop using standard contractual clauses to transfer user data, meaning the social media platform may become unavailable to European citizens. The size of the penalty is not yet known but is said to exceed the €746 million fine Amazon paid for a similar privacy breach. The decision will be announced by the DPC on Monday.
EDPB has accepted finalized Guidelines on the use of facial recognition technology
The European Data Protection Board has accepted the final version of its Guidelines on when and how facial recognition technology can be used in law enforcement. The Guidelines emphasize that facial recognition technology should only be used in accordance with the Law Enforcement Directive and that all use of the technology should meet the standards of necessity and proportionality, in line with the Charter of Fundamental Rights.
French data watchdog reveals plans to address AI concerns
The French data protection authority (CNIL) has set out a four-step action plan to address privacy concerns around ChatGPT, generative AI, and AI in general. The plan details how AI systems can be used in a way that respects individuals’ privacy rights: by understanding the technology and its ecosystems, and by guiding and controlling AI systems. CNIL has stated that the plan will make it easier to implement the European AI Act in France. The French data watchdog is influential among data protection authorities across Europe, so its approach to generative AI could shape how the rest of Europe addresses ChatGPT and other generative AI platforms.
75% of GDPR decisions made by Irish DPC are overruled
The Irish Council for Civil Liberties (ICCL) has reported that, over the past five years, 75% of Irish DPC rulings on cross-border complaints have been overruled. The European Data Protection Board (EDPB) instructed that tougher action should be taken in these cases; however, according to the ICCL, the DPC opts for the gentler options available under Irish law in 83% of cross-border complaints. It has been suggested that the DPC is rendered powerless because many of the big tech companies that violate the GDPR are headquartered in Ireland. Reprimands made up more than half of the enforcement measures taken by the Irish data watchdog, suggesting that the GDPR is not being enforced as it should be.
Toyota Japan found to have accidentally leaked customer data
The data of 2 million Toyota customers was exposed by a data leak that lasted over 10 years: an employee error left Toyota’s cloud system publicly accessible. Data such as car locations and identification numbers was publicly available; however, there have been no reports of suspicious or illegal activity. Nothing was in place to alert the company that user data had been exposed, but Toyota has since stated that it will introduce a system to continually monitor the company’s cloud settings and raise alerts. Toyota also stated it will train employees to handle data correctly. The incident has been reported to Japan’s Personal Information Protection Commission.
Facial recognition platform fined €5.2 million by CNIL
Clearview AI has been fined an additional €5.2 million on top of a €20 million fine previously issued by CNIL, for failing to comply with the French data protection authority. Last year, the tech company was fined 4% of its annual turnover for three significant GDPR breaches: it was found to have built its facial recognition feature using pictures scraped from the internet, a massive privacy violation in which the company unlawfully processed the data of millions of individuals. Clearview AI’s failure to pay that fine within the two-month deadline led to the additional €5.2 million penalty. Clearview AI maintains that it complies with all privacy laws because the data was collected from public sources.
Samsung restricts the use of generative AI tools among employees
The lack of privacy regulation governing ChatGPT has caused concern among governments and companies alike. Samsung has moved to limit the use of generative AI tools after a major privacy incident: an employee mistakenly entered company source code into ChatGPT, creating a possible security breach. A survey revealed that the majority of employees favor limiting ChatGPT, with 65% wary of security breaches when using the chatbot. Many financial institutions, as well as other tech companies, have also limited or outright banned the use of generative AI due to privacy risks.
Privacy practices you can implement to protect your data when using ChatGPT
Amid increasing privacy concerns regarding ChatGPT, the International Business Times has detailed how to best protect your data when using the chatbot and how to implement safe privacy practices. The article advises users to avoid sharing sensitive data and information with the chatbot and to use ChatGPT’s incognito mode via the platform’s ‘Chat History and Training’ option. Users should also verify the information the platform provides and avoid using third-party apps and plug-ins for the chatbot. These tips give users control over the information shared with ChatGPT and explain why they should protect their data.
Artificial Intelligence and machine learning are revolutionizing healthcare systems
The use of artificial intelligence and machine learning is significantly changing healthcare systems for the better. Increased use of the technology has led to innovations in diagnosing and treating illnesses, individualizing medicine, improving electronic health records, and enabling remote patient monitoring, thereby reducing hospital admissions. On a broader scale, AI has also proven invaluable for analyzing large amounts of data to identify causes of disease, enabling doctors and patients to take pre-emptive action. Despite the vast benefits of AI in healthcare systems, many concerns remain about the responsible use of AI and data privacy.
Senators reintroduce COPPA 2.0 bill to increase safety for minors online
The Children and Teens’ Online Privacy Protection Act 2.0 has been reintroduced with bipartisan support to combat rising mental health issues among children and teenagers. This version of COPPA would require companies to obtain consent from 13- to 16-year-olds before processing their data. The bill would also make it easier for the Federal Trade Commission to take enforcement action against companies that collect data from minors. However, critics argue that social media platforms themselves need to do more to tackle the misuse of children’s data and combat the growing mental health crisis.
European Court of Justice rules on compensation for data privacy breaches
The European Court of Justice has ruled on when a GDPR breach can give rise to compensation. The case before the ECJ concerned a claimant who argued that a non-consensual far-right political advertisement had caused him great upset, and who sought non-material damages. The ECJ took a restrictive stance on compensation for data breaches, stating that not every breach gives the claimant a right to compensation. The court set out three criteria that must be met: an infringement of the GDPR, material or non-material damage suffered, and a causal link between the infringement and the damage.
Tech companies oppose children’s safety bill
Consumer advocacy and children’s online safety groups are urging legislators to increase child protection online due to the negative impacts of social media on mental health. The children’s safety bill has been proposed to address and prevent social media addiction, protect minors from age-inappropriate content, and increase child privacy online. Demands for increased child protection legislation have been opposed by many ‘big tech’ companies that believe too many safeguards and restrictions will limit freedom of speech. NetChoice and the Chamber of Progress are among the tech industry groups opposed to the bill, arguing that stricter laws would create more legal challenges for companies that are actively trying to increase safety on their platforms.
FTC proposes to stop the monetization of children’s data
The Federal Trade Commission has proposed that Meta should no longer be allowed to monetize minors’ data. The proposal responds to the growing fear that social media is negatively impacting the mental health of children and teenagers, which has increased the desire for regulation to protect children’s well-being. The FTC has accused Meta of numerous privacy breaches and of a lack of transparency on its children’s platform, Messenger Kids. The proposal would extend to all Meta companies. Meta has been given 30 days to respond to the FTC’s proposal.
WhatsApp refuses to compromise user privacy
The UK government and WhatsApp have reached a standoff over end-to-end encryption. The Online Safety Bill seeks to give Ofcom the power to force WhatsApp, along with other companies, to find and identify sexually abusive messages. The CEO of WhatsApp has stated that the company will not agree to violate user privacy and would accept a potential ban in the UK. He further implied that the power to read people’s private messages, however legitimate the reason, could be used in other countries for ‘less legitimate’ purposes, with endless implications. Critics say this stance enables crime on WhatsApp with no accountability.
ChatGPT reinstated in Italy after satisfying privacy demands