Norway pulls its coronavirus contact-tracing app after privacy watchdog’s warning

One of the first national coronavirus contact-tracing apps launched in Europe is being suspended in Norway after the country’s data protection authority raised concerns that the software, called ‘Smittestopp’, poses a disproportionate threat to user privacy — including by continuously uploading people’s location data.

Following a warning from the watchdog on Friday, the Norwegian Institute of Public Health (FHI) said today it will stop uploading data from tomorrow — ahead of a June 23 deadline by which the DPA had asked for use of the app to be suspended so that changes could be made. It added that it disagrees with the watchdog’s assessment but will nonetheless delete user data “as soon as possible”.

As of June 3, the app had been downloaded 1.6M times and had around 600,000 active users, according to the FHI — the latter figure being just over 10% of Norway’s population, or around 14% of the population aged over 16.

“We do not agree with the Data Protection Agency’s assessment, but now we have to delete all data and pause work as a result of the notification,” said FHI director Camilla Stoltenberg in a statement [translated via Google Translate]. “With this, we weaken an important part of our preparedness for increased spread of infection, because we lose time in developing and testing the app. At the same time, we have a reduced ability to fight the spread of infection that is ongoing.

“The pandemic is not over. We have no immunity in the population, no vaccine, and no effective treatment. Without the Smittestopp app, we will be less equipped to prevent new outbreaks that may occur locally or nationally.”

Europe’s data protection framework allows personal data to be processed for a pressing public health purpose — and Norway’s DPA had earlier agreed that an app could be a suitable tool to combat the coronavirus emergency. However, the agency was not actively consulted during the app’s development, and it had expressed reservations, saying it would closely monitor developments.

The developments that led the watchdog to intervene are a low contagion rate in the country and a low download rate for the app — meaning it now takes the view that Smittestopp is no longer a proportionate intervention.

“We believe that FHI has not demonstrated that it is strictly necessary to use location data for infection detection,” said Bjørn Erik Thon, director of Norway’s DPA, in a statement posted on its website today.

Unlike many of the national coronavirus apps in Europe — which use only Bluetooth signals to estimate user proximity as a means of calculating exposure risk to COVID-19 — Norway’s app also tracks real-time GPS location data.

The country took the decision to track GPS before the European Data Protection Board — which is made up of representatives of DPAs across the EU — had put out guidelines specifying that contact-tracing apps “do not require tracking the location of individual users” and suggesting the use of “proximity data” instead.

Additionally, Norway opted for a centralized app architecture, meaning user data is uploaded to a central server controlled by the health authority, instead of being stored locally on device — as is the case with decentralized coronavirus contact-tracing apps, such as the app being developed by Germany and one launched recently in Italy. (Apple and Google’s exposure notification API also exclusively supports decentralized app architectures.)
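To make the distinction concrete, here is a minimal Python sketch of the decentralized model in principle: phones generate random daily keys, broadcast short-lived rolling IDs derived from them, and do all matching locally. It is a simplification for illustration only, not the actual Apple and Google Exposure Notification protocol, which uses AES-based key derivation and encrypted Bluetooth metadata.

```python
import os
import hashlib

def daily_key() -> bytes:
    """Each phone generates a fresh random secret key for the day."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived pseudonymous ID to broadcast over Bluetooth.
    Without the key, observers cannot link successive IDs to one phone."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts rolling IDs; phone B stores whatever it hears
# locally, on the device. No server is involved at this stage.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in range(3)}

# If A later tests positive, only A's daily keys are published by the
# health authority. B re-derives candidate IDs locally and checks for
# matches; no location data or contact graph ever reaches a server.
published_keys = [key_a]
exposed = any(
    rolling_id(k, i) in heard_by_b
    for k in published_keys
    for i in range(144)  # e.g. one rolling ID per 10-minute interval
)
print("Exposure detected:", exposed)  # True in this toy run
```

In the centralized design Norway chose, the equivalent of the matching step happens on the health authority’s server, which is why the data has to leave the device at all.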

The FHI had been using what it describes as “anonymised” user data from the app to track movement patterns around the country — saying the data would be used to monitor whether restrictions intended to limit the spread of the virus (such as social distancing) were working as intended.

The DPA said today that it’s also unhappy that users of the app have no ability to grant permission only for coronavirus contact tracing — they must also agree to their personal information being used for research purposes, contravening the EU data protection principle of purpose limitation.

Another of its objections concerns how the app data was being anonymized and aggregated by the FHI — location data being notoriously difficult to robustly anonymize.
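A toy example, using entirely made-up data, shows why. Even with names stripped and identifiers replaced by pseudonyms, a person’s two most-visited places, typically home and work, are close to unique to them and can often be matched against auxiliary records:

```python
# Entirely synthetic data: pseudonymous location traces, each place
# given as a coordinate string with a visit count.
pseudonymous_traces = {
    "user_381": [("59.91,10.75", 41), ("59.92,10.73", 38), ("59.95,10.80", 2)],
    "user_922": [("60.39,5.32", 55), ("60.38,5.33", 29)],
}

# A hypothetical auxiliary dataset, e.g. an address register or
# social-media check-ins, keyed by (home, work) location pairs.
public_directory = {
    ("59.91,10.75", "59.92,10.73"): "named Oslo resident from address register",
}

for pseudonym, visits in pseudonymous_traces.items():
    # The two most frequently visited places act as a fingerprint.
    top_two = tuple(place for place, _ in sorted(visits, key=lambda v: -v[1])[:2])
    if top_two in public_directory:
        print(f"{pseudonym} re-identified as: {public_directory[top_two]}")
```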

“It is FHI’s choice that they stop all data collection and storage right away. Now I hope they use the time until June 23 well, both to document the usefulness of the app and to make other necessary changes so that they can resume use,” said Thon. “The reason for the notification is the [DPA]’s assessment that Smittestopp can no longer be regarded as a proportionate encroachment on users’ basic privacy rights.”

“Smittestopp is a very privacy-intensive measure, even in an exceptional situation where society is trying to fight a pandemic. We believe that the utility is not present the way it is today, and that is how the technical solution is designed and working now,” he also said.

Commenting on the developments, Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law, suggested the Norwegian DPA’s decision could lead to similar bans on contact-tracing apps elsewhere in Europe, should contagion levels drop to a similarly low level. (And at this stage, COVID-19 rates are continuing to decline across the region.)

“To my knowledge, this is the first instance in which a European DPA has imposed a ban on a contact-tracing app already in use in light of national developments regarding contagion levels,” he told us. “It is thus possible that other European DPAs will impose similar bans in the future and demand that contact-tracing apps be changed as soon as contagion levels substantially decrease also in other parts of Europe. Norway has currently one of the lowest contagion levels in Europe.”

“The ban was not only related to the app’s use of GPS data. The latter was probably the most important feature of the app that the Norwegian DPA has criticised, but not the only one to be seen as problematic,” Tosoni added. “Another element that was criticised by the Norwegian DPA was that the app’s users are currently unable to consent only to the use of their data for infection tracking purposes without also consenting to their data being used for research purposes.

“The DPA also questioned the accuracy of the app in light of the current low level of contagion in Norway, and criticised the absence of an appropriate solution for aggregating and anonymising the data collected.”

Tosoni said the watchdog is expected to reassess the app in the next few weeks, including assessing any changes proposed by the developer, but he takes the view that it’s unlikely the DPA would deem a switch to Bluetooth-only tracing sufficient to make the app’s use of personal data proportionate.

Even so, the FHI said today it hopes users will suspend the app (by disabling its access to GPS and Bluetooth in settings), rather than deleting it entirely — so the software could be more easily reactivated in future should it be deemed necessary and legal.

Teaching AIs forgetfulness could make them better at their jobs

While modern machine learning systems act with a semblance of artificial intelligence, the truth is they don’t “understand” any of the data they work with — which in turn means they tend to store even trivial items forever. Facebook researchers have proposed structured forgetfulness as a way for these models to clear the decks a bit, improving their performance and inching that much closer to how a human mind works.

The researchers describe the problem by explaining how humans and AI agents might approach a similar problem.

Say there are ten doors of various colors. You’re asked to go through the yellow one; you do so, and a few minutes later you’ve forgotten the colors of the other doors — because it was never important that two were red, one plaid, two walnut, and so on, only that they weren’t yellow and that the one you chose was. Your brain discarded that information almost immediately.

But an AI might very well have kept the colors and locations of the other nine doors in its memory. That’s because it doesn’t understand the problem or the data intuitively — so it keeps all the information it used to make its decision.

This isn’t an issue when you’re talking about relatively small amounts of data, but machine learning algorithms, especially during training, now routinely handle millions of data points and ingest terabytes of imagery or language. And because they’re built to constantly compare new data with their accrued knowledge, failing to forget unimportant things means they’re bogged down by constant references to pointless or outdated data points.

The solution hit upon by Facebook researchers is essentially to have the model — and wouldn’t we all like to have this ability — tell itself how long it needs to remember a piece of data when it first evaluates it.

[Animation showing ‘memories’ of an AI disappearing. Image credits: Facebook]

“Each individual memory is associated with a predicted expiration date, and the scale of the memory depends on the task,” explained Angela Fan, a Facebook AI researcher who worked on the Expire-Span paper. “The amount of time memories are held depends on the needs of the task—it can be for a few steps or until the task is complete.”

So in the case of the doors, the colors of the non-yellow doors are plenty important until you find the yellow one. At that point it’s safe to forget the rest, though of course depending on how many other doors need to be checked, the memory could be held for various amounts of time. (A more realistic example might be forgetting faces that aren’t the one the system is looking for, once it finds it.)

Analyzing a long piece of text, the memory of certain words or phrases might matter until the end of a sentence, a paragraph, or longer — it depends on whether the agent is trying to determine who’s speaking, what chapter the sentence belongs to, or what genre the story is.
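Mechanically, the Expire-Span paper describes this as a learned, differentiable mask over the model’s memories: each hidden state predicts its own lifetime, and attention to a memory ramps down to zero once its age exceeds that predicted span. Here is a minimal PyTorch sketch of that idea; the dimensions, ramp length, and module name are illustrative rather than the paper’s exact configuration.

```python
import torch
import torch.nn as nn

class ExpireSpanMask(nn.Module):
    """Each memory slot predicts how long it should live; attention to a
    memory fades out (softly, so it stays differentiable) past that age."""

    def __init__(self, hidden_dim: int, max_span: int = 1024, ramp: int = 32):
        super().__init__()
        self.span_predictor = nn.Linear(hidden_dim, 1)
        self.max_span = max_span
        self.ramp = ramp  # soft linear ramp so the mask has a gradient

    def forward(self, memories: torch.Tensor) -> torch.Tensor:
        # memories: (seq_len, hidden_dim), one hidden state per timestep.
        seq_len = memories.size(0)
        # Predicted lifetime in [0, max_span] for each memory.
        spans = self.max_span * torch.sigmoid(self.span_predictor(memories)).squeeze(-1)
        positions = torch.arange(seq_len, dtype=memories.dtype, device=memories.device)
        # age[t, i] = how old memory i is when query t attends to it.
        age = positions.unsqueeze(1) - positions.unsqueeze(0)
        # Mask is 1 while age <= span, then ramps linearly down to 0.
        mask = ((spans.unsqueeze(0) - age) / self.ramp + 1.0).clamp(0.0, 1.0)
        # Future positions are never visible in a causal model.
        return mask * (age >= 0)

# A query at step t multiplies its attention weights by mask[t]; memories
# whose mask row has gone fully to zero can simply be dropped, shrinking
# the state the model carries forward.
mask = ExpireSpanMask(hidden_dim=64)(torch.randn(10, 64))
print(mask.shape)  # torch.Size([10, 10])
```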

This improves performance because, at the end, there’s simply less information for the model to sort through. Without such a mechanism, the system doesn’t know whether the other doors might be important later, so that information is kept ready at hand — increasing the size and decreasing the speed of the model.

Fan said the models trained using Expire-Span performed better and were more efficient, taking up less memory and compute time. That matters during training and testing, which can consume thousands of hours of processing — meaning even a small improvement is considerable — but also at the end-user level, where the same task takes less power and happens faster. Suddenly it makes sense to perform an operation on a photo live rather than after the fact.

Though being able to forget does in some ways bring AI processes closer to human cognition, it’s still nowhere near the intuitive and subtle ways our minds operate. Of course, being able to pick what to remember and for how long is a major advantage over those of us for whom those parameters are chosen seemingly at random.

What The Conflict With Israel Looks Like To 2 Palestinians

NPR’s Steve Inskeep talks to Omar Shaban, founder of a Gaza-based think tank, and Palestinian lawyer Diana Buttu, about how this cycle of Palestinian-Israeli violence plays out in their neighborhoods.

Echelon exposed riders’ account data, thanks to a leaky API

[Image credits: Echelon (stock image)]

Peloton wasn’t the only at-home workout giant exposing private account data. Rival exercise giant Echelon also had a leaky API that let virtually anyone access riders’ account information.

Fitness technology company Echelon, like Peloton, offers a range of workout hardware — bikes, rowers, and a treadmill — as a cheaper alternative for members to exercise at home. Its app also lets members join virtual classes without the need for workout equipment.

But Jan Masters, a security researcher at Pen Test Partners, found that Echelon’s API allowed him to access the account data — including name, city, age, sex, phone number, weight, birthday, and workout statistics and history — of any other member in a live or pre-recorded class. The API also disclosed some information about members’ workout equipment, such as its serial number.

Masters, if you recall, found a similar bug with Peloton’s API, which let him make unauthenticated requests and pull private user account data directly from Peloton’s servers without the server ever checking to make sure he (or anyone else) was allowed to request it.

Echelon’s API allows its members’ devices and apps to talk with Echelon’s servers over the internet. The API was supposed to check if the member’s device was authorized to pull user data by checking for an authorization token. But Masters said the token wasn’t needed to request data.

Masters also found another bug that allowed members to pull data on any other member because of weak access controls on the API. Masters said this bug made it easy to enumerate user account IDs and scrape account data from Echelon’s servers. Facebook, LinkedIn, Peloton and Clubhouse have all fallen victim to scraping attacks that abuse access to APIs to pull in data about users on their platforms.
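For contrast, here is a hypothetical sketch of the two checks the researchers describe as missing: rejecting requests that lack a valid token, and refusing to serve one member’s records to another. The endpoint path, token scheme, and data store are invented for illustration and are not Echelon’s actual API.

```python
from flask import Flask, abort, request

app = Flask(__name__)

TOKENS = {"secret-token-abc": "user_17"}  # token -> authenticated user
ACCOUNTS = {"user_17": {"name": "A. Rider", "city": "Oslo"}}

def authenticated_user() -> str:
    """Check 1: no valid authorization token, no data at all."""
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    user = TOKENS.get(token)
    if user is None:
        abort(401)
    return user

@app.get("/api/users/<user_id>")
def get_user(user_id: str):
    """Check 2: object-level authorization. Even a logged-in member may
    only read their own record, which blocks ID-enumeration scraping."""
    caller = authenticated_user()
    if caller != user_id:
        abort(403)
    return ACCOUNTS[user_id]
```

Random, non-sequential account IDs and rate limiting make bulk scraping harder still, but neither is a substitute for the authorization checks above.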

Ken Munro, founder of Pen Test Partners, disclosed the vulnerabilities to Echelon on January 20 in a Twitter direct message, since the company doesn’t have a public-facing vulnerability disclosure process (which it says is now “under review”). But the researchers did not hear back during the 90 days after the report was submitted, the standard amount of time security researchers give companies to fix flaws before their details are made public.

TechCrunch asked Echelon for comment, and was told that the security flaws identified by Masters — which he wrote up in a blog post — were fixed in January.

“We hired an outside service to perform a penetration test of systems and identify vulnerabilities. We have taken appropriate actions to correct these, most of which were implemented by January 21, 2021. However, Echelon’s position is that the User ID is not PII [personally identifiable information],” said Chris Martin, Echelon’s chief information security officer, in an email.

Echelon did not name the outside security company. And while the company said it keeps detailed logs, it did not say whether it had found any evidence of malicious exploitation.

But Munro disputed the company’s claim of when it fixed the vulnerabilities, and provided TechCrunch with evidence that one of the vulnerabilities was not fixed until at least mid-April, and another vulnerability could still be exploited as recently as this week.

When asked for clarity, Echelon did not address the discrepancies. “[The security flaws] have been remediated,” Martin reiterated.

Echelon also confirmed it fixed a bug that allowed users under the age of 13 to sign up. Many companies block access to children under the age of 13 to avoid complying with the Children’s Online Privacy Protection Act, or COPPA, a U.S. law that puts strict rules on what data companies can collect on children. TechCrunch was able to create an Echelon account this week with an age less than 13, despite the page saying: “Minimum age of use is 13 years old.”
