Summary

Total Articles Found: 13

Top sources:

Top Keywords:

Top Authors:

Top Articles:

  • Skip the Surveillance By Opting Out of Face Recognition At Airports
  • The Government’s Indictment of Julian Assange Poses a Clear and Present Danger to Journalism, the Freedom of the Press, and Freedom of Speech
  • Amazon Ring Must End Its Dangerous Partnerships With Police
  • EU Court Again Rules That NSA Spying Makes U.S. Companies Inadequate for Privacy
  • New Bill Would Make Needed Steps Toward Curbing Mass Surveillance
  • Massachusetts Court Blocks Warrantless Access to Real-Time Cell Phone Location Data
  • Human Rights Watch Reverse-Engineers Mass Surveillance App Used by Police in Xinjiang
  • HTTPS Is Actually Everywhere
  • Victory for Users: WhatsApp Fixes Privacy Problem in Group Messaging
  • When Facial Recognition Is Used to Identify Defendants, They Have a Right to Obtain Information About the Algorithms Used on Them, EFF Tells Court

2022 Year in Review

Published: 2022-12-22 16:59:51

Popularity: 24

Author: Cindy Cohn

Keywords:

  • Creativity & Innovation
  • Free Speech
  • Security
EFF believes we can create a future where our rights not only follow us online, but are enhanced by new technology. The activists, lawyers, and technologists on EFF’s staff fight for that better future and against the kinds of dystopias best left to speculative fiction. In courts, in legislatures, and in company offices, we make sure that the needs of users are heard. Sometimes we send letters. Sometimes, we send planes. We’ve pushed hard this year and won many hard-fought battles. And in the battles we have not won, we continue on, because it’s important to stand up for what’s right, even if the road is long and rocky.

In 2022, we looked into the apps daycare centers use to collect and share information about the children in their care with parents. It turned out that not only are the apps dangerously insecure, but the companies that make them were uninterested in making them safer. We responded by giving parents information they can use to bring their own pressure, including basic recommendations for these applications, like implementing two-factor authentication, to ensure that this sensitive information about our kids stays in the right hands.

We won big in security this year. After years of pressure, Apple has finally implemented one of our longstanding demands: that cloud backups be encrypted. Apple also announced the final death of its dangerous plan to scan your phone.

We also continued our fight against police surveillance. Williams v. San Francisco, our lawsuit with the ACLU over the San Francisco Police Department’s illegal access to surveillance cameras during the Black Lives Matter protests, continues on appeal. Since the lawsuit was filed, the San Francisco Police Department has repeatedly tried to change the law to give police unwarranted access to third-party cameras. Mayor London Breed introduced and then withdrew a proposal to give the police even more power. The San Francisco Board of Supervisors eventually passed a similar change to the law, but we secured a 15-month sunset. Rest assured, we will be fighting this mass surveillance, which sweeps in protests and other First Amendment-protected activity, when that sunset date approaches.

The camera setback was followed by a dramatic turnaround win, again in San Francisco. In one week, the Board of Supervisors reversed its position on giving the SFPD the ability to deploy killer robots. (The SFPD would like you to know that they object to our “killer robots” framing, because the robots do not act on their own or have guns. Instead, they have bombs and explode. We stand by our framing.) Make no mistake: this historic reversal would not have happened without the pushback of activists. And of course our thanks go to the many regular residents of the Bay Area who showed up and made good trouble.

Through our representation of the Internet Archive, we also stood up against the four largest publishers, who are looking to control how libraries serve their patrons. These publishers want to lock libraries into expensive and restrictive ebook licenses, while claiming, without evidence, that the Internet Archive’s Controlled Digital Lending (CDL) program is a threat to their business. Libraries give us all knowledge, and EFF stands with them.

In the European Union, we lobbied hard for a Digital Markets Act that recognized the value of interoperability and meaningfully restrained the power of “gatekeeper” platforms. Finally, sustained pressure from EFF, its allies, and you kept Congress from mandating filters or link taxes, protecting free expression online. And Congress did some good this year, too, passing the Safe Connections Act, a bill EFF pushed that makes it easier for survivors of domestic violence to keep their phone numbers when leaving a family plan. This simple protection can be essential to stopping abusers from using access to their victims’ cellphone plans to track and harass them.

It’s impossible to cover everything we’ve done this year in a blog post that wouldn’t take the whole new year to read. But rest assured, we did a lot, and none of it would be possible without our members, supporters, and all of you who stood up and took action to build a better future.

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2022 every day between now and the new year:

  • A Roller Coaster for Decentralization
  • Daycare and Early Childhood Education Apps
  • Fighting Tech-Enabled Abuse
  • Lifting the Fog
  • Right to Repair Legislation and Advocacy
  • EFF’s Threat Lab Sharpens Its Knives
  • Pivotal Year for the Metaverse and Extended Reality
  • Raising A Glass with EFF Members
  • Hacking Governments and Government Hacking in Latin America
  • The Adoption of the EU’s Digital Services Act: A Landmark Year for Platform Regulation
  • Privacy Shouldn’t Clock Out When You Clock In
  • The Battle For Online Speech Moved To U.S. Courts
  • Police Drones and Robots
  • The State of Online Free Expression Worldwide
  • Users Worldwide Said “Stop Scanning Us”
  • An Urgent Year for Interoperability
  • Pushing for Strong Digital Rights in the States
  • Surveillance in San Francisco
  • The Year We Got Serious about Tech Monopolies
  • Ending the Scourge of Redlining in Broadband Access
  • Schools and EdTech Need to Study Up On Student Privacy
  • Reproductive Justice and Digital Rights
  • Seeing Patent Trolls Clearly
  • Fighting for the Digital Future of Books
  • Global Cybercrime and Government Access to User Data Across Borders
  • A Year in Internet Surveillance and Resilience
  • Data Sanctuary for Abortion and Trans Health Care


    HTTPS Is Actually Everywhere

    Published: 2021-09-21 18:37:03

    Popularity: 135

    Author: Alexis Hancock

    Keywords:

  • Announcement
  • Security Education
For more than 10 years, EFF’s HTTPS Everywhere browser extension has provided a much-needed service to users: encrypting their browser communications with websites and making sure they benefit from the protection of HTTPS wherever possible. Since we started offering HTTPS Everywhere, the battle to encrypt the web has made leaps and bounds: what was once a challenging technical argument is now a mainstream standard offered on most web pages. Now HTTPS is truly just about everywhere, thanks to the work of organizations like Let’s Encrypt. We’re proud of EFF’s own Certbot tool, Let’s Encrypt’s software complement that helps web administrators automate HTTPS for free.

The goal of HTTPS Everywhere was always to become redundant. That would mean we’d achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode.

With these simple settings available, EFF is preparing to deprecate the HTTPS Everywhere web extension as we look to new frontiers of secure protocols like SSL/TLS. After the end of this year, the extension will be in “maintenance mode” for 2022. We know many different kinds of users have this tool installed, and we want to give our partners and users the needed time to transition. We will continue to inform users that there are native HTTPS-only browser options before the extension is fully sunset.

Some browsers, like Brave, have for years used HTTPS redirects provided by HTTPS Everywhere’s ruleset list. But even with innovative browsers raising the bar for user privacy and security, other browsers, like Chrome, still hold a considerable share of the browser market. The addition of a native setting to turn on HTTPS in these browsers impacts millions of people.

Follow the steps below to turn on these native HTTPS-only features in Firefox, Chrome, Edge, and Safari, and celebrate with us that HTTPS is truly everywhere for users.

Firefox

The steps below apply to Firefox desktop. HTTPS-only for mobile is currently only available in Firefox Developer mode, which advanced users can enable in about:config.

Settings > Privacy & Security > Scroll to Bottom > Enable HTTPS-Only Mode

Chrome

HTTPS-only in Chrome is available for both desktop and mobile in Chrome 94 (released today!).

Settings > Privacy and security > Security > Scroll to bottom > Toggle “Always use secure connections”

This feature is also behind the flag chrome://flags/#https-only-mode-setting.

Edge

This is still considered an “experimental feature” in Edge, but it is available in Edge 92.

Visit edge://flags/#edge-automatic-https and enable Automatic HTTPS. Hit the “Restart” button that appears to restart Microsoft Edge. Then visit edge://settings/privacy, scroll down, and turn on “Automatically switch to more secure connections with Automatic HTTPS”.

Safari

HTTPS is upgraded by default when possible in Safari 15, released September 20th for macOS Big Sur and macOS Catalina devices. No setting changes are needed from the user.

This post was updated on 9/27/21 to correct the path for Firefox’s HTTPS-Only mode setting and provide Chrome’s HTTPS-only flag URL.
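The idea behind HTTPS Everywhere’s ruleset list, and behind the browsers’ native HTTPS-only modes, is simple in spirit: rewrite insecure http:// URLs to their https:// equivalents before the request goes out. The sketch below illustrates that idea in Python; the single rule and the example.org domain are illustrative placeholders, not entries from the extension’s real ruleset list.

```python
# Illustrative sketch of an HTTPS-Everywhere-style rewrite rule:
# a regex "from" pattern mapped to an HTTPS "to" replacement.
# The rule and domain below are made-up examples, not real entries.
import re

RULES = [
    # (from-pattern, to-replacement)
    (re.compile(r"^http://(www\.)?example\.org/"), "https://www.example.org/"),
]

def upgrade(url: str) -> str:
    """Rewrite a URL to HTTPS if a rule matches; otherwise return it unchanged."""
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url)
    return url

print(upgrade("http://example.org/page"))  # → https://www.example.org/page
print(upgrade("http://other.test/page"))   # no matching rule: returned unchanged
```

The real extension shipped thousands of such rules, per site, with exclusions for pages known to break over HTTPS; the browsers’ native modes instead attempt HTTPS for every site and fall back (or warn) when it fails, which is why a hand-maintained ruleset is no longer needed.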


    EU Court Again Rules That NSA Spying Makes U.S. Companies Inadequate for Privacy

    Published: 2020-07-16 22:37:44

    Popularity: 2396

    Author: Danny O'Brien

    Keywords:

  • Commentary
  • Surveillance and Human Rights
  • NSA Spying
  • International
  • EU Policy
The European Union’s highest court today made clear, once again, that the U.S. government’s mass surveillance programs are incompatible with the privacy rights of EU citizens. The judgment was made in the latest case involving Austrian privacy advocate and EFF Pioneer Award winner Max Schrems. It invalidated the “Privacy Shield,” the data protection deal that secured the transatlantic data flow, and narrowed the ability of companies to transfer data using individual agreements (Standard Contractual Clauses, or SCCs).

Despite the many “we are disappointed” statements by the EU Commission, U.S. government officials, and businesses, the ruling should come as no surprise, since it follows the reasoning the court laid out in Schrems’ previous case, in 2015. Back then, the EU Court of Justice (CJEU) noted that European citizens had no real recourse in U.S. law if their data was swept up in the U.S. government’s surveillance schemes. Such a violation of their basic privacy rights meant that U.S. companies could not provide an “adequate level of [data] protection,” as required by EU law and promised by the EU/U.S. “Safe Harbor” self-regulation regime. Accordingly, the Safe Harbor was deemed inadequate, and data transfers by companies between the EU and the U.S. were forbidden.

Since that original decision, multinational companies, the U.S. government, and the European Commission have sought to paper over the giant gaps between U.S. spying practices and the EU’s fundamental values. The U.S. government made clear that it did not intend to change its surveillance practices, nor push for legislative fixes in Congress. All parties instead agreed to merely fiddle around the edges of transatlantic data practices, reinventing the previous Safe Harbor agreement, which weakly governed corporate handling of EU citizens’ personal data, under a new name: the EU-U.S. Privacy Shield.

EFF, along with the rest of civil society on both sides of the Atlantic, pointed out that this was just shuffling chairs on the Titanic. The Court cited government programs like PRISM and Upstream as its primary reason for ending data flows between Europe and the United States, not the (admittedly woeful) privacy practices of the companies themselves. That meant it was entirely in the hands of the U.S. government and Congress to decide whether U.S. tech companies are allowed to handle European personal data. The message to the U.S. government is simple: fix U.S. mass surveillance, or undermine one of the United States’ major industries.

Five years after the original iceberg of Schrems 1, Schrems 2 has pushed the Titanic fully beneath the waves. The new judgment explicitly calls out the weaknesses of U.S. law in protecting non-U.S. persons from arbitrary surveillance, highlighting that:

    Section 702 of the FISA does not indicate any limitations on the power it confers to implement surveillance programmes for the purposes of foreign intelligence or the existence of guarantees for non-US persons potentially targeted by those programmes.

and:

    ... neither Section 702 of the FISA, nor E.O. 12333, read in conjunction with PPD‑28, correlates to the minimum safeguards resulting, under EU law, from the principle of proportionality, with the consequence that the surveillance programmes based on those provisions cannot be regarded as limited to what is strictly necessary.

The CJEU could not be more blunt in its pronouncements, but it remains unclear how the various actors that could fix this problem will react. Will EU data protection authorities step up their enforcement activities and invalidate SCCs that authorize data flows to the U.S. for failing to protect EU citizens from U.S. mass surveillance programs? If U.S. corporations cannot confidently rely on either SCCs or the defunct Privacy Shield, will they lobby harder for real U.S. legislative change to protect the privacy rights of Europeans in the U.S., or just find another temporary stopgap to force yet another CJEU decision? And will the European Commission move from defending the status quo and current corporate practices to truly acting on behalf of its citizens?

Whatever the initial reaction by EU regulators, companies, and the Commission, the real solution lies, as it always has, with the United States Congress. Today’s decision is yet another significant indicator that the U.S. government’s foreign intelligence surveillance practices need a massive overhaul. Congress half-heartedly began the process of improving some parts of FISA earlier this year, a process which now appears to have been abandoned. But this decision shows, yet again, that the U.S. needs much broader, privacy-protective reform, and that Congress’ inaction makes us all less safe, wherever we are.


    Amazon Ring Must End Its Dangerous Partnerships With Police

    Published: 2020-06-10 21:12:09

    Popularity: 2567

    Author: Jason Kelley

    Keywords:

  • Call To Action
  • Privacy
  • Digital Rights and the Black-led Movement Against Police Violence
  • Street-Level Surveillance
Across the United States, people are taking to the streets to protest racist police violence, including the tragic police killings of George Floyd and Breonna Taylor. This is a historic moment of reckoning for law enforcement. Technology companies, too, must rethink how the tools they design and sell to police departments minimize accountability and exacerbate injustice. Even worse, some companies profit directly from exploiting irrational fears of crime that all too often feed the flames of police brutality. So we’re calling on Amazon Ring, one of the worst offenders, to immediately end the partnerships it holds with over 1,300 law enforcement agencies.

SIGN PETITION: TELL AMAZON RING TO END POLICE PARTNERSHIPS

One by one, companies that profit off fears of crime have released statements voicing solidarity with the communities that are disproportionately impacted by police violence. Amazon, which owns Ring, announced that it “stand[s] in solidarity with the Black community—[its] employees, customers, and partners—in the fight against systemic racism and injustice.” And yet, Amazon and other companies offer a high-speed digital mechanism by which people can make snap judgments about who does, and who does not, belong in their neighborhood, and summon police to confront them. This mechanism also facilitates police access to video and audio footage from massive numbers of doorbell cameras aimed at the public way across the country, a feature that could conceivably be used to identify participants in a protest moving through a neighborhood. Amazon built this surveillance infrastructure through tight-knit partnerships with police departments, including officers hawking Ring’s cameras to residents, and Ring telling officers how to better pressure residents to share their videos.

Despite Amazon’s statement that “the inequitable and brutal treatment of Black people in our country must stop,” Ring plays an active role in enabling and perpetuating police harassment of Black Americans. Ring’s surveillance doorbells and its accompanying Neighbors app have inflamed many residents’ worst instincts and urged them to spy on pedestrians, neighbors, and workers. We must tell Amazon Ring to end its police partnerships today.

Ring Threatens Privacy and Communities

We’ve written extensively about why Ring is a “Perfect Storm of Privacy Threats,” and we’ve laid out five specific problems with Ring-police partnerships. We also revealed a number of previously undisclosed trackers sending information from the Ring app to third parties, and critiqued the lackluster changes made in response to security flaws.

To start, Ring sends notifications to a person’s phone every time the doorbell rings or motion near the door is detected. With every notification, Ring turns the pizza delivery person or census-taker innocently standing at the door into a potential criminal. And with the click of a button, Ring allows a user to post video taken from that camera directly to their community, facilitating the reporting of so-called “suspicious” behavior. This encourages racial profiling: take, for example, an African-American real estate agent who was stopped by police because neighbors thought it was “suspicious” for him to ring a doorbell.

Ring Could Be Used to Identify Protesters

To make matters worse, Ring’s continued growth of police partnerships during the current protests makes an arrangement already at risk of enabling racial profiling even more troubling and dangerous. Ring now has relationships with over 1,300 police departments around the United States. These partnerships allow police to have a general idea of the location of every Ring camera in town, and to make batch requests for footage via email to every resident with a camera within an area of interest to police, potentially giving police a one-step process for requesting footage of protests to identify protesters. In some towns, the local government has even offered tiered discount rates for the camera based on how much of the public area on a street the Ring will regularly capture. The more of the public space it captures, the larger the discount.

If a Ring camera captures demonstrations, the owner is at risk of making protesters identifiable to police and vulnerable to retribution. Even if the camera owner refuses to voluntarily share footage of a protest with police, law enforcement can go straight to Amazon with a warrant and thereby circumvent the camera’s owner.

Ring Undermines Public Trust in Police

The rapid proliferation of these partnerships between police departments and the Ring surveillance system, without oversight, transparency, or restrictions, poses a grave threat to the privacy and safety of all people in the community. “Fear sells,” Ring posted on its company blog in 2016. Fear also gets people hurt, by inflaming tensions and creating suspicion where none rationally exists.

Consider that Amazon also encourages police to tell residents to install the Ring app and purchase cameras for their homes, in an arrangement that makes salespeople out of what should be impartial and trusted protectors of our civic society. Per Motherboard, for every town resident that downloads Ring’s Neighbors app, the local police department gets credits toward buying cameras it can distribute to residents. This troubling relationship is worse than uncouth: it’s unsafe and diminishes public trust.

Some of the “features” Ring has been considering adding would considerably increase the danger it poses. Integrated face recognition software would enable the worst type of privacy invasion, potentially forcing every person approaching a Ring doorbell to have their face scanned and cross-checked against a database of other faces without their consent. License plate scanning could match people’s faces to their cars. Alerting users to local 911 calls as part of the “crime news” alerts on the Neighbors app would instill even more fear, and probably sell additional Ring services.

Just today, Amazon announced a one-year moratorium on police use of its dangerous “Rekognition” facial recognition tool. This follows an announcement from IBM that it will no longer develop or research face recognition technology, in part because of its use in mass surveillance, policing, and racial profiling. We’re glad Amazon has admitted that the unregulated use of face recognition can do harm to vulnerable communities. Now it’s time for it to admit the dangers of Ring-police partnerships, and stand behind its statement on police brutality.

SIGN PETITION: TELL AMAZON RING TO END POLICE PARTNERSHIPS


    The EARN IT Act Violates the Constitution

    Published: 2020-03-31 23:17:51

    Popularity: 16

    Author: Sophia Cope

    Keywords:

  • Legislative Analysis
  • Free Speech
  • Privacy
  • Encrypting the Web
  • Section 230 of the Communications Decency Act

Since senators introduced the EARN IT Act (S. 3398) in early March, EFF has called attention to the many ways in which the bill would be a disaster for Internet users’ free speech and security. We’ve explained how the EARN IT Act could be used to drastically undermine encryption. Although the bill doesn’t use the word “encryption” in its text, it gives government officials like Attorney General William Barr the power to compel online service providers to break encryption or be exposed to potentially crushing legal liability.

The bill also violates the Constitution’s protections for free speech and privacy. As Congress considers the EARN IT Act, which would require online platforms to comply with to-be-determined “best practices” in order to preserve certain protections from criminal and civil liability for user-generated content under Section 230 (47 U.S.C. § 230), it’s important to highlight the bill’s First and Fourth Amendment problems.

First Amendment

As we explained in a letter to Congress, the EARN IT Act violates the First Amendment in several ways.

1. The bill’s broad categories of “best practices” for online service providers amount to an impermissible regulation of editorial activity protected by the First Amendment.

The bill’s stated purpose is “to prevent, reduce, and respond to the online sexual exploitation of children.” However, it doesn’t directly target child sexual abuse material (CSAM, also referred to as child pornography) or child sex trafficking ads. (CSAM is universally condemned, and there is a broad framework of existing laws that seek to eradicate it, as we explain in the Fourth Amendment section below.) Instead, the bill would allow the government to go much further and regulate how online service providers operate their platforms and manage user-generated content: the very definition of editorial activity in the Internet age. Just as Congress cannot pass a law demanding news media cover specific stories or present the news a certain way, it similarly cannot direct how and whether online platforms host user-generated content.

2. The EARN IT Act’s selective removal of Section 230 immunity creates an unconstitutional condition.

Congress created Section 230 and, therefore, has wide authority to modify or repeal the law without violating the First Amendment (though as a policy matter, we don’t support that). However, the Supreme Court has said that the government may not condition the granting of a governmental privilege on individuals or entities doing things that amount to a violation of their First Amendment rights. Thus, Congress may not selectively grant Section 230 immunity only to online platforms that comply with “best practices” that interfere with their First Amendment right to make editorial choices regarding their hosting of user-generated content.

3. The EARN IT Act fails strict scrutiny.

The bill seeks to hold online service providers responsible for a particular type of content and the choices they make regarding user-generated content, and so it must satisfy the strictest form of judicial scrutiny. Although the content the EARN IT Act seeks to regulate is abhorrent and the government’s interest in stopping its creation and distribution is compelling, the First Amendment still requires that the law be narrowly tailored to address those weighty concerns. Yet, given the bill’s broad scope, it will inevitably force online platforms to censor the constitutionally protected speech of their users.

Fourth Amendment

The EARN IT Act violates the Fourth Amendment by turning online platforms into government actors that search users’ accounts without a warrant based on probable cause. The bill states, “Nothing in this Act or the amendments made by this Act shall be construed to require a provider of an interactive computer service to search, screen, or scan for instances of online child sexual exploitation.” Nevertheless, given the bill’s stated goal to, among other things, “prevent” online child sexual exploitation, it’s likely that the “best practices” will effectively coerce online platforms into proactively scanning users’ accounts for content such as CSAM or child sex trafficking ads.

Contrast this with what happens today: if an online service provider obtains actual knowledge of an apparent or imminent violation of anti-child-pornography laws, it’s required to make a report to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline. NCMEC then forwards actionable reports to the appropriate law enforcement agencies. Under this current statutory scheme, an influential decision by the U.S. Court of Appeals for the Tenth Circuit, written by then-Judge Neil Gorsuch, held that NCMEC is not simply an agent of the government; it is a government entity established by act of Congress with unique powers and duties that are granted only to the government.

On the other hand, courts have largely rejected arguments that online service providers are agents of the government in this context. That’s because the government argues that companies voluntarily scan their own networks for private purposes, namely to ensure that their services stay safe for all users. Thus, courts typically rule that these scans are “private searches” not subject to the Fourth Amendment’s warrant requirement. Under this doctrine, NCMEC and law enforcement agencies also do not need a warrant to view users’ account content already searched by the companies.

However, the EARN IT Act’s “best practices” may effectively coerce online platforms into proactively scanning users’ accounts in order to keep their legal immunity under Section 230. Not only would this result in invasive scans that risk violating all users’ privacy and security; companies would arguably become government agents subject to the Fourth Amendment. In analogous cases, courts have found private parties to be government agents when the “government knew of and acquiesced in the intrusive conduct” and “the party performing the search intended to assist law enforcement efforts or to further his own ends.” Thus, to the extent that online service providers scan users’ accounts to comply with the EARN IT Act, and do so without a probable cause warrant, defendants would have a much stronger argument that these scans violate the Fourth Amendment. Given Congress’ goal of protecting children from online sexual exploitation, it should not risk the suppression of evidence by effectively coercing companies to scan their networks.

Next Steps

Presently, the EARN IT Act has been introduced in the Senate and assigned to the Senate Judiciary Committee, which held a hearing on March 11. The next step is for the committee to consider amendments during a markup proceeding (though given the current state of affairs, it’s unclear when that will be). We urge you to contact your members of Congress and ask them to reject the bill.

Take Action: PROTECT OUR SPEECH AND SECURITY ONLINE


    New Bill Would Make Needed Steps Toward Curbing Mass Surveillance

    Published: 2020-01-30 01:22:25

    Popularity: 2323

    Author: India McKinney

    The Safeguarding Americans’ Private Records Act is a Strong Bill That Builds on Previous Surveillance Reforms Last week, Sens. Ron Wyden (D–Oregon) and Steve Daines (R–Montana) along with Reps. Zoe Lofgren (D–California), Warren Davidson (R–Ohio), and Pramila Jayapal (D–Washington) introduced the Safeguarding Americans’ Private Records Act (SAPRA), H.R 5675. This bipartisan legislation includes significant reforms to the government’s foreign intelligence surveillance authorities, including Section 215 of the Patriot Act. Section 215 of the PATRIOT Act allows the government to obtain a secret court order requiring third parties, such as telephone providers, Internet providers, and financial institutions, to hand over business records or any other “tangible thing” deemed “relevant” to an international terrorism, counterespionage, or foreign intelligence investigation. If Congress does not act, Section 215 is set to expire on March 15. The bill comes at a moment of renewed scrutiny of the government’s use of the Foreign Intelligence Surveillance Act (FISA). A report from the Department of Justice’s Office of the Inspector General released late last year found significant problems in the government’s handling of surveillance of Carter Page, one of President Trump’s former campaign advisors. This renewed bipartisan interest in FISA transparency and accountability—in combination with the March 15 sunset of Section 215—provides strong incentives for Congress to enact meaningful reform of an all-too secretive and invasive surveillance apparatus. Congress passed the 2015 USA FREEDOM Act in direct response to revelations that the National Security Agency (NSA) had abused Section 215 to conduct a dragnet surveillance program that siphoned up the records of millions of American’s telephone calls. USA FREEDOM was intended to end bulk and indiscriminate collection using Section 215. 
It also included important transparency provisions aimed at preventing future surveillance abuses, which are often premised on dubious and one-sided legal arguments made by the intelligence community and adopted by the Foreign Intelligence Surveillance Court (FISC)—the federal court charged with overseeing much of the government’s foreign intelligence surveillance. Unfortunately, government disclosures made since USA FREEDOM suggest that the law has not fully succeeded in limiting large-scale surveillance or achieved all of its transparency objectives. While SAPRA, the newest reform bill, does not include all of the improvements we’d like to see, it is a strong bill that would build on the progress made in USA FREEDOM. Here are some of the highlights: Ending the Call Detail Records Program After it was revealed that the NSA relied on Section 215 to collect information on the phone calls of millions of Americans, the USA Freedom Act limited the scope of the government’s authority to prospectively collect these records. But even the more limited Call Detail Records (CDR) program authorized in USA Freedom was later revealed to have collected records outside of its legislative authority. And last year, due to significant “technical irregularities” and other issues, the NSA announced it was shutting down the CDR program entirely. Nevertheless, the Trump administration asked Congress to renew the CDR authority indefinitely. SAPRA, however, would make the much-needed reform of entirely removing the CDR authority and clarifying that Section 215 cannot be used to collect any type of records on an ongoing basis. Ending the authority of the CDR program is a necessary conclusion to a program that could not stay within the law and has already reportedly been discontinued. The bill also includes several amendments intended to prevent the government from using Section 215 for indiscriminate collection of other records. 
More Transparency into Secret Court Opinions

USA FREEDOM included a landmark provision that required declassification of significant FISC opinions. The language of the law clearly required declassification of all significant opinions, including those issued before the passage of USA Freedom in 2015. However, the government read the law differently: it believed it was only required to declassify significant FISC opinions issued after USA Freedom was passed. This crabbed reading of USA Freedom left classified nearly forty years of significant decisions outlining the scope of the government’s authority under FISA—a result clearly at odds with USA Freedom’s purpose to end secret surveillance law. We are pleased to see that this bill clarifies that all significant FISC opinions, no matter when they were written, must be declassified and released. It also requires that future opinions be released within six months of the date of decision.

“Tangible Things” and the impact of Carpenter v. United States

As written, Section 215 allows the government to collect “any tangible thing” if it shows there are “reasonable grounds” to believe those tangible things are “relevant” to a foreign intelligence investigation. This is a much lower standard than a warrant, and we’ve long been concerned that an ambiguous term like “tangible things” could be secretly interpreted to obtain sensitive personal information. We know, for example, that previous requests under Section 215 included cell site location information, which can be used for invasive tracking of individuals’ movements. But the landmark 2018 Supreme Court decision in Carpenter v. United States clarified that individuals maintain a Fourth Amendment expectation of privacy in location data held by third parties, thus requiring a warrant for the government to collect it. 
Following questioning by Senator Wyden, the intelligence community stated it no longer used Section 215 to collect location data but admitted it hadn’t analyzed how Carpenter applied to Section 215. SAPRA addresses these developments by clarifying that the government cannot warrantlessly collect GPS or cell site location information. It also forbids the government from using Section 215 to collect web browsing or search history, and anything that would “otherwise require a warrant” in criminal investigations. These are important limitations, but more clarification is still needed. Decisions like Carpenter are relatively rare. Even if several lower courts held that collecting a specific category of information requires a warrant, we’re concerned that the government might argue that this provision isn’t triggered until the Supreme Court says so. That’s why we’d like to see the law be even clearer about the types of information that are outside of Section 215’s authority. We also want to extend some of USA Freedom’s limitations on the scope of collection. Specifically, we’d like to see tighter limits on the requirement that the government use a “specific selection term” for the collection of “tangible things.”

Expanding the Role of the FISC Amicus

One of the key improvements in USA Freedom was a requirement that the FISC appoint an amicus to provide the court with a perspective independent of the government’s in cases raising novel or significant legal issues. Over time, however, we’ve learned that the amici appointed by the court have faced various obstacles in their ability to make the strongest case, including lack of access to materials relied on by the government. SAPRA includes helpful reforms to grant amici access to the full range of these materials and to allow them to recommend appeal to the FISA Court of Review and the Supreme Court. 
Reporting

USA Freedom requires the intelligence community to publish annual transparency reports detailing the types of surveillance orders it seeks and the numbers of individuals and records affected by this surveillance, but there have been worrying gaps in these reports. A long-standing priority of the civil liberties community has been increased accounting of Americans whose records are collected and searched using warrantless forms of foreign intelligence surveillance, including Section 215 and Section 702. The FBI in particular has refused to count the number of searches of Section 702 databases it conducts using Americans’ personal information, leading to a recent excoriation by the FISC. SAPRA requires that the transparency reports include the number of Americans whose records are collected under Section 215, as well as the number of U.S. person searches the government does of data collected under Sections 215 and 702.

Notice and Disclosure of Surveillance to Criminal Defendants

Perhaps the most significant reform needed to the government’s foreign intelligence surveillance authority as a whole is the way in which it uses this surveillance to pursue criminal cases. There are two related issues: government notice to defendants that they were surveilled, and government disclosure to the defense of the surveillance applications. Under so-called “traditional” FISA—targeted surveillance conducted pursuant to a warrant-like process—defendants are supposed to be notified when the government intends to use evidence derived from the surveillance against them. The same is true of warrantless surveillance conducted under Section 702, but we’ve learned that for years the government did not notify defendants as required. This lack of transparency denied defendants basic due process. Meanwhile, the government currently has no obligation to notify defendants whose information was collected under Section 215. SAPRA partially addresses these problems. 
First, it requires notification to defendants in cases involving information obtained through Section 215. Second, and more generally, it clarifies that notice to defendants is required whenever the government uses evidence that it would not have otherwise learned had it not used FISA. But this only addresses half of the problem. Even if a criminal defendant receives notice that FISA surveillance was used, that notice is largely meaningless unless the defendant can see—and then directly challenge—the surveillance that led to the charges. This has been one of EFF’s major priorities when it comes to fighting for FISA reform, and we think any bill that tackles FISA reform in addition to addressing Section 215 should make these changes as well. FISA sets up a mechanism through which lawyers for defendants who are notified of surveillance can seek disclosure of the underlying surveillance materials relied on by the government. Disclosure of this sort is both required and routine in traditional criminal cases. It is crucial to test the strength of the government’s case and to effectively point out any violations of the Fourth Amendment or other constitutional rights. But in the FISA context, despite the existence of a disclosure mechanism, it has been completely toothless: in the history of the law, no defendant has ever successfully obtained disclosure of surveillance materials. The investigation into surveillance of Carter Page demonstrates why this is a fundamental problem. The Inspector General found numerous defects in the government’s surveillance applications—defects that, had Carter Page been prosecuted, might have led to the suppression of that information in a criminal case against him. But, under the current system, Page and his lawyers never would have seen the applications. And the government might have been able to obtain a conviction based on potentially illegal and unconstitutional surveillance. 
It’s important for Congress to take this opportunity to codify additional due process protections. It’s a miscarriage of justice if a person can be convicted on unlawfully acquired evidence, yet can’t challenge the legality of the surveillance in the first place. Attorneys for defendants in these cases need access to the surveillance materials—it’s a fundamental issue of due process. Unfortunately, SAPRA does not include any reforms to the disclosure provision of FISA. We look forward to working with Congress to ensure that the final FISA reform bill tackles this issue of disclosure. In 2015, USA FREEDOM was a good first step in restoring privacy protections and creating necessary oversight and transparency into secret government surveillance programs. But in light of subsequent evidence, it’s clear that much more needs to be done. Though we would like to see a few improvements, SAPRA is a strong bill that includes many necessary reforms. We look forward to working with lawmakers to ensure that these and other provisions are enacted into law before March 15.


    The Government’s Indictment of Julian Assange Poses a Clear and Present Danger to Journalism, the Freedom of the Press, and Freedom of Speech

    Published: 2019-05-24 18:33:12

    Popularity: 3428

    Author: David Greene

    Keywords:

  • Free Speech
  • Transparency
  • No Downtime for Free Speech
  • Bloggers' Rights
  • Wikileaks
  • Computer Fraud And Abuse Act Reform
  • The century-old tradition that the Espionage Act not be used against journalistic activities has now been broken. Seventeen new charges were filed yesterday against Wikileaks founder Julian Assange. These new charges make clear that he is being prosecuted for basic journalistic tasks, including being openly available to receive leaked information, expressing interest in publishing information regarding certain otherwise secret operations of government, and then disseminating newsworthy information to the public. The government has now dropped the charade that this prosecution is only about hacking or helping in hacking. Regardless of whether Assange himself is labeled a “journalist,” the indictment targets routine journalistic practices. But the indictment is also a challenge to fundamental principles of freedom of speech. As the Supreme Court has explained, every person has the right to disseminate truthful information pertaining to matters of public interest, even if that information was obtained by someone else illegally. The indictment purports to evade this protection by repeatedly alleging that Assange simply “encouraged” his sources to provide information to him. This places a fundamental free speech right on uncertain and ambiguous footing.

A Threat To The Free Press

Make no mistake, this is not just about Assange or Wikileaks—this is a threat to all journalism, and the public interest. The press stands in place of the public in holding the government accountable, and the Assange charges threaten that critical role. The charges threaten reporters who communicate with and knowingly obtain information of public interest from sources and whistleblowers, or publish that information, by sending a clear signal that they can be charged with spying simply for doing their jobs. 
And they threaten everyone seeking to educate the public about the operation of government and expose government wrongdoing, whether or not they are professional journalists. Assistant Attorney General John Demers, head of the Department of Justice’s National Security Division, told reporters after the indictment that the department “takes seriously the role of journalists in our democracy and we thank you for it,” and that it’s not the government’s policy to target them for reporting. But it’s difficult to separate the Assange indictment from President Trump’s repeated attacks on the press, including his declarations on Twitter, at White House briefings, and in interviews that the press is “the enemy of the people,” “dishonest,” “out of control,” and “fake news.” Demers’ statement was very narrow—disavowing the “targeting” of journalists, but not the prosecution of them as part of targeting their sources. And contrary to the DOJ’s public statements, the actual text of the Assange indictment sets a dangerous precedent; by the same reasoning it asserts here, the administration could turn its fervent anti-press sentiments into charges against any other media organization it disfavors for engaging in routine journalistic practices. Most dangerously, the indictment contends that anyone who “counsels, commands, induces” (under 18 USC § 2, for aiding and abetting) a source to obtain or attempt to obtain classified information violates the Espionage Act, 18 USC § 793(b). 
Under the language of the statute, this includes literally “anything connected with the national defense,” so long as there is an “intent or reason to believe that the information is to be used to the injury of the United States, or to the advantage of any foreign nation.” The indictment relies heavily and repeatedly on allegations that Assange “encouraged” his sources to leak documents to Wikileaks, even though he knew that the documents contained national security information. But encouraging sources and knowingly receiving documents containing classified information are standard journalistic practices, especially among national security reporters. Neither law nor custom has ever required a journalist to be a purely passive, unexpected, or unknowing recipient of a leaked document. And the U.S. government has regularly maintained, in EFF’s own cases and elsewhere, that virtually any release of classified information injures the United States and advantages foreign nations. The DOJ indictment thus raises questions about what specific acts of “encouragement” the department believes cross the bright line between First Amendment protected newsgathering and crime. If a journalist, like then-candidate Trump, had said: “Russia, if you’re listening, I hope you’re able to find the [classified] emails that are missing. I think you will probably be rewarded mightily by our press,” would that be a chargeable crime?

The DOJ Does Not Decide What Is And Isn’t Journalism

Demers said Assange was “no journalist,” perhaps to justify the DOJ’s decision to charge Assange and show that it is not targeting the press. But it is not the DOJ’s role to determine who is or is not a “journalist,” and courts have consistently found that what makes something journalism is the function of the work, not the character of the person. 
As the Second Circuit once wrote in a case about the reporters’ privilege, the question is whether they intended to “use material—sought, gathered, or received—to disseminate information to the public.” No government label or approval is necessary, nor is any job title or formal affiliation. Rather than justifying the indictment, Demers’ non-sequitur appears aimed at distracting from the reality of it. Moreover, Demers’ statement is as dangerous as it is irrelevant. None of the elements of the 18 statutory charges (Assange is also facing a charge under the Computer Fraud and Abuse Act) require a determination that Assange is not a journalist. Instead, the charges broadly describe journalism itself (seeking, gathering, and receiving information for dissemination to the public, and then publishing that information) as unlawful espionage when it involves classified information. Of course, news organizations routinely publish classified information. This is not considered unusual, nor (previously) illegal. When the government went to the Supreme Court to stop the publication of the classified Pentagon Papers, the Supreme Court refused (though it did not reach the question of whether the Espionage Act could constitutionally be charged against the publishers). Justice Hugo Black, concurring in the judgment, explained why: In the First Amendment, the Founding Fathers gave the free press the protection it must have to fulfill its essential role in our democracy. The press was to serve the governed, not the governors. The Government's power to censor the press was abolished so that the press would remain forever free to censure the Government. The press was protected so that it could bare the secrets of government and inform the people. Only a free and unrestrained press can effectively expose deception in government. 
And paramount among the responsibilities of a free press is the duty to prevent any part of the government from deceiving the people and sending them off to distant lands to die of foreign fevers and foreign shot and shell. Despite this precedent and American tradition, three of the DOJ charges against Assange focus solely on the purported crime of publication. These three charges are for Wikileaks’ publication of the State Department cables and the Significant Activity Reports (war logs) for Iraq and Afghanistan, documents which were also published in Der Spiegel, The Guardian, The New York Times, Al Jazeera, and Le Monde, and republished by many other news media. For these charges, the government included allegations that Assange failed to properly redact, and thereby endangered sources. This may be another attempt to make a distinction between Wikileaks and other publishers, and perhaps to tarnish Assange along the way. Yet this is not a distinction that makes a difference, as sometimes the media may need to provide unredacted data. For example, in 2017 the New York Times published the name of a CIA official who was behind the CIA program to use drones to kill high-ranking militants, explaining “that the American public has a right to know who is making life-or-death decisions in its name.” While one can certainly criticize the press’ publication of sensitive data, including identities of sources or covert officials, especially if that leads to harm, this does not mean the government must have the power to decide what can be published, or to criminalize publication that does not first get the approval of a government censor. The Supreme Court has justly held the government to a very high standard for abridging the ability of the press to publish, limited to exceptional circumstances like “publication of the sailing dates of transports or the number and location of troops” during wartime. 
A Threat to Free Speech

In a broader context, the indictment challenges a fundamental principle of free speech: that a person has a strong First Amendment right to disseminate truthful information pertaining to matters of public interest, including in situations in which the person’s source obtained the information illegally. In Bartnicki v. Vopper, the Supreme Court affirmed this, explaining: “it would be quite remarkable to hold that speech by a law-abiding possessor of information can be suppressed in order to deter conduct by a non-law-abiding third party. ... [A] stranger's illegal conduct does not suffice to remove the First Amendment shield from speech about a matter of public concern.” While Bartnicki involved an unknown source who anonymously left an illegal recording with Bartnicki, later courts have acknowledged that the rule applies, and perhaps even more strongly, to recipients who knowingly and willfully received material from sources, even when they know the source obtained it illegally. In one such case, the court rejected a claim that the willing acceptance of such material could sustain a charge of conspiracy between the publisher and her source. Regardless of what one thinks of Assange’s personal behavior, the indictment itself will inevitably have a chilling effect on critical national security journalism, and the dissemination in the public interest of available information that the government would prefer to hide. There can be no doubt now that the Assange indictment is an attack on the freedoms of speech and the press, and it must not stand. Related Cases: Bank Julius Baer & Co v. Wikileaks


    Human Rights Watch Reverse-Engineers Mass Surveillance App Used by Police in Xinjiang

    Published: 2019-05-08 00:52:16

    Popularity: 1251

    Author: Gennie Gebhart

    Keywords:

  • Technical Analysis
  • International
  • Surveillance and Human Rights
  • For years, Xinjiang has been a testbed for the Chinese government’s novel digital and physical surveillance tactics, as well as human rights abuses. But there is still a lot that the international human rights community doesn’t know, especially when it comes to post-2016 Xinjiang. Last Wednesday, Human Rights Watch released a report detailing the inner workings of a mass surveillance app used by police and other officials. The application is used by officials to communicate with the larger Integrated Joint Operations Platform (IJOP), the umbrella system for collecting mass surveillance data in Xinjiang. This report uncovers what a modern surveillance state looks like and can inform our work to end such systems. First, the report demonstrates that IJOP’s system of pervasive surveillance targets just about anyone who deviates from an algorithmically determined norm. Second, as a result, IJOP requires a massive amount of manual labor, all focused on data entry and translating the physical world into digital relationships. We stand by Human Rights Watch in calling for the end to violations of human rights within Xinjiang, and within China.

What’s going on in Xinjiang?

Xinjiang is the largest province in China, home to the Uighurs and other Turkic minority groups. Since 2016, the Chinese government has cracked down on the region as a part of the ongoing “Strike Hard” campaign. An estimated 1 million individuals have been detained in “political education centers,” and the IJOP’s surveillance system watches the daily lives of Xinjiang residents. While we fight the introduction and integration of facial recognition and street-level surveillance technologies in the U.S., existing research from Human Rights Watch gives us insight on how facial-recognition-enabled cameras already line the streets in front of schools, markets, and homes in Kashgar. 
WiFi sniffers log the unique addresses of connected devices, and police gather data from phone inspections, regular intrusive home visits, and mandatory security checkpoints. Human Rights Watch obtained a copy of a mobile app police officers and other officials use to log information about individuals, and released its source code. The primary purpose of the IJOP app is for police officers to record and complete “investigative missions,” which require officers to interrogate certain individuals or investigate vehicles and events, and log the interactions into the app. In addition, the application also contains functionality to search for information about an individual, perform facial recognition via Face++, and detect and log information about WiFi networks within range.

Who are they targeting?

Well, basically everyone. The application focuses on individuals who fit one of 36 suspicious “Person Types.” These categories, and the nature of these “investigative missions,” reveal a great deal about the types of people IJOP is targeting. When conducting an “investigation,” officers are prompted to create an extensive profile of the individual(s) being investigated. Despite the Chinese government’s claim that their surveillance state is necessary for countering “separatism, terrorism, and extremism,” most of these behavioral personas have nothing to do with any of the above:

  • People who travel. This includes individuals who move in or out of their area of residence often, people who have been abroad, or who have simply left Xinjiang province—even if they do it legally. If an individual has been abroad “for too long,” officials are also prompted to physically check the target’s phone. They’re prompted by the app to search for specific foreign messaging apps (including WhatsApp, Viber, and Telegram), “unusual” software that few people use, VPNs, and whether their browser history contains “harmful URLs.”
  • People with “problematic” content and software on their phones. 
When “suspicious” software (again, including VPNs or foreign messaging apps like WhatsApp or Telegram) is detected, the IJOP system will send a detailed alert to officials about the target and identifying information about the phone, including a unique device identifier and metadata that can be used to track the phone’s general location. This could be tied to the JingWang spyware app many residents are forced to install. Reverse engineering work from Red Team Lab found that JingWang focuses on inspecting the files stored on the device, and transmits a list of filenames and hashes to a server over an insecure HTTP connection.

  • People, phones, or cars that go “off-the-grid.” This could mean an individual has stopped using a smartphone, or lent a car to a friend. An individual’s ID going “off-grid” typically means they have left Xinjiang and are no longer in IJOP’s jurisdiction of dragnet surveillance, generally due to school, moving (legally), or tourism.
  • People who are related to any of the above. Following the disappearance and subsequent reappearance of poet and musician Abdurehim Heyit, the international Uyghur diaspora started an online activism campaign and reported thousands of missing relatives. The strong focus on relatives and familial ties in the IJOP data profiles confirms Chinese surveillance’s focus on suspecting, interrogating, and even detaining individuals just because they are related to someone who has been deemed “suspicious.”
  • ...And people who are not. The application flags all sorts of people. People who consume too much electricity, people subject to a data entry mishap, people who do not socialize with neighbors, people who have too many children...the list goes on and on.

Despite grandiose claims, the process is still manual and labor-intensive

Any small deviation from what the IJOP system deems “normal behavior” could be enough to trigger an investigation and prompt a series of intrusive visits from a police officer. 
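Red Team Lab's finding about JingWang, mentioned above, is that the app builds an inventory of files on the device and transmits their names and hashes over plain HTTP. The sketch below illustrates what that kind of file manifest looks like; it is not the app's actual code, and the function name and the choice of MD5 are assumptions for illustration only.

```python
import hashlib
import os

def build_file_manifest(root):
    """Walk a directory tree and record (path, MD5 hash) pairs --
    the kind of file inventory Red Team Lab describes JingWang building.

    Because the real app sent this over unencrypted HTTP, anyone on the
    network path could read the full list of a user's files in transit.
    """
    manifest = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                # Hash in chunks so large files don't exhaust memory.
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            manifest.append((path, h.hexdigest()))
    return manifest
```

Matching hashes against a server-side blocklist is what lets the system flag "harmful" files without uploading their contents.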
As a result, the current surveillance system is extremely labor-intensive due to the broad categorizations of “suspicious persons,” and the raw number of officials needed to keep tabs on all of them. Officers, under severe pressure to perform, overwork themselves feeding data into IJOP. According to Human Rights Watch: These officials are under tremendous pressure to carry out the Strike Hard Campaign. Failure to fulfill its requirements can be dangerous, especially for cadres from ethnic minorities, because the Strike Hard Campaign also targets and detains officials thought to be disloyal. The process of logging all this data is manual; the app itself uses a simple decision tree to decide what bits of information an official should log. According to Human Rights Watch, although the application itself isn’t as sophisticated as the Chinese government has previously touted, it’s still not exactly clear what sort of analyses IJOP may be doing with this massive trove of personal data and behavior. IJOP’s focus on data entry and translating physical relationships into discrete data points reminds us that digitizing our lives is the first step towards creating a surveillance state. Some parts of the application depend on already-catalogued information: the centralized collection of electricity usage, for instance. Others are intended to collect as much as possible to be used elsewhere. In Xinjiang, the police know a huge array of invasive information about you, and it is their job to collect more. And as all behavior is pulled into the state’s orbit, ordinary people can become instant suspects, and innocent actions have to be rigorously monitored. Using certain software becomes, if not a crime, then a reason for suspicion. Wandering from algorithmic expectations targets you for further investigation. Invoking the “slippery slope” is a misnomer, because the privacy violations we predict and fear are already here. 
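Human Rights Watch's description of the app as a "simple decision tree" means each answer an officer enters determines which further fields must be filled in. A hypothetical sketch of that kind of logic follows; every field name here is invented for illustration and does not come from the decompiled app.

```python
def fields_to_log(subject):
    """Hypothetical decision-tree sketch: given what is known about a
    subject, return which data-entry fields the officer must complete.
    All field names are illustrative, not taken from the real IJOP app."""
    fields = ["name", "id_number", "phone_imei"]  # always collected
    if subject.get("has_been_abroad"):
        fields += ["countries_visited", "duration_abroad"]
        # Being abroad "for too long" triggers a physical phone check.
        if subject.get("duration_abroad_days", 0) > 30:
            fields += ["foreign_apps_on_phone", "vpn_installed"]
    if subject.get("id_offline"):
        fields += ["reason_offline", "current_location"]
    if subject.get("relatives_flagged"):
        fields += ["relative_names", "relationship"]
    return fields
```

The point of the sketch is that no analysis happens on the device: the tree only routes the officer's manual data entry, which is consistent with the report's observation that the labor is human, not algorithmic.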
Groups like Human Rights Watch, including their brave colleagues within Xinjiang, are doing everyone a service by exposing what a modern surveillance state looks like.


    Skip the Surveillance By Opting Out of Face Recognition At Airports

    Published: 2019-04-25 04:38:56

    Popularity: 6724

    Author: Jason Kelley

    Keywords:

  • Commentary
  • Biometrics
  • Government agencies and airlines have ignored years of warnings from privacy groups and Senators that using face recognition technology on travelers would massively violate their privacy. Now, the passengers are in revolt as well, and they’re demanding answers. Last week, a lengthy exchange on Twitter between a traveler who was concerned about her privacy and a spokesperson for the airline JetBlue went viral, and many of the questions asked by the traveler and others were the same ones that we’ve posed to Customs and Border Protection (CBP) officials: Where did you get my data? How is it protected? Which airports will use this? Where in the airports will it be used? Most importantly, how do I opt out?

How to Opt Out

These questions should be simple to answer, but we haven’t gotten simple answers. When we asked CBP for more information, they told us: “visit our website.” We did, and we still have many of the same questions. Representatives for airlines, which partner directly with the government agencies, also seem unable to answer the concerns, as the JetBlue spokesperson made evident. Both agencies and airlines seemed to expect no pushback from passengers when they implemented this boarding-pass-replacing panopticon. The convenience would win out, they seemed to assume, not expecting people to mind having their face scanned “the same way you unlock your phone.” But now that “your face is your boarding pass” (as JetBlue awkwardly puts it), at least in some airports, the invasive nature of the system is much more clear, and travelers are understandably upset. It might sound trite, but right now, the key to opting out of face recognition is to be vigilant. There’s no single box you can check, and importantly, it may not be possible for non-U.S. persons to opt out of face recognition entirely. For those who can opt out, you’ll need to spot the surveillance when it’s happening. 
To start, TSA PreCheck, Clear, and other ways of "skipping the line" often require biometric identification, and are often being used as test cases for these sorts of programs. Once you’re at the airport, be on the lookout for any time a TSA, CBP, or airline employee asks you to look into a device, or when there’s a kiosk or signage like those below. That means your biometric data is probably about to be scanned. At the moment, face recognition is most likely to happen at specific airports, including Atlanta, Chicago, Seattle, San Francisco, Las Vegas, Los Angeles, Washington (Dulles and Reagan), Boston, Fort Lauderdale, Houston Hobby, Dallas/Fort Worth, JFK, Miami, San Jose, Orlando, and Detroit; while flying on Delta, JetBlue, Lufthansa, British Airways and American Airlines; and in particular, on international flights. But, that doesn’t mean that other airlines and airports won’t implement it sooner rather than later. To skip the surveillance, CBP says you “should notify a CBP Officer or an airline or airport representative in order to seek an alternative means of verifying [your] identity and documents.” Do the same when you encounter this with an airline. While there should be signage near the face recognition area, it may not be clear. If you’re concerned about creating a slight delay for yourself or other passengers, take note: though CBP has claimed to have a 98% accuracy rating in their pilot programs, the Office of the Inspector General could not verify those numbers, and even a 2% error rate would cause thousands of people to be misidentified every day. Most face recognition technology has significantly lower accuracy ratings than that, so you might actually be speeding things up by skipping the surveillance. 
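The scale problem behind the accuracy figures above is simple arithmetic. The sketch below makes it explicit; the daily passenger count is an assumption (roughly in line with TSA's published national screening volume), not a figure from the original post.

```python
def expected_misidentifications(daily_passengers, error_rate):
    """Expected number of travelers misidentified per day at a given error rate."""
    return daily_passengers * error_rate

# Assumed scale: ~2 million air passengers screened per day in the U.S.
# (approximate TSA screening volume, used here only for illustration).
daily = 2_000_000

# Even at CBP's claimed 98% accuracy (a 2% error rate), tens of
# thousands of travelers would be misidentified every day.
print(expected_misidentifications(daily, 0.02))  # 40000.0
```

This is why a "98% accurate" claim is not reassuring at airport scale: small per-passenger error rates multiply into enormous daily totals.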
The Long And Winding Biometric Pathway

Part of the reason for the confusion about how to opt out is that there are actually (at least) three different face recognition checkpoints looming:

  • Airlines want to use your face as your boarding pass, saying “it’s about convenience.”
  • CBP, which is part of the Department of Homeland Security (DHS), wants to use your face to check against DHS and State Department databases when you’re entering or exiting the country.
  • The TSA wants to compare your face against your photo identification throughout the airport.

And if people are upset now, they will be furious to know this is just the beginning of the “biometric pathway” program: CBP and TSA want to use face recognition and other biometric data to track everyone from check-in, through security, into airport lounges, and onto flights (PDF). They’re moving fast, too, despite (or perhaps because of) the fact that there are no regulations on this sort of technology: DHS is hoping to use facial recognition on 97 percent of departing air passengers within the next four years and 100 percent of all international passengers in the top 20 U.S. airports by 2021. If the government agencies get their way, new biometric data could be taken from/used against travelers wherever they are in the airport—and much of that collection will be implemented by private companies (even rental car companies are getting in on the action). CBP will store that facial recognition data for two weeks for U.S. citizens and lawful permanent residents, and for 75+ years for non-U.S. persons. 
In addition, the biometric data collected by at least some of these systems in the future—which can include your fingerprints, the image of your face, and the scan of your iris—will be stored in FBI and DHS databases and will be searched again and again for immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

Passengers Will Bear the Burden of Privacy Invasion, Not Airlines or Government Agencies

It’s easy for companies and agencies to tout the convenience of this sort of massive data collection and sharing scheme. But as we’ve seen in notable privacy fiascos over the last few years—from Facebook’s Cambridge Analytica scandal, to the breaches of the Office of Personnel Management and Equifax in the U.S., to the constant hacking of India’s national biometric database, Aadhaar—it’s the customers and passengers who will bear the burden when things go wrong. And they will go wrong.

These vast biometric databases will create huge security and privacy risks. A company leaking your passwords or credit card numbers is bad enough; leaking your biometric data is far worse, because while you can change a password, you can’t easily change your face.

These systems are also notoriously inaccurate and contain out-of-date information. Because immigrants and people of color are disproportionately represented in criminal and immigration databases, and because face recognition systems are less capable of identifying people of color, women, and young people, the weight of these inaccuracies will fall disproportionately on them. It will be the passengers who bear the burden when they are stuck watching the flights they paid for take off without them because there was an error with a database or an algorithm, or because they preferred non-biometric options that weren’t in place. 
It’s time for the government agencies and the airlines to pause these programs until they can clearly and adequately provide:

  • Photographs of the signage in-situ in the airports in question, as well as any additional information about the opt-out process.
  • An explanation of the locations where CBP will provide meaningful and clear opt-out notice to travelers (for example, at entry points, point-of-sale, ticket counters, security checkpoints, and boarding gates), as well as the specific language travelers can use to opt out of the biometric data collection program.
  • An up-to-date list of all the airports and airlines that currently participate in the biometric exit program.
  • Information about the algorithm CBP is using to compare photos (provided by NEC), as well as the accuracy information associated with that algorithm.
  • Technological specifications for transferring data from the point of collection to DHS, and for sharing with vendors and airlines.

Additional questions—like how data is safeguarded—are laid out in our letter to CBP. Congress must also demand answers to these questions, and lawmakers must require agencies and airlines to pause this program until they can not only ensure that the biometric privacy of travelers is protected but, more importantly, justify this huge invasion of privacy. Just last month, three Senators released a joint statement calling on DHS to pause the program until there can be “a rulemaking to establish privacy and security rules of the road,” but so far, they’ve been ignored.

Trading privacy for convenience is a bad bargain, and it can feel like the deal isn’t one we have a choice in. DHS has said that the only way we can ensure that our biometric data isn’t collected when we travel is to “refrain from traveling.” That’s ridiculous. The time to regulate and restrict the use of facial recognition technology is now, before it becomes embedded in our everyday lives. 
We must keep fighting to make sure that in the future, it gets easier, and not harder, to defend our privacy—biometric or otherwise.


    Massachusetts Court Blocks Warrantless Access to Real-Time Cell Phone Location Data

    Published: 2019-04-24 20:20:01

    Popularity: 1456

    Author: Jennifer Lynch

    Keywords:

  • Legal Analysis
  • Privacy
  • Locational Privacy
There's heartening news for our location privacy out of Massachusetts this week. The Supreme Judicial Court, the state's highest court, ruled that police access to real-time cell phone location data—whether it comes from a phone company or from technology like a cell site simulator—intrudes on a person’s reasonable expectation of privacy. Absent exigent circumstances, the court held, the police must get a warrant.

In Commonwealth of Massachusetts v. Almonor, police had a phone carrier “ping” the cell phone of a suspect in a murder case—surreptitiously accessing GPS functions and causing the phone to send its coordinates back to the phone carrier and the police. This real-time location data pinpointed Mr. Almonor’s phone to a location inside a private home. The state argued it could warrantlessly get cell phone location data to find anyone, anytime, at any place as long as it was less than six hours old. A trial court disagreed and the state appealed.

EFF filed an amicus brief in this case in partnership with the ACLU and the Massachusetts Association of Criminal Defense Lawyers. We asked the court to recognize, as the Supreme Court did in U.S. v. Carpenter, that people have a constitutional right to privacy in their physical movements. We argued that, because people have their phones with them all the time, and because the location information produced by the phone can reveal our every move—where and with whom we live, socialize, visit, vacation, worship, and much more—the police must get a warrant to access this sensitive information. The Massachusetts court held that “[m]anipulating our phones for the purpose of identifying and tracking our personal location presents an even greater intrusion” than accessing the historical location data at issue in Carpenter. 
It concluded that “by causing the defendant's cell phone to reveal its real-time location, the Commonwealth intruded on the defendant's reasonable expectation of privacy in the real-time location of his cell phone.” The court recognized both that cell phone use is ubiquitous in our society, and that a phone’s location is a “proxy” for its owner’s location. The court noted that “society's expectation has been that law enforcement could not secretly and instantly identify a person's real-time physical location at will,” and “[a]llowing law enforcement to immediately locate an individual whose whereabouts were previously unknown by compelling that individual's cell phone to reveal its location contravenes that expectation.”

Much of the majority’s opinion focuses on the fact that, in this case, law enforcement directed the phone company to “manipulate” the defendant’s phone, causing it to send its location to the phone company. In other words, the phone company wouldn’t have collected the data on its own as part of its normal business practices.

But two judges, in a concurring opinion, expressed concern that this focus on law enforcement action—rather than on the collection of location data alone—would result in an exception for searches of real-time location data that providers collect automatically. The concurring justices would hold that the Massachusetts constitution “protects us from pings not because of the right to keep the government from interfering with our cellular telephones, but because of the right to keep the government from finding us.” This is very concerning because, as the concurring justices note, the majority’s focus on government action here could allow the police to “side-step the constitutional protection” by just asking for the data the cell service provider collects on its own. 
Although the majority denied that would happen, it remains to be seen both how officers will implement searches after this opinion and how lower courts will apply constitutional law to those searches. We’ve seen the Commonwealth interpret this court’s prior decisions on location tracking very narrowly in the past.

Although the defendant raised both federal and state constitutional claims in Almonor, the court based its decision solely on Article 14 of the Massachusetts Declaration of Rights, which was drafted before—and served as one of the models for—our federal Bill of Rights. Article 14, one of the cornerstones of the Massachusetts Constitution, is the state’s equivalent to the Fourth Amendment. As the court notes, it “does, or may, afford more substantive protection to individuals than that which prevails under the Constitution of the United States.”

Courts around the country are now being asked to address the scope of the Carpenter ruling. Almonor and a Maine case, State of Maine v. O’Donnell, are among the first to deal directly with how Carpenter should be applied when police track and locate people in real time. We’re heartened that the Massachusetts court took these issues seriously and made clear that the police must get a warrant, whether they access historical cell phone location data or whether they cause a phone to send its real-time location. We’re still waiting for the Maine court’s opinion in O’Donnell, and we’re actively tracking other cases addressing these issues across the country.

Related Cases: Carpenter v. United States


    Victory for Users: WhatsApp Fixes Privacy Problem in Group Messaging

    Published: 2019-04-03 20:23:27

    Popularity: 94

    Author: Rebecca Jeschke

Issue Was Targeted in EFF’s ‘Fix It Already!’ Campaign

San Francisco - In a victory for users, WhatsApp has fixed a long-standing privacy problem in group messaging, where users could be added to a group without their permission. The issue was one of the targets of “Fix It Already!,” a campaign from the Electronic Frontier Foundation (EFF) demanding repair of privacy and security holes that disrespect user control and put us all at risk.

“Without this kind of control, an unwanted group invite would expose your phone number to all the members of a group and even have the potential to make you part of someone else’s disinformation campaign,” said EFF Associate Director of Research Gennie Gebhart.

Users of WhatsApp could always leave a messaging group or block a messaging group after being added to it. But there was no way to control being added to the group in the first place. In changes announced in a blog post today, WhatsApp said that users can now go to their account settings and choose among three options for group messaging: “Nobody,” where no one can add you to a group automatically without your express consent; “My Contacts,” where only your contacts can add you without express consent; or “Everyone,” where no one needs your consent. These changes will be available to some users as soon as today, and will roll out to everyone using the latest version of WhatsApp over the next several weeks.

EFF launched “Fix It Already!” on February 28, targeting nine big privacy and security issues with major consumer technology products. The list takes Facebook to task for reusing customers’ phone numbers for advertising—even if the user provided the number only for security purposes. Google was called out for not letting Android phone users deny and revoke network permissions for apps. Apple, Twitter, Verizon, Microsoft, Slack, and Venmo are also on EFF’s list. 
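The consent logic behind the three settings can be sketched as follows. This is a minimal illustration of the behavior the post describes, not WhatsApp's actual code; the type and function names are assumptions:

```python
# Minimal sketch of WhatsApp's new group-add consent settings.
# Names (GroupAddSetting, can_add_directly) are illustrative, not real API.
from enum import Enum

class GroupAddSetting(Enum):
    NOBODY = "nobody"          # every add requires an invite you expressly accept
    MY_CONTACTS = "contacts"   # your contacts may add you directly; others must invite
    EVERYONE = "everyone"      # anyone may add you without prior consent

def can_add_directly(setting: GroupAddSetting, adder_is_contact: bool) -> bool:
    """Return True if the adder may place the user in a group without an invite."""
    if setting is GroupAddSetting.EVERYONE:
        return True
    if setting is GroupAddSetting.MY_CONTACTS:
        return adder_is_contact
    return False  # NOBODY: express consent is always required
```

Under “Nobody,” even a contact's attempt to add you falls through to an invite you must accept, which is the control the campaign asked for.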
“We’re happy to see WhatsApp addressing this problem, and would like to see other messaging apps follow suit,” said Gebhart. “Now it’s time for the eight other products and platforms we called out in Fix It Already! to catch up.”

For more on Fix It Already!: https://fixitalready.eff.org

Contact:
Gennie Gebhart, Associate Director of Research, gennie@eff.org
Eva Galperin, Director of Cybersecurity, eva@eff.org


    Media Alert: EFF Argues Against Forced Unlocking of Phone in Indiana Supreme Court

    Published: 2019-04-16 18:17:15

    Popularity: 32

    Author: Karen Gullo


Justices to Consider Fifth Amendment Right Against Self-Incrimination

Wabash, IN—At 10 a.m. on Thursday, April 18, the Electronic Frontier Foundation (EFF) will argue to the Indiana Supreme Court that police cannot force a criminal suspect to turn over a passcode or otherwise decrypt her cell phone. The case is Katelin Seo v. State of Indiana.

The Fifth Amendment of the Constitution states that people cannot be forced to incriminate themselves, and it’s well settled that this privilege against self-incrimination covers compelled “testimonial” communications, including physical acts. However, courts have split over how to apply the Fifth Amendment to compelled decryption of encrypted devices.

Along with the ACLU, EFF responded to an open invitation from the Indiana Supreme Court to file an amicus brief in this important case. In Thursday’s hearing, EFF Senior Staff Attorney Andrew Crocker will explain that the forced unlocking of a device requires someone to disclose “the contents of his own mind.” That is analogous to written or oral testimony, and is therefore protected under the U.S. Constitution.

Thursday’s hearing is in Indiana’s Wabash County to give the public an opportunity to observe the work of the court. Over 750 students are scheduled to attend the argument. It will also be live-streamed.

WHAT: Hearing in Katelin Seo v. State of Indiana
WHO: EFF Senior Staff Attorney Andrew Crocker
WHEN: April 18, 10 a.m.
WHERE: Ford Theater, Honeywell Center, 275 W. Market Street, Wabash, Indiana 46992

For more information on attending the argument in Wabash: https://www.in.gov/judiciary/supreme/2572.htm
For more on this case: https://www.eff.org/deeplinks/2019/02/highest-court-indiana-set-decide-if-you-can-be-forced-unlock-your-phone

Contact:
Andrew Crocker, Senior Staff Attorney, andrew@eff.org


    When Facial Recognition Is Used to Identify Defendants, They Have a Right to Obtain Information About the Algorithms Used on Them, EFF Tells Court

    Published: 2019-03-12 16:22:40

    Popularity: 37

    Author: Karen Gullo

    Keywords:

  • Privacy
  • Biometrics
We urged the Florida Supreme Court yesterday to review a closely-watched lawsuit to clarify the due process rights of defendants identified by facial recognition algorithms used by law enforcement. Specifically, we told the court that when facial recognition is secretly used on people later charged with a crime, those people have a right to obtain information about how the error-prone technology functions and whether it produced other matches.

EFF, ACLU, Georgetown Law’s Center on Privacy & Technology, and the Innocence Project filed an amicus brief in support of the defendant’s petition for review in Willie Allen Lynch v. State of Florida. Prosecutors in the case didn’t disclose information about how the algorithm worked, that it produced other matches that were never considered, or why Lynch’s photo was targeted as the best match. This information qualifies as “Brady” material—evidence that might exonerate the defendant—and should have been turned over to Lynch.

We have written extensively about how facial recognition systems are prone to error and produce false positives, especially when the algorithms are used on African Americans, like the defendant in this case. Researchers at the FBI, MIT, and ProPublica have reported that facial recognition algorithms misidentify black people, young people, and women at higher rates than white people, the elderly, and men.

Facial recognition is increasingly being used by law enforcement agencies around the country to identify suspects. It’s unfathomable that technology that could help to put someone in prison is used mostly without question or oversight. In Lynch’s case, facial recognition could help to send him to prison for eight years.

Undercover police photographed Lynch using an older-model cell phone at an oblique angle while he was in motion. The photo, which is blurred in places, was run through a facial recognition algorithm to see whether it matched any images in a database of county booking photos. 
The program returned a list of four possible matches, the first of which was Lynch’s from a previous arrest. His photo was the only one sent on to prosecutors, along with his criminal records.

The algorithm used on Lynch is part of the Face Analysis Comparison Examination Systems (FACES), a program operated by the Pinellas County Sheriff’s Office and made available to law enforcement agencies throughout the state. The system can search over 33 million faces from drivers’ licenses and police photos. It doesn’t produce “yes” or “no” responses; it rates candidates as more or less likely matches. Error rates in systems like this can be significant, and the condition of Lynch’s photo only exacerbates the possibility of errors.

FACES is poorly regulated and shrouded in secrecy. The sheriff said that his office doesn’t audit the system, and there’s no written policy governing its use. The sheriff’s office said it hadn’t been able to validate the system, and “cannot speak to the algorithms and the process by which a match is made.”

Lynch didn’t learn that he had been identified by a facial recognition algorithm until just days before his final pretrial hearing, although prosecutors had known for months. Prior to that, prosecutors had never disclosed information about the algorithm to Lynch, including that it produced other possible matches. Neither the crime analyst who operated the system nor the detective who accepted the analyst’s conclusion that Lynch’s face was a match knew how the algorithm functioned. The analyst said the first-listed photo in the search results is not necessarily the best match—it could be one further down the list. An Assistant State Attorney doubted the system was reliable enough to meet the standards courts use to assess the credibility of scientific testimony and decide whether it should be used at trial. 
Lynch asked for the other matches produced by FACES; the court refused.

If a human witness who identified Lynch in a line-up said others in the line-up also looked like the criminal, the state would have had to disclose that information, and Lynch could have investigated those alternate leads. The same principle should have required the state to disclose the other people the algorithm produced as matches, along with information about how the algorithm functions, EFF and ACLU told the Florida Supreme Court.

When defendants are facing lengthy prison sentences or even the death penalty, tight controls on the use of facial recognition are crucial. Defendants have a due process right to information about the algorithms used and the search results. The Florida Supreme Court should accept this case for review and provide guidance to law enforcement who use facial recognition to arrest, charge, and deprive people of their liberty.

Related Cases: FBI Facial Recognition Documents
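The ranked-list behavior described above, where every candidate gets a similarity score and the top hit is not necessarily the true match, can be sketched like this. The embeddings, names, and functions are toy assumptions for illustration, not FACES data or code:

```python
# Hypothetical sketch of a ranked face search: the system scores every
# gallery entry and returns a similarity-ordered list rather than a
# yes/no answer. All names and vectors here are illustrative toy values.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery):
    """Return (name, score) pairs sorted from most to least similar."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Every gallery face gets a score; nothing in the output says "match" or
# "no match" -- a human must decide where to cut the list off.
gallery = {"A": [0.9, 0.1, 0.2], "B": [0.2, 0.9, 0.1], "C": [0.5, 0.5, 0.5]}
ranked = rank_candidates([0.6, 0.4, 0.4], gallery)
print([name for name, _ in ranked])  # prints ['C', 'A', 'B']
```

A blurry probe photo simply lowers every score; the system still returns a ranked list, which is why disclosing the runners-up matters to a defendant.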

