Electronic Frontier Foundation

EFF Files Amicus Brief In Google v. Oracle, Arguing APIs Are Not Copyrightable (eff.org) 147

Areyoukiddingme writes: EFF has filed an amicus brief with the U.S. Supreme Court in Google v. Oracle, arguing that APIs are not copyrightable. From the press release: "The Electronic Frontier Foundation (EFF) today asked the U.S. Supreme Court to rule that functional aspects of Oracle's Java programming language are not copyrightable, and even if they were, employing them to create new computer code falls under fair use protections. The court is reviewing a long-running lawsuit Oracle filed against Google, which claimed that Google's use of certain Java application programming interfaces (APIs) in its Android operating system violated Oracle's copyrights. The case has far-reaching implications for innovation in software development, competition, and interoperability.

In a brief filed today, EFF argues that the Federal Circuit, in ruling APIs were copyrightable, ignored clear and specific language in the copyright statute that excludes copyright protection for procedures, processes, and methods of operation. 'Instead of following the law, the Federal Circuit decided to rewrite it to eliminate almost all the exclusions from copyright protection that Congress put in the statute,' said EFF Legal Director Corynne McSherry. 'APIs are not copyrightable. The Federal Circuit's ruling has created a dangerous precedent that will encourage more lawsuits and make innovative software development prohibitively expensive. Fortunately, the Supreme Court can and should fix this mess.'" Oral arguments before the U.S. Supreme Court are scheduled for March 2020, with a decision expected by June.

The Internet

Internet Pioneers Fight For Control of .Org Registry By Forming a Nonprofit Alternative (nytimes.com) 17

Reuters reports that a group of "prominent internet pioneers" now has a plan to block the $1.1 billion sale of the .org internet domain registry to Ethos Capital.

The group has created their own nonprofit cooperative to offer an alternative: "There needs to be a place on the internet that represents the public interest, where educational sites, humanitarian sites, and organizations like Wikipedia can provide a broader public benefit," said Katherine Maher, the CEO of Wikipedia parent Wikimedia Foundation, who signed on to be a director of the new nonprofit.

The crowd-sourced research tool Wikipedia is the most visited of the 10 million .org sites registered worldwide...

Hundreds of nonprofits have already objected to the transaction, worried that Ethos will raise registration and renewal prices, cut back on infrastructure and security spending, or make deals to sell sensitive data or allow censorship or surveillance... "What offended me about the Ethos Capital deal and the way it unfolded is that it seems to have completely betrayed this concept of stewardship," said Andrew McLaughlin, who oversaw the transfer of internet governance from the U.S. Commerce Department to ICANN, completed in 2016.

Maher and others said the idea of the new cooperative is not to offer a competing financial bid for .org, which brings in roughly $100 million in revenue from domain sales. Instead, they hope that the unusual new entity, formally a California Consumer Cooperative Corporation, can manage the domain for security and stability and make sure it does not become a tool for censorship. The advocacy group Electronic Frontier Foundation (EFF), which previously organized a protest over the .org sale that drew in organizations including the YMCA of the United States, Greenpeace, and Consumer Reports, is also supporting the cooperative.

"It's highly inappropriate for it to be turned over to a commercial venture at all, much less one that's going to need to recover $1 billion," said EFF Executive Director Cindy Cohn.

Electronic Frontier Foundation

Brookline Votes To Ban Face Surveillance (eff.org) 32

The town of Brookline, Massachusetts, became the fifth municipality in the nation to ban its government agencies from using face surveillance. The passage of Article 25 comes as a new study from the National Institute of Standards and Technology (NIST) found that many of the world's top facial recognition algorithms are biased along lines of age, race, and ethnicity. The Electronic Frontier Foundation reports: Brookline joins nearby Somerville as the second Massachusetts municipality to ban face surveillance. The two Metro-Boston area municipalities have chosen to protect their residents now, rather than wait for the passage of state-level protections. Massachusetts is poised to become the first state in the nation to enact a state-level moratorium on all use of the technology.

Brookline's State Senator Cynthia Stone Creem sponsored a bill (S.1385) that would impose a moratorium on government use of the technology throughout the commonwealth. That moratorium would remain in place until state lawmakers enact an authorizing statute that clearly outlines what agencies are permitted to use the technology, requires audits, protects civil liberties, and establishes minimum accuracy rates to prevent disparate impact against women, people with darker skin, and young people. Polling from the ACLU of Massachusetts has indicated high levels of support for the statewide moratorium, with 79 percent of likely Massachusetts voters voting in favor.

Electronic Frontier Foundation

EFF Challenges Ring's Spokesperson Shaq To A Discussion About Police Surveillance (eff.org) 64

Shaquille O'Neal was one of the greatest players in basketball history. He is also a spokesperson for Amazon's Ring security cameras, and the EFF calls him the "one man at Ring who might listen to reason," challenging him to go one-on-one with the EFF's privacy experts: In just a year and a half, Amazon's Ring has set up more than 500 partnerships with law enforcement agencies to convince communities to spy on themselves through doorbell cameras and its social app, Neighbors. The company is moving recklessly fast with little regard for the long-term risks of this mass surveillance technology. These partnerships threaten free speech and the well-being of communities, vastly expand police surveillance, undermine trust between police and residents, and enable racial profiling by exacerbating suspicion and paranoia.

So far, Amazon has not committed to making any changes. But we think one person at Ring might listen: basketball legend Shaquille O'Neal.

Shaq has been a spokesperson and co-owner of Ring since 2016, and has been nearly as much a public face of the company as its CEO, Jamie Siminoff. EFF would like to sit down with Shaq to discuss how Ring's partnerships with police can actually end up harming the communities that the company hopes to keep safe. If we wanted to learn how to dunk, we would go to Shaq. Before he promotes the sale of cameras that surveil neighborhoods indiscriminately, Shaq should come to the experts. Shaq, sit down with us and learn how these partnerships turn our neighborhoods into vast, unaccountable surveillance networks.

The Courts

Pennsylvania Supreme Court Rules Police Can't Force You To Tell Them Your Password (eff.org) 73

An anonymous reader quotes a report from the Electronic Frontier Foundation: The Pennsylvania Supreme Court issued a forceful opinion today holding that the Fifth Amendment to the U.S. Constitution protects individuals from being forced to disclose the passcode to their devices to the police. In a 4-3 decision in Commonwealth v. Davis, the court found that disclosing a password is "testimony" protected by the Fifth Amendment's privilege against self-incrimination. EFF filed an amicus brief in Davis, and we were gratified that the court's opinion closely parallels our arguments. The Fifth Amendment privilege prohibits the government from coercing a confession or forcing a suspect to lead police to incriminating evidence. We argue that unlocking and decrypting a smartphone or computer is the modern equivalent of these forms of self-incrimination.

Crucially, the court held that the narrow "foregone conclusion exception" to the Fifth Amendment does not apply to disclosing passcodes. As described in our brief, this exception applies only when an individual is forced to comply with a subpoena for business records and only when complying with the subpoena does not reveal the "contents of his mind," as the U.S. Supreme Court put it. The Pennsylvania Supreme Court agreed with EFF. It wrote: "Requiring the Commonwealth to do the heavy lifting, indeed, to shoulder the entire load, in building and bringing a criminal case without a defendant's assistance may be inconvenient and even difficult; yet, to apply the foregone conclusion rationale in these circumstances would allow the exception to swallow the constitutional privilege. Nevertheless, this constitutional right is firmly grounded in the 'realization that the privilege, while sometimes a shelter to the guilty, is often a protection to the innocent.'"

Privacy

DHS Will Soon Have Biometric Data On Nearly 260 Million People (qz.com) 40

The U.S. Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency's Office of Procurement Operations reviewed by Quartz. From the report: That's about 40 million more than the agency's 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.

A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world's second-largest, behind only India's countrywide biometric ID network in size. The traveler data kept by DHS is shared with other U.S. agencies, state and local law enforcement, as well as foreign governments.

Crime

Are Amazon's 'Ring' Cameras Exacerbating Societal Inequality? (theatlantic.com) 437

In one of America's top cities for property crime, the Atlantic examines the "porch pirate" of San Francisco's Potrero Hill. It's an 8,000-word long read about how one of the neighborhood's troubled long-time residents "entered a vortex of smart cameras, Nextdoor rants, and cellphone surveillance," in a town where the public hospital she was born in is now named after Mark Zuckerberg.

Her story begins when a 30-something product marketing manager at Google received a notification on his iPhone from his home surveillance camera, sharing a recording of a woman stealing a package from his porch. He cruises the neighborhood, spots her boarding a city bus, and calls 911 to have her arrested. The article notes that 17% of America's homeowners now own a smart video surveillance device. But it also seems to be trying to bring another perspective to "the citizen surveillance facilitated by porch cams and Nextdoor to the benefit of corporations and venture capitalists."

From the article: Under the reasoning that more surveillance improves public safety, over 500 police departments -- including in Houston and a stretch of Los Angeles suburbs -- have partnered with Ring. Many departments advertise rebates for Ring devices on government social-media channels, sometimes offering up to $125. Ring matches the rebate up to $50. Dave Maass, a senior investigative researcher at the Electronic Frontier Foundation, a nonprofit focused on digital civil liberties, said it's unseemly to use taxpayer money to subsidize the build-out of citizen surveillance. Amazon and other moneyed tech companies competing for market share are "enlisting law enforcement to be their sales force, to have the cops give it their imprimatur of credibility," said Maass, a claim echoed in an open letter to government agencies from more than 30 civil-rights organizations this fall and a petition asking Congress to investigate the Ring partnerships. (Ring disputes this characterization....)

In some cities, the relationship between the police and companies has gone beyond marketing. Amazon is helping police departments run "bait box" operations, in which police place decoy boxes on porches -- often with GPS trackers inside -- to capture anyone who tries to steal them... Amazon sent police free branded boxes, and even heat maps of areas where the company's customers suffer the most thefts...

Stings and porch-pirate footage attract media attention -- but what comes next for the thieves rarely gets the same limelight. Often, perpetrators face punishments whose scale might surprise the amateur smart-cam detectives and Nextdoor sleuths who help nail them... In December, the U.S. attorney for the Eastern District of Arkansas announced an enforcement campaign called Operation Porch Pirate. Two suspects were arrested and charged with federal mail theft. One pleaded guilty to stealing $170.42 worth of goods, including camouflage crew socks and a Call of Duty video game from Amazon, and was sentenced to 14 months of probation. Another pleaded guilty to possession of stolen mail -- four packages, two from Amazon -- and faces up to five years in prison and a $250,000 fine at sentencing...

While porch cams have been used to investigate cases as serious as homicides, the surveillance and neighborhood social networking typically make a particular type of crime especially visible: those lower-level ones happening out in public, committed by the poorest. Despite the much higher cost of white-collar crime, it seems to cause less societal hand-wringing than what might be caught on a Ring camera, said W. David Ball, a professor at Santa Clara University School of Law. "Did people really feel that crime was 'out of control' after Theranos?" he said. "People lost hundreds of millions of dollars. You would have to break into every single car in San Francisco for the next ten years to amount to the amount stolen under Theranos."

In the article the EFF's investigative researcher also asks if police end up providing more protection to affluent communities than the ones that can't afford Amazon's Ring cameras. But W. David Ball, the law professor, also asks whether locking up low-level criminals is just ignoring the larger issue of poverty in increasingly expensive cities.

"Everyone assumes that jail works to deter people. But I don't know if I were hungry, and had no other way of eating, that that would deter me from stealing."
Privacy

Berkeley City Council Unanimously Votes To Ban Face Recognition (eff.org) 48

An anonymous reader quotes a report from the Electronic Frontier Foundation: Berkeley has become the third city in California and the fourth city in the United States to ban the use of face recognition technology by the government. After an outpouring of support from the community, the Berkeley City Council voted unanimously to adopt the ordinance introduced by Councilmember Kate Harrison earlier this year. Berkeley joins other Bay Area cities, including San Francisco and Oakland, which also banned government use of face recognition. In July 2019, Somerville, Massachusetts became the first city on the East Coast to ban the government's use of face recognition.

The passage of the ordinance also follows the signing of A.B. 1215, a California state law that places a three-year moratorium on police use of face recognition on body-worn cameras, beginning on January 1, 2020. As EFF's Associate Director of Community Organizing Nathan Sheard told the California Assembly, using face recognition technology "in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion."

The Internet

China's Global Reach: Surveillance and Censorship Beyond the Great Firewall (eff.org) 68

An anonymous reader shares a report: Those outside the People's Republic of China (PRC) are accustomed to thinking of the Internet censorship practices of the Chinese state as primarily domestic, enacted through the so-called "Great Firewall" -- a system of surveillance and blocking technology that prevents Chinese citizens from viewing websites outside the country. The Chinese government's justification for that firewall is based on the concept of "Internet sovereignty." The PRC has long declared that "within Chinese territory, the internet is under the jurisdiction of Chinese sovereignty." Hong Kong, as part of the "one country, two systems" agreement, has largely lived outside that firewall: foreign services like Twitter, Google, and Facebook are available there, and local ISPs have made clear that they will oppose direct state censorship of its open Internet.

But the ongoing Hong Kong protests, and mainland China's pervasive attempts to disrupt and discredit the movement globally, have highlighted that China is not above trying to extend its reach beyond the Great Firewall, and beyond its own borders. In attempting to silence protests that lie outside the Firewall, in full view of the rest of the world, China is showing its hand, and revealing the tools it can use to silence dissent or criticism worldwide. Some of those tools -- such as pressure on private entities, including American organizations like the NBA and Blizzard -- have caught U.S. headlines and outraged customers and employees of those companies. Others have been more technical, and less obvious to Western observers.

Electronic Frontier Foundation

EFF Wins Access To License Plate Reader Data To Study Law Enforcement Use 62

An anonymous reader quotes a report from the Electronic Frontier Foundation: Electronic Frontier Foundation (EFF) and the American Civil Liberties Union Foundation of Southern California (ACLU SoCal) have reached an agreement with Los Angeles law enforcement agencies under which the police and sheriff's departments will turn over license plate data they indiscriminately collected on millions of law-abiding drivers in Southern California. The data, which has been deidentified to protect drivers' privacy, will allow EFF and ACLU SoCal to learn how the agencies are using automated license plate reader (ALPR) systems throughout the city and county of Los Angeles and educate the public on the privacy risks posed by this intrusive technology. A week's worth of data, composed of nearly 3 million data points, will be examined.

ALPR systems include cameras mounted on police cars and at fixed locations that scan every license plate that comes into view -- up to 1,800 plates per minute. They record data on each plate, including the precise time, date, and place it was encountered. The two Los Angeles agencies scan about 3 million plates every week and store the data for years at a time. Using this data, police can learn where we were in the past and infer intimate details of our daily lives such as where we work and live, who our friends are, what religious or political activities we attend, and much more. EFF and ACLU SoCal reached the agreement with the Los Angeles Police and Sheriff's Departments after winning a precedent-setting decision in 2017 from the California Supreme Court in our public records lawsuit against the two agencies. The court held that the data are not investigative records under the California Public Records Act that law enforcement can keep secret.
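
The inference risk described here follows directly from the shape of the data: each scan is just a plate, a timestamp, and a camera location, and repeated sightings reveal routine. A minimal Python sketch (with entirely made-up plates and locations) shows how even a few days of scans expose a driver's daily pattern:

```python
from collections import Counter
from datetime import datetime

# Hypothetical ALPR records: (plate, timestamp, camera location).
# Real agency data has more fields; this is an illustrative sketch.
scans = [
    ("7ABC123", datetime(2019, 11, 4, 8, 15), "Main St & 1st Ave"),
    ("7ABC123", datetime(2019, 11, 4, 18, 40), "Elm St & 9th Ave"),
    ("7ABC123", datetime(2019, 11, 5, 8, 10), "Main St & 1st Ave"),
    ("7ABC123", datetime(2019, 11, 5, 18, 55), "Elm St & 9th Ave"),
    ("7ABC123", datetime(2019, 11, 6, 8, 20), "Main St & 1st Ave"),
]

def infer_patterns(scans, plate):
    """Count where a given plate is seen, split by time of day."""
    morning, evening = Counter(), Counter()
    for p, ts, loc in scans:
        if p != plate:
            continue
        (morning if ts.hour < 12 else evening)[loc] += 1
    return morning.most_common(1), evening.most_common(1)

am, pm = infer_patterns(scans, "7ABC123")
print("Most frequent morning sighting:", am)
print("Most frequent evening sighting:", pm)
```

With millions of real scans per week, the same counting logic scales to pattern-of-life analysis across an entire city.
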
"After six years of litigation, EFF and ACLU SoCal are finally getting access to millions of ALPR scans that will shed light on how the technology is being used, where it's being used, and how it affects people's privacy," said EFF Surveillance Litigation Director Jennifer Lynch. "We persevered and won a tough battle against law enforcement agencies that wanted to keep this information from the public. We have a right to information about how government agencies are using high-tech systems to track our locations, surveil our neighborhoods, and collect private information without our knowledge and consent."
AT&T

AT&T Says Customers Can't Sue the Company For Selling Location Data To Bounty Hunters (vice.com) 94

An anonymous reader quotes a report from Motherboard: AT&T is arguing that its customers can't sue the company for selling location data to bounty hunters, according to recently filed court records. The issue centers on mandatory arbitration: AT&T says the customers signed contracts that force them into arbitration, meaning consumers have to settle complaints privately with the company rather than in court. The filing is in response to a lawsuit filed by the Electronic Frontier Foundation (EFF).

"Each time they entered into a new Wireless Customer Agreement with AT&T, they [the plaintiffs] not only agreed to AT&T's Privacy Policy but also agreed to resolve their disputes with AT&T -- including the claims asserted in this action -- in arbitration on an individual basis," AT&T's filing from last week reads. When the plaintiffs, who are AT&T customers, accepted AT&T's terms and conditions when, say, purchasing a new phone, they also agreed specifically to the arbitration clause, AT&T argues. The Arbitration Agreement on AT&T's website reads, "AT&T and you agree to arbitrate all disputes and claims between us. This agreement to arbitrate is intended to be broadly interpreted."
The class-action lawsuit comes after multiple investigations found that T-Mobile, Sprint, and AT&T were selling access to their customers' location data to bounty hunters and others not authorized to possess it. All of the telecom giants have since stopped selling the data, but that hasn't stopped lawyers from filing class-action lawsuits.

Facebook

Facebook Accused of 'Deliberately Vague' Announcement About Face Recognition (eff.org) 30

Facebook is "bringing" facial recognition to all users, the company announced Tuesday. But the EFF's surveillance litigation director and a senior staff attorney warn that despite media reports, Facebook's announcement "definitely does not say that face recognition is now opt-in for all users." Throughout Facebook's deliberately vague announcement, it takes great pains to note that the change applies only to new Facebook users and people who currently have the "tag suggestions" setting. However, Facebook migrated many, if not most, existing users from "tag suggestions" to "face recognition" in December 2017... That means this safeguard does not apply to the billions of current Facebook users who have already been moved...

Facebook should not subject any of its users to face surveillance, absent their informed opt-in consent. And Facebook should clear up the uncertainties in its announcement before it gets any more credit than it's due for this change.

Facebook's announcement didn't even include links to the "Settings" menu where users can opt out of Facebook's facial recognition, so the EFF's article helpfully provides both mobile and desktop links. According to Facebook's own help pages, the left-side menu should include a "Face Recognition" choice where users can turn off Facebook's face recognition features.

But three different Facebook users I know have also reported that that menu choice just isn't there...

Google

EFF Warns: 'Don't Play in Google's Privacy Sandbox' (eff.org) 52

An EFF analysis looks at the problems with some of Google's new "Privacy Sandbox" proposals, a few of which it calls "downright dangerous": Perhaps the most fleshed-out proposal in the Sandbox is the conversion measurement API. This is trying to tackle a problem as old as online ads: how can you know whether the people clicking on an ad ultimately buy the product it advertised...? Google's ID field can contain 64 bits of information -- a number between 1 and 18 quintillion. This will allow advertisers to attach a unique ID to each and every ad impression they serve, and, potentially, to connect ad conversions with individual users. If a user interacts with multiple ads from the same advertiser around the web, these IDs can help the advertiser build a profile of the user's browsing habits.
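
A rough Python sketch of the concern (the names and flow here are illustrative, not Google's actual API): 64 bits is enough to give every single ad impression its own identifier, so a later "conversion report" carrying that identifier points straight back at one viewer.

```python
import secrets

# 64 bits spans about 1.8e19 values -- the "~18 quintillion" above --
# far more than enough for one unique ID per impression ever served.
MAX_64_BIT = 2**64

impression_log = {}  # imp_id -> profile the advertiser already holds

def serve_ad(user):
    """Serve an ad and remember exactly which user saw it."""
    imp_id = secrets.randbelow(MAX_64_BIT)
    impression_log[imp_id] = user
    return imp_id

def report_conversion(imp_id):
    """The nominally anonymous conversion report re-identifies the user."""
    return impression_log.get(imp_id)

iid = serve_ad("user-alice")
print("Conversion traced to:", report_conversion(iid))
```

Because the ID space is so large, collisions are effectively impossible, which is exactly what makes each report individually identifying rather than aggregate.
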

Even worse is Google's proposal for Federated Learning of Cohorts (or "FLoC").... FLoC would use Chrome users' browsing history to do clustering. At a high level, it will study browsing patterns and generate groups of similar users, then assign each user to a group (called a "flock"). At the end of the process, each browser will receive a "flock name" which identifies it as a certain kind of web user. In Google's proposal, users would then share their flock name, as an HTTP header, with everyone they interact with on the web. This is, in a word, bad for privacy. A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate...
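
The mechanism can be sketched in a few lines of Python. This is a toy stand-in (a hash where Google's proposal used clustering, and the header name below is invented), but it shows how a short label derived from browsing history would follow a user to every site they visit:

```python
import hashlib

def flock_name(history, n_flocks=256):
    """Toy cohort assignment: reduce a browsing history to a small label.
    Google proposed actual clustering (e.g. over browsing patterns);
    a hash of the sorted history stands in here for illustration."""
    digest = hashlib.sha256("|".join(sorted(history)).encode()).digest()
    return digest[0] % n_flocks

alice = ["news.example", "knitting.example", "bank.example"]
bob   = ["news.example", "knitting.example", "bank.example"]

# Identical habits -> identical label, broadcast to every site visited
# via an HTTP header (header name below is hypothetical):
print(f"Sec-CH-Flock: {flock_name(alice)}")
```

The privacy problem is visible even in the toy: the label is small, stable, and shared with every party on the web, so any site can use it as the "behavioral credit score" the EFF describes.
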

If the Privacy Sandbox won't actually help users, why is Google proposing all these changes? Google can probably see which way the wind is blowing. Safari's Intelligent Tracking Prevention and Firefox's Enhanced Tracking Protection have severely curtailed third-party trackers' access to data. Meanwhile, users and lawmakers continue to demand stronger privacy protections from Big Tech. While Chrome still dominates the browser market, Google might suspect that the days of unlimited access to third-party cookies are numbered. As a result, Google has apparently decided to defend its business model on two fronts. First, it's continuing to argue that third-party cookies are actually fine, and companies like Apple and Mozilla who would restrict trackers' access to user data will end up harming user privacy. This argument is absurd. But unfortunately, as long as Chrome remains the most popular browser in the world, Google will be able to single-handedly dictate whether cookies remain a viable option for tracking most users.

At the same time, Google seems to be hedging its bets. The "Privacy Sandbox" proposals for conversion measurement, FLoC, and PIGIN are each aimed at replacing one of the existing ways that third-party cookies are used for targeted ads. Google is brainstorming ways to continue serving targeted ads in a post-third-party-cookie world. If cookies go the way of the pop-up ad, Google's targeting business will continue as usual.

The Sandbox isn't about your privacy. It's about Google's bottom line. At the end of the day, Google is an advertising company that happens to make a browser.

The Courts

Judges Begin Ruling Against Some Porn Purveyors' Use of Copyright Lawsuits (bloombergquint.com) 39

Slashdot reader pgmrdlm quotes Bloomberg: Pornography producers and sellers account for the lion's share of copyright-infringement lawsuits in the U.S. -- and judges may have seen enough. The courts are cracking down on porn vendors that file thousands of lawsuits against people for downloading and trading racy films on home computers, using tactics a judge called a "high tech shakedown." [Alternate link here.] In one case, two men were jailed in a scheme that netted $6 million in settlements.

The pornography companies have "a business model that seeks to profit from litigation and threats of litigation rather than profiting from creative works," said Mitch Stoltz, a senior attorney with the Electronic Frontier Foundation, a San Francisco group that has waged a campaign against companies it thinks abuse the copyright system.

Two companies that make and sell porn are responsible for almost half of the 3,404 copyright lawsuits filed in the U.S. in the first seven months of this year, according to an analysis by Bloomberg Law's Tommy Shen... The companies say they are protecting their movies from piracy and infringement under U.S. copyright law, as major movie studios have done for decades, and suggest that the content of their films is the reason for the wrath of the judges. But some of the tactics used in their infringement suits to identify targets and force settlements have critics -- and some jurists -- up in arms and may require congressional actions to fix.

The suits don't initially name names. They identify the Internet Protocol addresses using peer-to-peer networks like BitTorrent to download or distribute the movies and then file suits against "John Does" and ask the courts to order internet service providers, like Verizon Communications Inc. or Comcast Corp., to identify the account subscribers. Those people are then contacted by the porn company lawyers.
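
The technical step is straightforward because BitTorrent makes peer addresses public by design: trackers commonly answer with a "compact" peer list (BEP 23) of six bytes per peer, a 4-byte IPv4 address followed by a 2-byte big-endian port. A short Python sketch, using documentation-range addresses, decodes such a response the way any swarm participant can:

```python
import socket
import struct

def parse_compact_peers(blob):
    """Decode a BEP 23 compact peer list: 6 bytes per peer
    (4-byte IPv4 address + 2-byte big-endian port)."""
    peers = []
    for off in range(0, len(blob), 6):
        ip = socket.inet_ntoa(blob[off:off + 4])
        (port,) = struct.unpack(">H", blob[off + 4:off + 6])
        peers.append((ip, port))
    return peers

# Two made-up peers from documentation address ranges:
sample = (socket.inet_aton("192.0.2.1") + struct.pack(">H", 6881)
          + socket.inet_aton("198.51.100.7") + struct.pack(">H", 51413))
print(parse_compact_peers(sample))
```

Every IP in that list is a candidate "John Doe," which is why the suits need only a subpoena to the ISP, not any special access, to move from address to subscriber.
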

One lawyer notes that the lawsuits target users in wealthier areas, reports Bloomberg, which adds that in December one district judge even refused to grant the request for identities, ruling that the porn company "treats this court not as a citadel of justice, but as an ATM."

And last month a federal judge cited that ruling when refusing to enter a judgment in another case.

Google

Google's Plans for Chrome Extensions 'Won't Really Help Security', Argues EFF (eff.org) 35

Is Google making the wrong response to the DataSpii report on a "catastrophic data leak"? The EFF writes: In response to questions about DataSpii from Ars Technica, Google officials pointed out that they have "announced technical changes to how extensions work that will mitigate or prevent this behavior." Here, Google is referring to its controversial set of proposed changes to curtail extension capabilities, known as Manifest V3.

As both security experts and the developers of extensions that will be greatly harmed by Manifest V3, we're here to tell you: Google's statement just isn't true. Manifest V3 is a blunt instrument that will do little to improve security while severely limiting future innovation... The only part of Manifest V3 that goes directly to the heart of stopping DataSpii-like abuses is banning remotely hosted code. You can't ensure extensions are what they appear to be if you give them the ability to download new instructions after they're installed.
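
The risk being banned is easy to demonstrate. In this hedged Python sketch (the "download" is simulated with a local string so it runs offline), an extension ships benign code for review, then executes instructions fetched later -- code no reviewer ever saw:

```python
# Sketch of the "remotely hosted code" pattern Manifest V3 bans.
# In a real extension the fetch would hit an attacker's server; the
# payload can change at any time after the extension passes review.
def fetch_remote_script():
    """Simulated network fetch returning new, unreviewed instructions."""
    return "stolen = {'history': ['bank.example/login']}"

namespace = {}
exec(fetch_remote_script(), namespace)  # the reviewer never saw this code
print(namespace["stolen"])
```

This is why banning remote code genuinely addresses DataSpii-style abuse: an extension that cannot load new instructions after install is the extension the reviewer actually examined.
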

But you don't need the rest of Google's proposed API changes to stop this narrow form of bad extension behavior. What Manifest V3 does do is stifle innovation...

The EFF makes the following arguments against Google's proposal:
  • Manifest V3 will still allow extensions to observe the same data as before, including what URLs users visit and the contents of pages users visit.
  • Manifest V3 won't change anything about how "content scripts" work... another way to extract user browsing data.
  • Chrome will still allow users to give extensions permission to run on all sites.

In response Google argued to Forbes that the EFF "fails to account for the proposed changes to how permissions work. It is the combination of these two changes, along with others included in the proposal, that would have prevented or significantly mitigated incidents such as this one."

But the EFF's technology projects director also gave Forbes their response. "We agree that Google isn't killing ad-blockers. But they are killing a wide range of security and privacy enhancing extensions, and so far they haven't justified why that's necessary."

And in the same article, security researcher Sean Wright added that Google's proposed change "appears to do little to prevent rogue extensions from obtaining information from loaded sites, which is certainly a privacy issue and it looks as if the V3 changes don't help."

The EFF suggests Google just do a better job of reviewing extensions.


Facebook

Did WhatsApp Backdoor Rumor Come From 'Unanswered Questions' and 'Leap of Faith' For Closed-Source Encryption Products? (forbes.com) 105

On Friday technologist Bruce Schneier wrote that after reviewing responses from WhatsApp, he's concluded that reports of a pre-encryption backdoor are a false alarm. He also says he got an equally strong confirmation from WhatsApp's Privacy Policy Manager Nate Cardozo, who Facebook hired last December from the EFF. "He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this."

Schneier has also added the words "This story is wrong" to his original blog post. "The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference." But that Forbes contributor has also responded, saying that he'd first asked Facebook three times about when they'd deploy the backdoor in WhatsApp -- and never received a response.

Asked again on July 25th about the company's plans for "moderating end to end encrypted conversations such as WhatsApp by using on device algorithms," a company spokesperson did not dispute the statement, instead pointing to Zuckerberg's blog post calling for precisely such filtering in its end-to-end encrypted products including WhatsApp [apparently this blog post], but declined to comment when asked for more detail about precisely when such an integration might happen... [T]here are myriad unanswered questions, with the company declining to answer any of the questions posed to it regarding why it is investing in building a technology that appears to serve little purpose outside filtering end-to-end encrypted communications and which so precisely matches Zuckerberg's call. Moreover, beyond its F8 presentation, given Zuckerberg's call for filtering of its end-to-end encrypted products, how does the company plan on accomplishing this apparent contradiction with the very meaning of end-to-end encryption?

The company's lack of transparency and unwillingness to answer even the most basic questions about how it plans to balance the protections of end-to-end encryption in its products including WhatsApp with the need to eliminate illegal content reminds us of the giant leap of faith we take when we use closed encryption products whose source we cannot review... Governments are increasingly demanding some kind of compromise regarding end-to-end encryption that would permit them to prevent such tools from being used to conduct illegal activity. What would happen if WhatsApp were to receive a lawful court order from a government instructing it to insert such content moderation within the WhatsApp client and provide real-time notification to the government of posts that match the filter, along with a copy of the offending content?

Asked about this scenario, Carl Woog, Director of Communications for WhatsApp, stated that he was not aware of any such cases to date and noted that "we've repeatedly defended end-to-end encryption before the courts, most notably in Brazil." When it was noted that the Brazilian case involved the encryption itself, rather than a court order to install a real-time filter and bypass directly within the client before and after the encryption process at national scale, which would preserve the encryption, Woog initially said he would look into providing a response, but ultimately did not respond.

Given Zuckerberg's call for moderation of the company's end-to-end encryption products and given that Facebook's on-device content moderation appears to answer directly to this call, Woog was asked whether its on-device moderation might be applied in future to its other end-to-end encrypted products rather than WhatsApp. After initially saying he would look into providing a response, Woog ultimately did not respond.

Here are the exact words from Zuckerberg's March blog post. It said Facebook is "working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can't see the content of the messages, and we will continue to invest in this work."

Electronic Frontier Foundation

EFF Warns Proposed Law Could Create 'Life-Altering' Copyright Lawsuits (forbes.com) 117

Forbes reports: In July, members of the federal Senate Judiciary Committee chose to move forward with a bill targeting copyright abuse with a more streamlined way to collect damages, but critics say that it could still allow big online players to push smaller ones around -- and even into bankruptcy.

Known as the Copyright Alternative in Small-Claims Enforcement (or CASE) Act, the bill was reintroduced in the House and Senate this spring by a roster of bipartisan lawmakers, with endorsements from such groups as the Copyright Alliance and the Graphic Artists' Guild. Under the bill, the U.S. Copyright Office would establish a new 'small claims-style' system for seeking damages, overseen by a three-person Copyright Claims Board. Owners of digital content who see that content used without permission would be able to file a claim for damages up to $15,000 for each work infringed, and $30,000 in total, if they registered their content with the Copyright Office, or half those amounts if they did not.

"Easy $5,000 copyright infringement tickets won't fix copyright law," argues the EFF, in an article shared by long-time Slashdot reader SonicSpike: The bill would supercharge a "copyright troll" industry dedicated to filing as many "small claims" on as many Internet users as possible in order to make money through the bill's statutory damages provisions. Every single person who uses the Internet and regularly interacts with copyrighted works (that's everyone) should contact their Senators to oppose this bill...

[I]f Congress passes this bill, the timely registration requirement will no longer be a requirement for no-proof statutory damages of up to $7,500 per work. In other words, nearly every photo, video, or bit of text on the Internet can suddenly carry a $7,500 price tag if uploaded, downloaded, or shared even if the actual harm from that copying is nil. For many Americans, where the median income is $57,652 per year, this $7,500 price tag for what has become regular Internet behavior would result in life-altering lawsuits from copyright trolls that will exploit this new law.

Facebook

Facebook Insists No Security 'Backdoor' Is Planned for WhatsApp (medium.com) 56

An anonymous reader shares a report: Billions of people use the messaging tool WhatsApp, which added end-to-end encryption for every form of communication available on its platform back in 2016. This ensures that conversations between users and their contacts -- whether they occur via text or voice calls -- are private, inaccessible even to the company itself. But several recent posts published to Forbes' blogging platform call WhatsApp's future security into question. The posts, which were written by contributor Kalev Leetaru, allege that Facebook, WhatsApp's parent company, plans to detect abuse by implementing a feature to scan messages directly on people's phones before they are encrypted. The posts gained significant attention: A blog post by technologist Bruce Schneier rehashing one of the Forbes posts has the headline "Facebook Plans on Backdooring WhatsApp." It is a claim Facebook unequivocally denies.

"We haven't added a backdoor to WhatsApp," Will Cathcart, WhatsApp's vice president of product management, wrote in a statement. "To be crystal clear, we have not done this, have zero plans to do so, and if we ever did, it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise, which is why we are opposed to it."

UPDATE: Later Friday technologist Bruce Schneier wrote that after reviewing responses from WhatsApp, he's concluded that reports of a pre-encryption backdoor are a false alarm. He also says he got an equally strong confirmation from WhatsApp's Privacy Policy Manager Nate Cardozo, who Facebook hired last December from EFF. "He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this."

Encryption

Is Facebook Planning on Backdooring WhatsApp? (schneier.com) 131

Bruce Schneier: This article points out that Facebook's planned content moderation scheme will result in an encryption backdoor into WhatsApp: "In Facebook's vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user's device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted. The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service. Facebook's model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once."

Once this is in place, it's easy for the government to demand that Facebook add another filter -- one that searches for communications that they care about -- and alert them when it gets triggered. Of course alternatives like Signal will exist for those who don't want to be subject to Facebook's content moderation, but what happens when this filtering technology is built into operating systems?
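The architecture the article describes — a locally run, centrally updated filter that scans cleartext before encryption and reports matches upstream — can be sketched in a few lines. This is a hypothetical illustration of the concept only, not WhatsApp code; the blocklist, the `report_to_server` hook, and the `encrypt` stub are all invented for the example:

```python
import hashlib

# Hypothetical on-device filter: the cleartext is inspected *before*
# encryption, so the wire protocol stays end-to-end encrypted while the
# client itself acts as the bypass.
BLOCKLIST_HASHES = {hashlib.sha256(b"banned phrase").hexdigest()}


def report_to_server(message: str) -> None:
    # Stand-in for quietly streaming flagged cleartext to a central server.
    print(f"reported: {message!r}")


def scan_before_encrypt(message: str) -> bool:
    """Return True if the message matches the locally updated blocklist."""
    digest = hashlib.sha256(message.encode()).hexdigest()
    return digest in BLOCKLIST_HASHES


def send(message: str, encrypt) -> bytes:
    if scan_before_encrypt(message):
        report_to_server(message)  # the wiretap step: happens pre-encryption
    return encrypt(message)        # ciphertext is still opaque in transit
```

The point of the sketch is that nothing in the encryption protocol changes — which is exactly why Schneier argues the scheme "bypasses the encryption debate": the compromise lives entirely in the client.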
Separately, The Guardian reports: British, American and other intelligence agencies from English-speaking countries have concluded a two-day meeting in London amid calls for spies and police officers to be given special, backdoor access to WhatsApp and other encrypted communications. The meeting of the "Five Eyes" nations -- the UK, US, Australia, Canada and New Zealand -- was hosted by the new home secretary, Priti Patel, in an effort to coordinate efforts to combat terrorism and child abuse.
UPDATE 8/2/2019: On Friday technologist Bruce Schneier wrote that after reviewing responses from WhatsApp, he's concluded that reports of a pre-encryption backdoor are a false alarm. He also says he got an equally strong confirmation from WhatsApp's Privacy Policy Manager Nate Cardozo, who Facebook hired last December from EFF. "He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this."

Electronic Frontier Foundation

EFF Argues For 'Empowerment, Not Censorship' Online (eff.org) 62

An activism director and a legislative analyst at the EFF have co-authored an essay arguing that the key to children's safety online "is user empowerment, not censorship," reporting on a recent hearing by the U.S. Senate's Judiciary Committee: While children do face problems online, some committee members seemed bent on using those problems as an excuse to censor the Internet and undermine the legal protections for free expression that we all rely on, including kids. Don't censor users; empower them to choose... [W]hen lawmakers give online platforms the impossible task of ensuring that every post meets a certain standard, those companies have little choice but to over-censor.

During the hearing, Stephen Balkam of the Family Online Safety Institute provided an astute counterpoint to the calls for a more highly filtered Internet, calling to move the discussion "from protection to empowerment." In other words, tech companies ought to give users more control over their online experience rather than forcing all of their users into an increasingly sanitized web. We agree.

It's foolish to think that one set of standards would be appropriate for all children, let alone all Internet users. But today, social media companies frequently make censorship decisions that affect everyone. Instead, companies should empower users to make their own decisions about what they see online by letting them calibrate and customize the content filtering methods those companies use. Furthermore, tech and media companies shouldn't abuse copyright and other laws to prevent third parties from offering customization options to people who want them.

The essay also argues that Congress "should closely examine companies whose business models rely on collecting, using, and selling children's personal information..."

"We've highlighted numerous examples of students effectively being forced to share data with Google through the free or low-cost cloud services and Chromebooks it provides to cash-strapped schools. We filed a complaint with the FTC in 2015 asking it to investigate Google's student data practices, but the agency never responded."
