Electronic Frontier Foundation

EFF Is Leaving X (eff.org) 187

After nearly 20 years on the platform, The Electronic Frontier Foundation (EFF) says it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. "The math hasn't worked out for a while now." From the report: We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. [...]
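
EFF's "less than 3%" figure holds up on a back-of-the-envelope basis. Here's a quick sanity check in Python, taking midpoints of the ranges quoted above (the midpoint choices are my own assumption):

    # Sanity check of EFF's "less than 3%" claim, using midpoints
    # of the figures quoted above (all numbers approximate).
    tweets_per_month_2018 = 7.5 * 30        # "five to ten times a day"
    impressions_per_month_2018 = 75e6       # midpoint of "50 and 100 million"
    per_tweet_2018 = impressions_per_month_2018 / tweets_per_month_2018

    posts_2025 = 1500                       # "our 1,500 posts"
    impressions_2025 = 13e6                 # "roughly 13 million ... entire year"
    per_post_2025 = impressions_2025 / posts_2025

    print(f"2018: ~{per_tweet_2018:,.0f} impressions per tweet")   # ~333,333
    print(f"2025: ~{per_post_2025:,.0f} impressions per post")     # ~8,667
    print(f"ratio: {per_post_2025 / per_tweet_2018:.1%}")          # ~2.6%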

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

Electronic Frontier Foundation

EFF Tells Publishers: Blocking the Internet Archive Won't Stop AI, But It Will Erase The Historical Record (eff.org) 27

"Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper," writes EFF senior policy analyst Joe Mullin.

"That's effectively what's begun happening online in the last few months." The Internet Archive — the world's largest digital library — has preserved newspapers since it went online in the mid-1990s... But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web's traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit...

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several — including the Times — are now suing AI companies over whether training models on copyrighted material violates the law. There's a strong case that such training is fair use. Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response.

Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn't start, and didn't ask for. If publishers shut the Archive out, they aren't just limiting bots. They're erasing the historical record...

Even if courts place limits on AI training, the law protecting search and web archiving is already well established... There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake.

Android

Android, Epic, and What's Really Behind Google's 'Existential' Threat to F-Droid (thenewstack.io) 53

Starting in September, even Android developers not in Google's Play Store will still be required to register with Google to distribute their apps in Brazil, Singapore, Indonesia, and Thailand, with Google continuing "to roll out these requirements globally" four months later. Even developers distributing Android apps on the web for sideloading will be required to register, pay Google a $25 fee, and provide a government ID.

But an unnamed source in the "Keep Android Open" movement has a new theory about what's secretly been motivating Google, writes long-time Slashdot reader destinyland: "You can't separate this really from their ongoing interactions with Epic and the settlement that they came to," they argue. Twelve days ago Epic Games and Google announced a new proposal for settling their long-running dispute over the legality of alternative app stores on Android phones. (Rather than agreeing to let third-party app stores into their Play Store, Google wants them to continue being sideloaded, promising in a blog post last week that they'll even offer a "more streamlined" and "simplified" sideloading alternative for rival app stores. "This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.")

So "developer verification" could be Google's fallback plan if U.S. courts fail to approve this. "If the Google Play Store has to allow any third-party repository app store, Google essentially has given up all control of the apps. But if they're able to claw back that control by requiring that all developers, no matter how they distribute their apps, have to register with Google — have to agree to their Terms & Conditions, pay them money, provide identification — then they have a large degree of indirect control over any app that can be developed for the entire platform."

But that plan threatens millions of people using the alternative F/OSS app distributor F-Droid, since Google also wants to have only one signature attached to Android apps. Marc Prud'hommeaux, a member of F-Droid's board of directors, says that requirement "all of a sudden breaks all those versions of the application distributed through F-Droid or any other app store!"

Prud'hommeaux says they've told Google's Android team "You know perfectly well that you're killing F-Droid!" creating an "existential" threat to an app distributor "that has existed happily for over 10 years." But good things started happening when he created the website Keep Android Open: There's now a "huge backlog" of signers for an Open Letter that already includes EFF, the Software Freedom Conservancy, and the Free Software Foundation. He believes Android's existing Play Protect security "is completely sufficient to handle the particular scenarios they claim that developer verification is meant to address"...

The Keep Android Open site urges developers not to sign up for Android's early access program when it launches next week. (Instead, they're asking developers to respond to invites with an email about their concerns — and to spread the word to other developers and organizations in forums and social media posts.) There's also a petition at Change.org currently signed by 64,000 developers — adding 20,000 new signatures in the last 10 days. And "If you have an Android device, try installing F-Droid!" he adds. Google tracks how many people install these alternative app repositories, and a larger user base means greater consequences from any Android policy changes.

Plus, installing F-Droid "might be refreshing!" Prud'hommeaux says. "You don't see all the advertisements and promotions and scam and crapware stuff that you see in the commercial app stores!"

Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com) 168

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
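
To make the idea concrete, here is a hypothetical sketch of how an application might consume such an interface via dbus-python. The bus name, object path, and IsOfAge() method are inventions for illustration; the mailing-list proposal names the interface but does not pin these details down:

    # Hypothetical sketch: querying the proposed
    # org.freedesktop.AgeVerification1 D-Bus interface.
    # The bus name, object path, and IsOfAge() method are invented
    # for illustration; the proposal doesn't specify them.
    import dbus

    bus = dbus.SessionBus()
    proxy = bus.get_object(
        "org.freedesktop.AgeVerification",      # hypothetical bus name
        "/org/freedesktop/AgeVerification",     # hypothetical object path
    )
    iface = dbus.Interface(proxy, dbus_interface="org.freedesktop.AgeVerification1")

    # The appeal of this design: the app gets a yes/no answer to
    # "is this user at least 18?" -- never a birthdate or identity.
    if iface.IsOfAge(18):
        print("age-restricted content may be shown")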

Similar talks are underway in the Fedora and Linux Mint communities in case the California Digital Age Assurance Act and similar laws from other states and countries are enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

Open Source

Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification' (forbrukerradet.no) 69

The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But they've also dramatized the problem with a funny four-minute video about a man whose boss calls for him to make things shitty for people.

"It's not just your imagination. Digital services are getting worse," the video concludes — before adding that "Luckily, it doesn't have to be this way." The Consumer Council's announcement recommends:
  • Stronger rights for consumers to control, adapt, repair, and alter their products and services,
  • Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible,
  • Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups, competitors or otherwise steer the market to their advantage,
  • Better financing of initiatives to build, maintain or improve alternative digital services and infrastructure based on open source code and open protocols,
  • Reduced public-sector dependence on Big Tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights,
  • Deterrent and consistent enforcement of other laws, including consumer and data protection law.

The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power by devoting resources to enforcement and by prioritizing the procurement of services based on open source code. And "Our sister organisations are sending similar letters to their own governments in 12 countries."

They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech.

Thanks to Slashdot reader DeanonymizedCoward for sharing the news.


Privacy

With Ring, American Consumers Built a Surveillance Dragnet (404media.co) 71

Ring's Super Bowl ad on Sunday promoted "Search Party," a feature that lets a user post a photo of a missing dog in the Ring app, triggering outdoor Ring cameras across the neighborhood to use AI to scan for a match. 404 Media argues the cheerful premise obscures what the Amazon-owned company has become: a massive, consumer-deployed surveillance network.

Ring founder Jamie Siminoff, who left in 2023 and returned last year, has since moved to re-establish police partnerships and push more AI into Ring cameras. The company has also partnered with Flock, a surveillance firm used by thousands of police departments, and launched a beta feature called "Familiar Faces" that identifies known people at your door. Chris Gilliard, author of the upcoming book Luxury Surveillance, called the ad "a clumsy attempt by Ring to put a cuddly face on a rather dystopian reality: widespread networked surveillance by a company that has cozy relationships with law enforcement."

Further reading: No One, Including Our Furry Friends, Will Be Safer in Ring's Surveillance Nightmare, EFF Says

Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech 53

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.

Open Source

Cory Doctorow: Legalising Reverse Engineering Could End 'Enshittification' (theguardian.com) 90

Scifi author/tech activist Cory Doctorow has decried the "enshittification" of our technologies, degraded by their makers to extract more profit. But Saturday he also described what could be "the beginning of the end for enshittification" in a new article for the Guardian — "our chance to make tech good again". There is only one reason the world isn't bursting with wildly profitable products and projects that disenshittify the US's defective products: its (former) trading partners were bullied into passing an "anti-circumvention" law that bans the kind of reverse-engineering that is the necessary prelude to modifying an existing product to make it work better for its users (at the expense of its manufacturer)...

Post-Brexit, the UK is uniquely able to seize this moment. Unlike our European cousins, we needn't wait for the copyright directive to be repealed before we can strike article 6 off our own law books and thereby salvage something good out of Brexit... Until we repeal the anti-circumvention law, we can't reverse-engineer the US's cloud software, whether it's a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours — say, the solar inverters and batteries we buy from China.

This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks. Admittedly, the door is only open a crack, but it's been locked tight since the turn of the century. When it comes to a better technology future, "open a crack" is the most exciting proposition I've heard in decades.

Thanks to Slashdot reader Bruce66423 for sharing the article.

Government

More US States Are Preparing Age-Verification Laws for App Stores (politico.com) 57

Yes, a federal judge blocked Texas's attempt at an app store age-verification law. But this year Silicon Valley giants including Google and Apple "are expected to fight hard against similar legislation," reports Politico, "because of the vast legal liability it imposes on app stores and developers." In Texas, Utah and Louisiana, parent advocates have linked up with conservative "pro-family" groups to pass laws forcing mobile app stores to verify user ages and require parental sign-off. If those rules hold up in court, companies like Google and Apple, which run the two largest app stores, would face massive legal liability... California has taken a different approach, passing its own age-verification law last year that puts liability on device manufacturers instead of app stores. That model has been better received by the tech lobby, and is now competing with the app-based approach in states like Ohio. In Washington D.C., a GOP-led bill modeled off of Texas' law is wending its way through Capitol Hill. And more states are expected to join the fray, including Michigan and South Carolina.

Joel Thayer, president of the conservative Digital Progress Institute and a key architect of the Texas law, said states are only accelerating their push. He explicitly linked the age-verification debate to AI, arguing it's "terrifying" to think companies could build new AI products by scraping data from children's apps. Thayer also pointed to the Trump administration's recent executive order aimed at curbing state regulation of AI, saying it has galvanized lawmakers. "We're gonna see more states pushing this stuff," Thayer said. "What really put fuel in the fire is the AI moratorium for states. I think states have been reinvigorated to fight back on this."

He told Politico that the issue will likely be decided by America's Supreme Court, which in June upheld Texas legislation requiring age verification for online content. Thayer said states need a ruling from America's highest court to "triangulate exactly what the eff is going on with the First Amendment in the tech world.

"They're going to have to resolve the question at some point."

Technology

CES Worst In Show Awards Call Out the Tech Making Things Worse (ifixit.com) 41

Longtime Slashdot reader chicksdaddy writes: CES, the Consumer Electronics Show, isn't just about shiny new gadgets. As AP reports, this year brought back the fifth annual Worst in Show anti-awards, calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards -- including Repair.org, iFixit, EFF, PIRG, Secure Repairs, and others -- put the spotlight on products that miss the point of innovation and make life worse for users.

2026 Worst in Show winners include:

Overall (and Repairability): Samsung's AI-packed Family Hub Fridge -- over-engineered, hard to fix, and trying to do everything but keep food cold.
Privacy: Amazon Ring AI -- expanding surveillance with features like facial recognition and mobile towers.
Security: Merach UltraTread treadmill -- an AI fitness coach that also hoovers up sensitive data with weak security guarantees, including a privacy policy that declares the company "cannot guarantee the security of your personal information" (!!).
Environmental Impact: Lollipop Star -- a single-use, music-playing electronic lollipop that epitomizes needless e-waste.
Enshittification: Bosch eBike Flow App -- pushing lock-in and digital restrictions that make gear worse over time.
"Who Asked For This?": Bosch Personal AI Barista -- a voice-assistant coffee maker that nobody really wanted.
People's Choice: Lepro Ami AI Companion -- an overhyped "soulmate" cam that creeps more than it comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window -- and the industry's watchdogs are calling them out.

Open Source

Up Next for Arduino After Qualcomm Acquisition: High-Performance Computing (eetimes.com) 26

The EFF believes that, even after its acquisition by Qualcomm, Arduino "isn't imposing any new bans on tinkering with or reverse engineering Arduino boards" (according to Mitch Stoltz, EFF director for competition and IP litigation). While Adafruit's managing editor Phillip Torrone had claimed to 36,000+ followers on LinkedIn that Arduino users were now "explicitly forbidden from reverse engineering," Arduino corrected him in a blog post, noting that the clause in their Terms & Conditions applied only to Arduino's Software-as-a-Service cloud applications. "Anything that was open, stays open."

And this week EE Times spoke to Guneet Bedi, SVP of Arduino, "who was unequivocal in saying that Arduino's governance structure had remained intact even after the acquisition." "As a business unit within Qualcomm, Arduino continues to make independent decisions on its product portfolio, with no direction imposed on where it should or should not go," Bedi said. "Everything that Arduino builds will remain open and openly available to developers, with design engineers, students and makers continuing to be the primary focus.... Developers who had mastered basic embedded workflows were now asking how to run large language models at the edge and work with artificial intelligence for vision and voice, with an open source mindset," he said. According to Bedi, this was where Qualcomm's technology became relevant. "Qualcomm's chipsets are high performance while also being very low power, which comes from their mobile and Android phone heritage. Despite being great technology, it is not easily accessible to design engineers because of cost and complexity. That made this a strong fit," he said.

The most visible outcome of this acquisition is Uno Q, which Bedi described as being comparable to a mid-tier Android phone in capability, starting at a price of $44. For Arduino, this marked a shift beyond microcontrollers without abandoning them. "At the end of the day, we have not gone away from our legacy," Bedi said. "You still have a real-time microcontroller, and you still write code the way Arduino developers are used to. What we added is compute, without forcing people to change how they work." Uno Q combines a Linux-based compute system with a real-time microcontroller from the STM32 family. "You do not need two different development environments or two different hardware platforms," Bedi added... Rather than introducing a customized operating system, Arduino chose standard Debian upstream. "We are not locking developers into anything," Bedi said. "It is standard Debian, completely open...." Pre-built models covering tasks like object detection and voice recognition run locally on the board....

While the first reference design uses Qualcomm silicon, Bedi was careful to stress that this does not define the roadmap. "There is zero dependency on Qualcomm silicon," he said. "The architecture is portable. Tomorrow, we can run this on something else." That distinction matters, particularly for developers wary of vendor lock-in following the acquisition. Uno Q does compete directly with platforms like Raspberry Pi and Nvidia Jetson, but Bedi framed the difference less in terms of raw performance and more in flexibility. "When you build on those platforms, you are locked to the board," he said. "Here, you can build a prototype, and if you like it, you can also get access to the chip and design your own hardware." With built-in storage removing the need for external components, Uno Q positions itself less as a faster board and more as a way to simplify what had become an increasingly messy development stack...

Looking a year ahead, Bedi believes developers should experience continuity rather than disruption. The familiar Arduino approach to embedded and real-time systems remains unchanged, while extending naturally into more compute-intensive applications... Taken together, Bedi's comments suggest that Arduino's post-acquisition direction is less about changing what Arduino is, and more about expanding what it can realistically be used for, without abandoning the simplicity that made it relevant in the first place.

"We want to redefine prototyping in the age of physical artificial intelligence," Bedi said...

Privacy

Inside Uzbekistan's Nationwide License Plate Surveillance System (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: Across Uzbekistan, a network of about a hundred banks of high-resolution roadside cameras continuously scans vehicles' license plates and their occupants, sometimes thousands a day, looking for potential traffic violations. Cars running red lights, drivers not wearing their seatbelts, and unlicensed vehicles driving at night, to name a few. The driver of one of the most surveilled vehicles in the system was tracked over six months as he traveled between the eastern city of Chirchiq, the capital Tashkent, and the nearby settlement of Eshonguzar, often multiple times a week. We know this because the country's sprawling license plate-tracking surveillance system has been left exposed to the internet.

Security researcher Anurag Sen, who discovered the security lapse, found the license plate surveillance system exposed online without a password, allowing anyone access to the data within. It's not clear how long the surveillance system has been public, but artifacts from the system show that its database was set up in September 2024, and traffic monitoring began in mid-2025. The exposure offers a rare glimpse into how such national license plate surveillance systems work, the data they collect, and how they can be used to track the whereabouts of any one of the millions of people across an entire country. The lapse also reveals the security and privacy risks associated with the mass monitoring of vehicles and their owners, at a time when the United States is building up its nationwide array of license plate readers, many of which are provided by surveillance giant Flock.

Crime

Flock Executive Says Their Camera Helped Find Shooting Suspect, Addresses Privacy Concerns (cnn.com) 59

During a search for the Brown shooting suspect, a law enforcement press conference included a request for "Ring camera footage from residents and businesses near Brown University," according to local news reports.

But in the end it was Flock cameras, according to an article in Gizmodo, after a Reddit poster described the "odd" behavior of someone who turned out to be the suspect: The original Reddit poster, identified only as John in the affidavit, contacted police the next day and came in for an interview. He told them about his odd encounter with the suspect, noting that he was acting suspiciously by not having appropriate cold-weather clothes on when he saw him in a bathroom at Brown University. That was two hours before the shooting. After spotting him in the bathroom wearing a mask, John actually started following the suspect in what he called a "game of cat and mouse...." Police detectives showed John two images obtained through Flock, the company that's built extensive surveillance infrastructure across the U.S. used by investigators, and he recognized the suspect's vehicle, replying, "Holy shit. That might be it," according to the affidavit. Police were able to track down the license plate of the rental car, which gave them a name, and within 24 hours, they had found Claudio Manuel Neves Valente dead in a storage facility in Salem, New Hampshire, where he reportedly rented a unit.
"We intend to continue using technology to make sure our law enforcement are empowered to do their jobs," Flock's safety CEO Garrett Langley wrote on X.com, pinning the post to the top of his feed.

Though ironically, hours before Providence Police Chief Oscar Perez credited Flock for helping to find the suspect, CNN was interviewing Flock Safety's CEO to discuss "his response to recent privacy concerns surrounding Flock's technology." To Langley, the situation underscored the value and importance of Flock's technology, despite mounting privacy concerns that have prompted some jurisdictions to cancel contracts with the company... Langley told me on Thursday that he was motivated to start Flock to keep Americans safer. His goal is to deter crime by convincing would-be criminals they'll be caught... One of Flock's cameras had recently spotted [the suspect's] car, helping police pinpoint Valente's location. Flock turned on additional AI capabilities that were not part of Providence Police's contract with the company to assist in the hunt, a company spokesperson told CNN, including a feature that can identify the same vehicle based on its description even if its license plates have been changed.

The company has faced criticism from some privacy advocates and community groups who worry that its networks of cameras are collecting too much personal information from private citizens and could be misused. Both the Electronic Frontier Foundation and the American Civil Liberties Union have urged communities not to work with Flock. "State legislatures and local governments around the nation need to enact strong, meaningful protections of our privacy and way of life against this kind of AI surveillance machinery," ACLU Senior Policy Analyst Jay Stanley wrote in an August blog post. Flock also drew scrutiny in October when it announced a partnership with Amazon's Ring doorbell camera system... ["Local officers using Flock Safety's technology can now post a request directly in the Ring Neighbors app asking for help," explains Flock's blog post.]

Langley told me it was up to police to reassure communities that the cameras would be used responsibly... "If you don't trust law enforcement to do their job, that's actually what you're concerned about, and I'm not going to help people get over that." Langley added that Flock has built some guardrails into its technology, including audit trails that show when data was accessed. He pointed to a case in Georgia where such an audit found a police chief using data from LPR cameras to stalk and harass people. The chief resigned and was arrested and charged in November...

More recently, the company rolled out a "drone as first responder" service — where law enforcement officers can dispatch a drone equipped with a camera, whose footage is similarly searchable via AI, to evaluate the scene of an emergency call before human officers arrive. Flock's drone systems completed 10,000 flights in the third quarter of 2025 alone, according to the company... I asked what he'd tell communities already worried about surveillance from LPRs who might be wary of camera-equipped drones also flying overhead. He said cities can set their own limitations on drone usage, such as only using drones to respond to 911 calls or positioning the drones' cameras on the horizon while flying until they reach the scene. He added that the drones fly at an elevation of 400 feet.

United States

Repeal Section 230 and Its Platform Protections, Urges New Bipartisan US Bill (eff.org) 168

U.S. Senator Sheldon Whitehouse said Friday he was moving to file a bipartisan bill to repeal Section 230 of America's Communications Decency Act.

"The law prevents most civil suits against users or services that are based on what others say," explains an EFF blog post. "Experts argue that a repeal of Section 230 could kill free speech on the internet," writes LiveMint — though America's last two presidents both supported a repeal: During his first presidency, U.S. President Donald Trump called to repeal the law and signed an executive order attempting to curb some of its protections, though it was challenged in court. Subsequently, former President Joe Biden also voiced his opinion against the law.

An EFF blog post explains the case for Section 230: Congress passed this bipartisan legislation because it recognized that promoting more user speech online outweighed potential harms. When harmful speech takes place, it's the speaker that should be held responsible, not the service that hosts the speech... Without Section 230, the Internet is different. In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects. In non-democratic countries, governments can directly censor the internet, controlling the speech of platforms and users. If the law makes us liable for the speech of others, the biggest platforms would likely become locked-down and heavily censored. The next great websites and apps won't even get started, because they'll face overwhelming legal risk to host users' speech.
But "I strongly believe that Section 230 has long outlived its use," Senator Whitehouse said this week, saying Section 230 "a real vessel for evil that needs to come to an end." "The laws that Section 230 protect these big platforms from are very often laws that go back to the common law of England, that we inherited when this country was initially founded. I mean, these are long-lasting, well-tested, important legal constraints that have — they've met the test of time, not by the year or by the decade, but by the century.

"And yet because of this crazy Section 230, these ancient and highly respected doctrines just don't reach these people. And it really makes no sense, that if you're an internet platform you get treated one way; you do the exact same thing and you're a publisher, you get treated a completely different way.

"And so I think that the time has come.... It really makes no sense... [Testimony before the committee] shows how alone and stranded people are when they don't have the chance to even get justice. It's bad enough to have to live through the tragedy... But to be told by a law of Congress, you can't get justice because of the platform — not because the law is wrong, not because the rule is wrong, not because this is anything new — simply because the wrong type of entity created this harm."

Encryption

Cryptologist DJB Criticizes Push to Finalize Non-Hybrid Security for Post-Quantum Cryptography (cr.yp.to) 21

In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National Security Agency (and its UK counterpart GCHQ) were attempting to influence NIST to adopt weaker post-quantum cryptography standards without a "hybrid" approach that would've also included pre-quantum ECC.

Bernstein is of the opinion that "Given how many post-quantum proposals have been broken and the continuing flood of side-channel attacks, any competent engineering evaluation will conclude that the best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as double encryption: post-quantum cryptography on top of ECC." But he says he's seen it playing out differently: By 2013, NSA had a quarter-billion-dollar-a-year budget to "covertly influence and/or overtly leverage" systems to "make the systems in question exploitable"; in particular, to "influence policies, standards and specification for commercial public key technologies". NSA is quietly using stronger cryptography for the data it cares about, but meanwhile is spending money to promote a market for weakened cryptography, the same way that it successfully created decades of security failures by building up the market for, e.g., 40-bit RC4 and 512-bit RSA and Dual EC. I looked concretely at what was happening in IETF's TLS working group, compared to the consensus requirements for standards-development organizations. I reviewed how a call for "adoption" of an NSA-driven specification produced a variety of objections that weren't handled properly. ("Adoption" is a preliminary step before IETF standardization....) On 5 November 2025, the chairs issued "last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday.
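
The "double encryption" Bernstein describes is what TLS drafts call a hybrid key exchange: run an ECC exchange and a post-quantum KEM side by side, then derive the session key from both shared secrets, so an attacker must break both. A minimal sketch of the combiner idea, assuming the real "cryptography" Python package for the ECC half; pq_encapsulate() below is a labeled stand-in for an ML-KEM library, which the standard library lacks:

    # Sketch of a hybrid (ECC + post-quantum) key-exchange combiner,
    # in the spirit of the "double encryption" Bernstein advocates.
    # X25519 and HKDF come from the real "cryptography" package;
    # pq_encapsulate() is a stand-in for a real ML-KEM implementation.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def pq_encapsulate(pq_public_key: bytes) -> tuple[bytes, bytes]:
        """Stand-in for ML-KEM encapsulation -> (ciphertext, shared_secret)."""
        return b"<kem-ciphertext>", os.urandom(32)   # placeholder, NOT a real KEM

    # Classical half: ordinary X25519 Diffie-Hellman.
    client_ecdh = X25519PrivateKey.generate()
    server_ecdh = X25519PrivateKey.generate()
    ecdh_secret = client_ecdh.exchange(server_ecdh.public_key())

    # Post-quantum half: KEM encapsulation against the server's PQ key.
    _ciphertext, pq_secret = pq_encapsulate(b"<server-pq-public-key>")

    # The session key depends on BOTH secrets: breaking ECC alone (with a
    # quantum computer) or the PQ scheme alone (with new cryptanalysis)
    # is not enough to recover it.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None,
        info=b"hybrid-kex-demo",
    ).derive(ecdh_secret + pq_secret)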

Bernstein also shares concerns about how the Internet Engineering Task Force is handling the discussion, and argues that the document is even "out of scope" for the IETF TLS working group: This document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 have been broken already... often with attacks having sufficiently low cost to demonstrate on readily available computer equipment. Further PQ software has been broken by implementation issues such as side-channel attacks.
He's also concerned about how that discussion is being handled: On 17 October 2025, they posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"...

I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days.

Thanks to alanw (Slashdot reader #1,822) for spotting the blog posts.

Electronic Frontier Foundation

Court Ends Dragnet Electricity Surveillance Program in Sacramento (eff.org) 52

A California judge has shut down a decade-long surveillance program in which Sacramento's utility provider shared granular smart-meter data on 650,000 residents with police to hunt for cannabis grows. The EFF reports: The Sacramento County Superior Court ruled that the surveillance program run by the Sacramento Municipal Utility District (SMUD) and police violated a state privacy statute, which bars the disclosure of residents' electrical usage data with narrow exceptions. For more than a decade, SMUD coordinated with the Sacramento Police Department and other law enforcement agencies to sift through the granular smart meter data of residents without suspicion to find evidence of cannabis growing. EFF and its co-counsel represent three petitioners in the case: the Asian American Liberation Network, Khurshid Khoja, and Alfonso Nguyen. They argued that the program created a host of privacy harms -- including criminalizing innocent people, creating menacing encounters with law enforcement, and disproportionately harming the Asian community.

The court ruled that the challenged surveillance program was not part of any traditional law enforcement investigation. Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation. "[T]he process of making regular requests for all customer information in numerous city zip codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation," the court ruled, finding that SMUD violated its "obligations of confidentiality" under a data privacy statute. [...]

In creating and running the dragnet surveillance program, according to the court, SMUD and police "developed a relationship beyond that of utility provider and law enforcement." Multiple times a year, the police asked SMUD to search its entire database of 650,000 customers to identify people who used a large amount of monthly electricity and to analyze granular 1-hour electrical usage data to identify residents with certain electricity "consumption patterns." SMUD passed on more than 33,000 tips about supposedly "high" usage households to police. [...] Going forward, public utilities throughout California should understand that they cannot disclose customers' electricity data to law enforcement without any "evidence to support a suspicion" that a particular crime occurred.
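
To see why such a sweep criminalizes innocent people, note that a threshold query over meter data cannot distinguish a grow operation from any other energy-hungry household. A toy illustration in Python (the column names and 3,000 kWh cutoff are invented; the filings don't disclose SMUD's actual criteria):

    # Toy illustration of the dragnet: flag every household whose
    # monthly usage exceeds a threshold. Column names and the 3,000 kWh
    # cutoff are invented; the filings don't disclose SMUD's criteria.
    import pandas as pd

    meters = pd.DataFrame({
        "customer":    ["A",  "B",   "C",   "D"],
        "monthly_kwh": [650,  3400,  2900,  3100],
    })

    tips = meters[meters["monthly_kwh"] > 3000]   # suspicion by threshold alone
    print(tips)   # flags B and D -- grow op, or EV charger plus heat pump?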

Electronic Frontier Foundation

ACLU and EFF Sue a City Blanketed With Flock Surveillance Cameras (404media.co) 57

An anonymous reader shares a report: Lawyers from the American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) sued the city of San Jose, California over its deployment of Flock's license plate-reading surveillance cameras, claiming that the city's nearly 500 cameras create a pervasive database of residents' movements in a surveillance network that is essentially impossible to avoid.

The lawsuit was filed on behalf of the Services, Immigrant Rights & Education Network and Council on American-Islamic Relations, California, and claims that the surveillance is a violation of California's constitution and its privacy laws. The lawsuit seeks to require police to get a warrant in order to search Flock's license plate system. The lawsuit is one of the highest profile cases challenging Flock; a similar lawsuit in Norfolk, Virginia seeks to get Flock's network shut down in that city altogether.

"San Jose's ALPR [automatic license plate reader] program stands apart in its invasiveness," ACLU of Northern California and EFF lawyers wrote in the lawsuit. "While many California agencies run ALPR systems, few retain the locations of drivers for an entire year like San Jose. Further, it is difficult for most residents of San Jose to get to work, pick up their kids, or obtain medical care without driving, and the City has blanketed its roads with nearly 500 ALPRs."

Privacy

Woman Wrongfully Accused by a License Plate-Reading Camera - Then Exonerated By Camera-Equipped Car (electrek.co) 174

CBS News investigates what happened when police thought they'd tracked down a "porch pirate" who'd stolen a package — and accused an innocent woman.

"You know why I'm here," the police sergeant tells Chrisanna Elser. "You know we have cameras in that town..." "It went right into, 'we have video of you stealing a package,'" Elser said... "Can I see the video?" Elser asked. "If you go to court, you can," the officer replied. "If you're going to deny it, I'm not going to extend you any courtesy...." [You can watch a video of the entire confrontation.] On her doorstep, the officer issued a summons, without ever looking at the surveillance video Elser had. "We can show you exactly where we were," she told him. "I already know where you were," he replied.

Her Rivian — equipped with multiple cameras — had recorded her entire route that day... It took weeks of her collecting her own evidence, building timelines, and submitting videos before someone listened. Finally, she received an email from the Columbine Valley police chief acknowledging her efforts, saying "nicely done btw (by the way)," and informing her the summons would not be filed.

Elser also found the theft video (which the police officer refused to show her) on Nextdoor, reports Electrek. "The woman has the same color hair, but different facial and nose shape and apparent age than Elser, which is all reasonably apparent when viewing the video..."

But Elser does drive a green Rivian truck, which police knew had entered the neighborhood 20 times over the course of a month. (Though in the video the officer is told that a male driver in the same household passes through that neighborhood driving to and from work.) The problem may be their certainty — derived from Flock's network of cameras that automatically read license plates, "tracking movements of vehicles wherever they go..." The system has provoked concern from privacy and freedom focused organizations like the Electronic Frontier Foundation and American Civil Liberties Union. Flock also recently announced a partnership with Ring, seeking to use a network of doorbell cameras to track Americans in even more places.... [The police] didn't even have video of the truck in the area — merely tags of it entering... (it also left the area minutes later, indicating a drive through, rather than crawling through neighborhoods looking for packages — but police neglected to check the exit timestamps)... Elser has asked for an apology for [officer] Milliman's aggressive behavior during the encounter, but has heard nothing back from the department despite a call, email, and physical appearance at the police station.
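
The exit-timestamp point is worth dwelling on, because the check is trivial once both reads are paired. A toy sketch (the timestamps and field layout are invented for illustration):

    # Toy sketch of the check the article says police skipped: pair the
    # ALPR entry/exit reads and compute dwell time. Data is invented.
    from datetime import datetime, timedelta

    reads = [
        ("ENTER", datetime(2025, 11, 3, 14, 2)),
        ("EXIT",  datetime(2025, 11, 3, 14, 9)),   # gone seven minutes later
    ]

    entered = next(t for kind, t in reads if kind == "ENTER")
    exited = next(t for kind, t in reads if kind == "EXIT")
    dwell = exited - entered

    # A few minutes' dwell suggests a drive-through, not a slow crawl
    # past porches looking for packages.
    if dwell < timedelta(minutes=15):
        print(f"pass-through in {dwell}; weak evidence of porch piracy")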

The article points out that Rivian's "Road Cam" feature can be set to record footage of everything happening around it using the car's built-in cameras for driver-assist features. But if you want to record footage all the time, you'll need to plug in a USB-C external drive to store it. (It's ironic how different cameras recorded every part of this story — the theft, the police officer accusing the innocent woman, and that innocent woman's actual whereabouts.)

Electrek's take? "Citizens should not need to own a $70k+ truck, or even a $100 external hard drive, to keep track of everything they do in order to prove to power-tripping officers that they didn't commit a crime."

Electronic Frontier Foundation

California 'Privacy Protection Agency' Targets Tractor Supply's Tricky Tracking (eff.org) 19

California's Privacy Protection Agency "issued a record fine earlier this month to Tractor Supply," according to an EFF Deeplinks blog post — for "apparently ducking its responsibilities under the California Consumer Privacy Act." Under that law, companies are required to respect California customers' and job applicants' rights to know, delete, and correct information that businesses collect about them, and to opt-out of some types of sharing and use. The law also requires companies to give notice of these rights, along with other information, to customers, job applicants, and others. The CPPA said that Tractor Supply failed several of these requirements. This is the first time the agency has enforced this data privacy law to protect job applicants...

Tractor Supply, which has 2,500 stores in 49 states, will pay for its actions to the tune of $1,350,000 — the largest fine the agency has issued to date. Specifically, the agency said, Tractor Supply violated the law by:

- Failing to maintain a privacy policy that notified consumers of their rights;

- Failing to notify California job applicants of their privacy rights and how to exercise them;

- Failing to provide consumers with an effective mechanism to opt-out of the selling and sharing of their personal information, including through opt-out preference signals such as Global Privacy Control (see the sketch after this list); and

- Disclosing personal information to other companies without entering into contracts that contain privacy protections.
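
Global Privacy Control, mentioned in the third item above, is a simple mechanism: participating browsers send a "Sec-GPC: 1" request header (and expose navigator.globalPrivacyControl to scripts), and California's regulations treat that signal as a valid opt-out of sale and sharing. A minimal server-side sketch of honoring it (Flask is used purely for illustration):

    # Minimal sketch of honoring the Global Privacy Control signal,
    # which participating browsers send as the "Sec-GPC: 1" header.
    # Flask is illustrative; any web framework exposes request headers.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        if request.headers.get("Sec-GPC") == "1":
            # Treat the visitor as opted out of sale/sharing of their
            # personal information, e.g. don't load third-party trackers.
            return "GPC honored: no tracking for this visitor."
        return "No GPC signal received."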


In addition to the fine, the company also must take an inventory of its digital properties and tracking technologies and will have to certify its compliance with the California privacy law for the next four years.

The agency's web site says it "continues to actively enforce California's cutting-edge privacy laws." It's recently issued decisions (and fines) against American Honda Motor Company and clothing retailer Todd Snyder. Other recent actions include:
  • Securing a settlement agreement requiring data broker Background Alert — which promoted its ability to dig up "scary" amounts of information about people — to shut down or pay a steep fine.
  • Partnering with the data protection authorities in Korea, France, and the United Kingdom to share information and advance privacy protections for Californians.

Microsoft

Microsoft's OneDrive Begins Testing Face-Recognizing AI for Photos (for Some Preview Users) (microsoft.com) 62

I uploaded a photo on my phone to Microsoft's "OneDrive" file-hosting app — and there was a surprise waiting under Privacy and Permissions. "OneDrive uses AI to recognize faces in your photos..."

And...

"You can only turn off this setting 3 times a year."


If I moved the slider for that setting to the left (for "No"), it moved back to the right, and said "Something went wrong while updating this setting." (Apparently it's not one of those three times of the year.)

The feature is already rolling out to a limited number of users in a preview, a Microsoft publicist confirmed to Slashdot. (For the record, I don't remember signing up for this face-recognizing "preview".) But there's a link at the bottom of the screen for a "Microsoft Privacy Statement" that leads to a Microsoft support page, which says instead that "This feature is coming soon and is yet to be released." And in the next sentence it's been saying "Stay tuned for more updates" for almost two years...

A Microsoft publicist agreed to answer Slashdot's questions...
