YouTube

YouTube Pulls Tech Creator's Self-Hosting Tutorial as 'Harmful Content' (jeffgeerling.com) 77

YouTube pulled a popular tutorial video from tech creator Jeff Geerling this week, claiming his guide to installing LibreELEC on a Raspberry Pi 5 violated policies against "harmful content." The video, which showed viewers how to set up their own home media servers, had been live for over a year and racked up more than 500,000 views. YouTube's automated systems flagged the content for allegedly teaching people "how to get unauthorized or free access to audio or audiovisual content."

Geerling says his tutorial covered only legal self-hosting of media people already own -- no piracy tools or copyright workarounds -- and that he goes out of his way to avoid even mentioning popular piracy software in his videos. It's the second time YouTube has pulled one of his self-hosting videos. Last October, YouTube removed his Jellyfin tutorial, though that decision was quickly reversed on appeal. This time, his appeal was denied.
AI

Anthropic Co-founder on Cutting Access To Windsurf: 'It Would Be Odd For Us To Sell Claude To OpenAI' (techcrunch.com) 5

Anthropic cut AI coding assistant Windsurf's direct access to its Claude models after media reported that rival OpenAI plans to acquire the startup for $3 billion. Anthropic co-founder Jared Kaplan told TechCrunch that "it would be odd for us to be selling Claude to OpenAI," explaining the decision to cut access to Claude 3.5 Sonnet and Claude 3.7 Sonnet models.
China

OpenAI Says Significant Number of Recent ChatGPT Misuses Likely Came From China (wsj.com) 19

OpenAI said it disrupted several attempts [non-paywalled source] from users in China to leverage its AI models for cyber threats and covert influence operations, underscoring the security challenges AI poses as the technology becomes more powerful. From a report: The Microsoft-backed company on Thursday published its latest report on disrupting malicious uses of AI, saying its investigative teams continued to uncover and prevent such activities in the three months since Feb. 21.

While misuse occurred in several countries, OpenAI said it believes a "significant number" of violations came from China, noting that four of 10 sample cases included in its latest report likely had a Chinese origin. In one such case, the company said it banned ChatGPT accounts that it said were using OpenAI's models to generate social media posts for a covert influence operation. The company said a user stated in a prompt that they worked for China's propaganda department, though it cautioned it had no independent proof to verify that claim.

Media

WHIP Muxer Merged To FFmpeg For Sub-Second Latency Streaming (phoronix.com) 7

FFmpeg has added support for WHIP (WebRTC-HTTP Ingestion Protocol), enabling sub-second-latency live streaming by leveraging WebRTC's fast, secure video delivery capabilities. It's a major update that introduces a new WHIP muxer to make FFmpeg more powerful for real-time broadcasting applications. Phoronix's Michael Larabel reports: WHIP uses HTTP for exchanging initial information and capabilities and then uses STUN binding to establish a UDP session. Encryption is supported with WHIP -- and, due to WebRTC, mandatory -- and audio/video frames are split into RTP packets. WebRTC-HTTP Ingestion Protocol is an IETF standard for bringing low-latency communication over WebRTC to streaming/broadcasting uses. The FFmpeg commit, which adds nearly three thousand lines of new code, introduces an initial WHIP muxer. You can learn more about WebRTC WHIP in this presentation by Millicast (PDF).
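For the curious, basic usage once the muxer is in your FFmpeg build should look roughly like the sketch below, which pushes a synthetic test stream from Python; the ingest URL is a placeholder, and the "-f whip" muxer name is an assumption based on the commit.

    # Minimal sketch: streaming a synthetic test source to a WHIP ingest
    # endpoint via FFmpeg's new muxer. Assumes an FFmpeg build containing
    # the WHIP muxer; the endpoint URL is a placeholder.
    import subprocess

    WHIP_ENDPOINT = "https://example.com/whip/endpoint"  # hypothetical ingest URL

    cmd = [
        "ffmpeg",
        "-re",                                                 # read input at native rate, like a live source
        "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=30",  # synthetic video
        "-f", "lavfi", "-i", "sine=frequency=440",             # synthetic audio
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-c:a", "libopus",                                     # WebRTC expects Opus audio
        "-f", "whip",                                          # select the WHIP muxer (assumed name)
        WHIP_ENDPOINT,
    ]
    subprocess.run(cmd, check=True)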
Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co) 13

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications as part of a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice, which also applied to Google, first came to light in 2023 when Senator Ron Wyden revealed it in a letter to the U.S. Department of Justice. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they might also receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data covers six-month blocks from July 2022 through June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data on Mastodon on Tuesday.
Along with the data, Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
The Courts

Reddit Sues AI Startup Anthropic For Breach of Contract, 'Unfair Competition' (cnbc.com) 44

Reddit is suing AI startup Anthropic for what it's calling a breach of contract and for engaging in "unlawful and unfair business acts" by using the social media company's platform and data without authority. From a report: The lawsuit, filed in San Francisco on Wednesday, claims that Anthropic has been training its models on the personal data of Reddit users without obtaining their consent. Reddit alleges that it has been harmed by the unauthorized commercial use of its content.

The company opened the complaint by calling Anthropic a "late-blooming" AI company that "bills itself as the white knight of the AI industry." Reddit follows by saying, "It is anything but."

Facebook

Meta's Going To Revive an Old Nuclear Power Plant (theverge.com) 47

Meta has struck a 20-year deal with energy company Constellation to keep the Clinton Clean Energy Center nuclear plant in Illinois operational, the social media giant's first nuclear power purchase agreement as it seeks clean energy sources for AI data centers. The aging facility, which was slated to close in 2017 after years of financial losses and currently operates under a state tax credit reprieve until 2027, will receive undisclosed financial support that enables a 30-megawatt capacity expansion to 1,121 MW total output.

The arrangement preserves 1,100 local jobs while generating electricity for 800,000 homes, as Meta purchases clean energy certificates to offset a portion of its growing carbon footprint driven by AI operations.
United States

Texas Right To Repair Bill Passes (theverge.com) 36

Texas is poised to become the first state with a Republican-controlled government to pass a right to repair law, as its Senate unanimously approved HB 2963. The bill requires manufacturers to provide parts, manuals, and tools for equipment sold or used in the state. The Verge reports: A press release from the United States Public Interest Research Group (PIRG), which has pushed for repairability laws nationwide, noted that this would make Texas the ninth state with a right to repair rule, and the seventh with a version that includes consumer electronics. It follows New York, Colorado, Minnesota, California, Oregon, Maine, and most recently, Washington [...]. "More repair means less waste. Texas produces some 621,000 tons of electronic waste per year, which creates an expensive and toxic mess. Now, thanks to this bipartisan win, Texans can fix that," said Environment Texas executive director Luke Metzger.
AI

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions 75

An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."

The moderator said the subreddit has banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," who pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march towards AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis," from someone saying their partner was convinced he had created the "first truly recursive AI" with ChatGPT, one that was giving them "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," all claiming AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.
"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.
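The dynamic is easy to picture with a toy stand-in for a model: if generation favors whatever already dominates the context, and each output is fed back in as input, one token eventually takes over. A hypothetical, model-free illustration of such a feedback loop (not Drake's actual setup):

    # Toy illustration of a recursive feedback loop causing fixation: a
    # stand-in "model" that samples words in proportion to their frequency
    # in its context will, once its own output is appended back to that
    # context, usually amplify one word until it dominates.
    from collections import Counter
    import random

    def toy_generate(context_words):
        """Sample a word, biased toward whatever dominates the context."""
        counts = Counter(context_words)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    random.seed(0)
    context = ["sky", "sea", "tree", "sky"]  # slight initial imbalance

    for _ in range(50):
        context.append(toy_generate(context))  # output becomes input

    print(Counter(context))  # one word has typically crowded out the rest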
Privacy

North Korean Smartphones Automatically Capture Screenshots Every 5 Minutes For State Surveillance 74

A smartphone smuggled out of North Korea automatically captures screenshots every five minutes and stores them in a hidden folder inaccessible to users, according to analysis by the BBC. Authorities can later review these images to monitor citizen activity on the device. The phone, obtained by Seoul-based media outlet Daily NK, resembles a Huawei or Honor device but runs state-approved software designed for surveillance and control. The device also automatically censors text, replacing "South Korea" with "puppet state" and Korean terms of endearment with "comrade."
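Mechanically, neither behavior is exotic. A rough, hypothetical sketch of what the report describes (the screenshot call is a stub, since the real implementation lives in the phone's state-approved firmware):

    # Hypothetical sketch of the two behaviors the BBC analysis describes:
    # screenshots captured every five minutes into a hidden folder, and
    # automatic substitution of banned terms. Not the actual firmware code.
    import time
    from pathlib import Path

    HIDDEN_DIR = Path("/data/.screenshots")          # dot-prefixed, hidden from users
    SUBSTITUTIONS = {"South Korea": "puppet state"}  # per the BBC report

    def censor(text: str) -> str:
        for term, replacement in SUBSTITUTIONS.items():
            text = text.replace(term, replacement)
        return text

    def capture_screen(dest: Path) -> None:
        """Stub: a real implementation would call a platform screenshot API."""
        dest.write_bytes(b"")  # placeholder for image data

    def surveillance_loop() -> None:
        HIDDEN_DIR.mkdir(parents=True, exist_ok=True)
        while True:
            capture_screen(HIDDEN_DIR / f"{int(time.time())}.png")
            time.sleep(300)  # five minutes, per the report

    print(censor("a drama from South Korea"))  # -> "a drama from puppet state"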
AI

Business Insider Recommended Nonexistent Books To Staff As It Leans Into AI (semafor.com) 23

An anonymous reader shares a report: Business Insider announced this week that it wants staff to better incorporate AI into its journalism. But less than a year ago, the company had to quietly apologize to some staff for accidentally recommending that they read books that did not appear to exist but instead may have been generated by AI.

In an email to staff last May, a senior editor at Business Insider sent around what she called "Beacon Books," a list of memoirs and other acclaimed business nonfiction, with the idea of ensuring staff understood some of the fundamental figures and writing powering good business journalism.

Many of the recommendations were well-known recent business, media, and tech nonfiction titles such as Too Big To Fail by Andrew Ross Sorkin, DisneyWar by James Stewart, and Super Pumped by Mike Isaac. But a few were unfamiliar to staff. Simply Target: A CEO's Lessons in a Turbulent Time and Transforming an Iconic Brand by former Target CEO Gregg Steinhafel was nowhere to be found. Neither was Jensen Huang: the Founder of Nvidia, which was supposedly published by the company Charles River Editors in 2019.

Space

Six More Humans Successfully Carried to the Edge of Space by Blue Origin (space.com) 74

An anonymous reader shared this report from Space.com: Three world travelers, two Space Camp alums and an aerospace executive whose last name aptly matched their shared adventure traveled into space and back Saturday, becoming the latest six people to fly with Blue Origin, the spaceflight company founded by billionaire Jeff Bezos.

Mark Rocket joined Jaime Alemán, Jesse Williams, Paul Jeris, Gretchen Green and Amy Medina Jorge on board the RSS First Step — Blue Origin's first of two human-rated New Shepard capsules — for a trip above the Kármán Line, the 62-mile-high (100-kilometer) internationally recognized boundary between Earth and space...

Mark Rocket became the first New Zealander to reach space on the mission. His connection to aerospace goes beyond his apt name and today's flight; he's currently the CEO of Kea Aerospace and previously helped lead Rocket Lab, a competing space launch company to Blue Origin that sends most of its rockets up from New Zealand. Alemán, Williams and Jeris each traveled the world extensively before briefly leaving the planet today. An attorney from Panama, Alemán is now the first person to have visited all 193 countries recognized by the United Nations, traveled to the North and South Poles, and been into space. For Williams, an entrepreneur from Canada, Saturday's flight continued his record of achieving high altitudes; he has summitted Mt. Everest and five of the six other highest mountains across the globe.

"For about three minutes, the six NS-32 crewmates experienced weightlessness," the article points out, "and had an astronaut's-eye view of the planet..."

On social media Blue Origin notes it's their 12th human spaceflight, "and the 32nd flight of the New Shepard program."
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas -- while also competing to make their AI offerings more captivating. OpenAI, Google and Meta have all announced chatbot enhancements in recent weeks, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,
NASA

America's Next NASA Administrator Will Not Be Former SpaceX Astronaut Jared Isaacman (arstechnica.com) 42

In December it looked like NASA's next administrator would be the billionaire businessman/space enthusiast who twice flew to orbit with SpaceX.

But Saturday the nomination was withdrawn "after a thorough review of prior associations," according to an announcement made on social media. The Guardian reports: His removal from consideration caught many in the space industry by surprise. Trump and the White House did not explain what led to the decision... In [Isaacman's] confirmation hearing in April, he sought to balance Nasa's existing moon-aligned space exploration strategy with pressure to shift the agency's focus on Mars, saying the US can plan for travel to both destinations. As a potential leader of Nasa's 18,000 employees, Isaacman faced a daunting task of implementing that decision to prioritize Mars, given that Nasa has spent years and billions of dollars trying to return its astronauts to the moon...

Some scientists saw the nominee change as further destabilizing to Nasa as it faces dramatic budget cuts without a confirmed leader in place to navigate political turbulence between Congress, the White House and the space agency's workforce.

"It was unclear whom the administration might tap to replace Isaacman," the article adds, though "One name being floated is the retired US air force Lt Gen Steven Kwast, an early advocate for the creation of the US Space Force..."

Ars Technica notes that Kwast, a former Lieutenant General in the U.S. Air Force, has a background that "seems to be far less oriented toward NASA's civil space mission and far more focused on seeing space as a battlefield — decidedly not an arena for cooperation and peaceful exploration."
Government

Brazil Tests Letting Citizens Earn Money From Data in Their Digital Footprint (restofworld.org) 15

With over 200 million people, Brazil is among the world's most populous countries. Now it's testing a program that will allow Brazilians "to manage, own, and profit from their digital footprint," according to RestOfWorld.org — "the first such nationwide initiative in the world."

The government says it's partnering with California-based data valuation/monetization firm DrumWave to create a "data savings account" that will "transform data into economic assets, with potential for monetization and participation in the benefits generated by investing in technologies such as AI LLMs." But all of this is based on "conscious and authorized use of personal information." RestOfWorld reports: Today, "people get nothing from the data they share," Brittany Kaiser, co-founder of the Own Your Data Foundation and board adviser for DrumWave, told Rest of World. "Brazil has decided its citizens should have ownership rights over their data...." After a user accepts a company's offer on their data, payment is cashed in the data wallet, and can be immediately moved to a bank account. The project will be "a correction in the historical imbalance of the digital economy," said Kaiser. Through data monetization, the personal data that companies aggregate, classify, and filter to inform many aspects of their operations will become an asset for those providing the data...

Brazil's project stands out because it brings the private sector and the government together, "so it has a better chance of catching on," said Kaiser. In 2023, Brazil's Congress drafted a bill that classifies data as personal property. The country's current data protection law classifies data as a personal, inalienable right. The new legislation gives people full rights over their personal data — especially data created "through use and access of online platforms, apps, marketplaces, sites and devices of any kind connected to the web." The bill seeks to ensure companies offer their clients benefits and financial rewards, including payment as "compensation for the collecting, processing or sharing of data." It has garnered bipartisan support, and is currently being evaluated in Congress...

If approved, the bill will allow companies to collect data more quickly and precisely, while giving users more clarity over how their data will be used, according to Antonielle Freitas, data protection officer at Viseu Advogados, a law firm that specializes in digital and consumer laws. As data collection becomes centralized through regulated data brokers, the government can benefit by paying the public to gather anonymized, large-scale data, Freitas told Rest of World. These databases are the basis for more personalized public services, especially in sectors such as health care, urban transportation, public security, and education, she said.

This first pilot program involves "a small group of Brazilians who will use data wallets for payroll loans," according to the article — although Pedro Bastos, a researcher at Data Privacy Brazil, sees downsides. "Once you treat data as an economic asset, you are subverting the logic behind the protection of personal data," he told RestOfWorld. The data ecosystem "will no longer be defined by who can create more trust and integrity in their relationships, but instead, it will be defined by who's the richest."

Thanks to Slashdot reader applique for sharing the news.
Privacy

Developer Builds Tool That Scrapes YouTube Comments, Uses AI To Predict Where Users Live (404media.co) 34

An anonymous reader quotes a report from 404 Media: If you've left a comment on a YouTube video, a new website claims it might be able to find every comment you've ever left on any video you've ever watched. Then an AI can build a profile of the commenter and guess where you live, what languages you speak, and what your politics might be. The service is called YouTube-Tools and is just the latest in a suite of web-based tools that started life as a site to investigate League of Legends usernames. Now it uses a modified large language model created by the company Mistral to generate a background report on YouTube commenters based on their conversations. Its developer claims it's meant to be used by the cops, but anyone can sign up. It costs about $20 a month to use and all you need to get started is a credit card and an email address.

The tool presents a significant privacy risk, and shows that people may not be as anonymous in the YouTube comments sections as they may think. The site's report is ready in seconds and provides enough data for an AI to flag identifying details about a commenter. The tool could be a boon for harassers attempting to build profiles of their targets, and 404 Media has seen evidence that harassment-focused communities have used the developer's other tools. YouTube-Tools also appears to violate YouTube's privacy policies, and raises questions about what YouTube is doing to stop the scraping and repurposing of people's data like this. "Public search engines may scrape data only in accordance with YouTube's robots.txt file or with YouTube's prior written permission," the policy says.

AI

Gemini Can Now Watch Google Drive Videos For You 36

Google's Gemini AI can now analyze and summarize video files stored in Google Drive, letting users ask questions about content like meeting takeaways or product updates without watching the footage. The Verge reports: The Gemini in Drive feature provides a familiar chatbot interface that can provide quick summaries describing the footage or pull specific information. For example, users can ask Gemini to list action items mentioned in recorded meetings or highlight the biggest updates and new products in an announcement video, saving time spent on manually combing through and taking notes.

The feature requires captions to be enabled for videos, and can be accessed using either Google Drive's overlay previewer or a new browser tab window. It's available in English for Google Workspace and Google One AI Premium users, and anyone who has previously purchased Gemini Business or Enterprise add-ons, though it may take a few weeks to fully roll out.
You can learn more about the update in Google's blog post.
Security

ASUS Router Backdoors Affect 9,000 Devices, Persist After Firmware Updates 23

An anonymous reader quotes a report from SC Media: Thousands of ASUS routers have been compromised with malware-free backdoors in an ongoing campaign to potentially build a future botnet, GreyNoise reported Wednesday. The threat actors abuse security vulnerabilities and legitimate router features to establish persistent access without the use of malware, and these backdoors survive both reboots and firmware updates, making them difficult to remove.

The attacks, which researchers suspect are conducted by highly sophisticated threat actors, were first detected by GreyNoise's AI-powered Sift tool in mid-March and disclosed Thursday after coordination with government officials and industry partners. Sekoia.io also reported the compromise of thousands of ASUS routers in its investigation of a broader campaign, dubbed ViciousTrap, in which edge devices from other brands were also compromised to create a honeypot network. Sekoia.io found that the ASUS routers were not used to create honeypots, and that the threat actors gained SSH access using the same port, TCP/53282, identified by GreyNoise in its report.
The backdoor campaign affects multiple ASUS router models, including the RT-AC3200, RT-AC3100, GT-AC2900, and Lyra Mini.

GreyNoise advises users to perform a full factory reset and manually reconfigure any potentially compromised device. To identify a breach, users should check for SSH access on TCP port 53282 and inspect the authorized_keys file for unauthorized entries.
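The port check can be scripted from inside the LAN; a minimal sketch, assuming the router sits at the usual gateway address (the authorized_keys inspection has to happen on the router itself, and the file's location varies by firmware):

    # Sketch of the first check GreyNoise recommends: see whether the router
    # is accepting connections on the backdoor port, TCP/53282. Adjust
    # ROUTER_IP for your network; 192.168.1.1 is only a common default.
    import socket
    import sys

    ROUTER_IP = "192.168.1.1"   # common default gateway; adjust as needed
    BACKDOOR_PORT = 53282       # port identified by GreyNoise

    def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:         # refused, unreachable, or timed out
            return False

    if port_open(ROUTER_IP, BACKDOOR_PORT):
        print(f"WARNING: {ROUTER_IP} accepts connections on TCP/{BACKDOOR_PORT}.")
        print("Factory-reset the device and reconfigure it manually.")
        sys.exit(1)
    print("Backdoor port closed; still inspect authorized_keys on the router.")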
Movies

There's More Film and Television For You To Watch Than Ever Before - Good Luck Finding It (salon.com) 99

The entertainment industry has achieved an unprecedented milestone: more film and television content exists today than at any point in human history. The technical infrastructure to deliver this content directly to consumers' homes works flawlessly. The problem? Actually finding something to watch has become a user experience nightmare that would make early-2000s software developers cringe.

Multiple streaming platforms are suffering from fundamental interface design failures that actively prevent users from discovering content. Cameron Nudleman, an Austin-based user, told Salon that scrolling through streaming service landing pages feels "like a Herculean task," while his Amazon Fire Stick setup -- designed to consolidate multiple services -- delivers consistent crashes across Paramount+ and Max, with Peacock terminating randomly "for no discernible reason."

The technical problems extend beyond stability issues to basic functionality failures. Max automatically enables closed captions despite user preferences, while Paramount+ crashes during show transitions. Chicago media writer Tim O'Reilly describes "every single interface" as "complete garbage except for Netflix's," though even Netflix has recently implemented changes that degrade user experience.

The industry eliminated simple discovery mechanisms like newspaper listings and Moviefone's telephone service in favor of algorithm-driven interfaces that Tennessee attorney Claire Tuley says have "turned art into work," transforming what was supposed to "democratize movies" into "a system that requires so many subscriptions, searching and effort."
The Media

Linux Format Ceases Publication (mastodon.social) 28

New submitter salyavin writes: The final issue of Linux Format has been released. After 25 years, the magazine is going out with a bang, interviewing old staff members and looking back at old Linux distros [...] The last 10-15 years have been absolutely brutal to computer hobbyist magazines -- or to magazines and media at large, in general.
