Facebook

Meta's Customer Service is So Bad, Users Are Suing in Small Claims Court To Resolve Issues 69

Facebook and Instagram users are increasingly turning to small claims courts to regain access to their accounts or seek damages from Meta, amid frustrations with the company's customer support. In several cases across multiple states, Engadget reports, plaintiffs have successfully restored account access or won financial compensation. Meta often responds by contacting litigants before court dates, attempting to resolve issues out of court.

The trend, popularized on social media forums, highlights ongoing customer service issues at the tech giant. Some users report significant financial losses due to inaccessible business-related accounts. While small claims court offers a more accessible legal avenue, Meta typically deploys legal resources to respond to these claims.
AMD

AMD Is Investigating Claims That Company Data Was Stolen In Hack (hackread.com) 6

AMD said on Tuesday it was looking into claims that company data was stolen in a hack by a cybercriminal organization called "IntelBroker". "The alleged intrusion, which took place in June 2024, reportedly resulted in the theft of a significant amount of sensitive information, spanning across various categories," reports Hackread. From the report: In a recent post on Breach Forums, IntelBroker detailed the extent of the compromised data. The hacker claims to have accessed information related to the following records: ROMs, firmware, source code, property files, employee databases, customer databases, financial information, future AMD product plans, and technical specification sheets. The hacker is selling the data exclusively for XMR (Monero) cryptocurrency, accepting a middleman for transactions. He advises interested buyers to message him with their offers.

The reputation of IntelBroker in the cybersecurity community is one of significant concern, given the scale and sensitivity of the entities targeted in previous hacks. The hacker's past exploits include breaches of Europol, Tech in Asia, Space-Eyes, Home Depot, Facebook Marketplace, U.S. contractor Acuity Inc., staffing giant Robert Half, and Los Angeles International Airport, as well as alleged breaches of HSBC and Barclays Bank. Although the hacker's origins and affiliates are unknown, IntelBroker is alleged by the United States government to be the perpetrator behind one of the T-Mobile data breaches.

Facebook

Meta Accused of Trying To Discredit Ad Researchers (theregister.com) 18

Thomas Claburn reports via The Register: Meta allegedly tried to discredit university researchers in Brazil who had flagged fraudulent adverts on the social network's ad platform. Nucleo, a Brazil-based news organization, said it has obtained government documents showing that attorneys representing Meta questioned the credibility of researchers from NetLab, which is part of the Federal University of Rio de Janeiro (UFRJ). NetLab's research into Meta's ads contributed to the 2023 decision by Brazil's National Consumer Secretariat (Senacon) to fine Meta $1.7 million (9.3 million BRL), a fine that is still being appealed. Meta (then Facebook) was separately fined $1.2 million (6.6 million BRL) in connection with Cambridge Analytica.

As noted by Nucleo, NetLab's report showed that Facebook, despite being notified about the issues, had failed to remove more than 1,800 scam ads that fraudulently used the name of a government program that was supposed to assist those in debt. In response to the fine, attorneys representing Meta from law firm TozziniFreire allegedly accused the NetLab team of bias and of failing to involve Meta in the research process. Nucleo says that it obtained the administrative filing through freedom of information requests to Senacon. The documents are said to date from December 26 last year and to be part of the ongoing case against Meta. A spokesperson for NetLab, who asked not to be identified by name due to online harassment directed at the organization's members, told The Register that the research group was aware of the Nucleo report. "We were kind of surprised to see the account of our work in this law firm document," the spokesperson said. "We expected to be treated with more fairness for our work. Honestly, it comes at a very bad moment because NetLab particularly, but also Brazilian science in general, is being attacked by far-right groups."

On Thursday, more than 70 civil society groups including NetLab published an open letter decrying Meta's legal tactics. "This is an attack on scientific research work, and attempts at intimidation of researchers and researchers who are performing excellent work in the production of knowledge from empirical analysis that have been fundamental to qualify the public debate on the accountability of social media platforms operating in the country, especially with regard to paid content that causes harm to consumers of these platforms and that threaten the future of our democracy," the letter says. "This kind of attack and intimidation is made even more dangerous by aligning with arguments that, without any evidence, have been used by the far right to discredit the most diverse scientific productions, including NetLab itself." The claim, allegedly made by Meta's attorneys, is that the ad biz was "not given the opportunity to appoint a technical assistant and present questions" in the preparation of the NetLabs report. This is particularly striking given Meta's efforts to limit research into its ad platform.
A Meta spokesperson told The Register: "We value input from civil society organizations and academic institutions for the context they provide as we constantly work toward improving our services. Meta's defense filed with the Brazilian Consumer Regulator questioned the use of the NetLab report as legal evidence, since it was produced without giving us prior opportunity to contribute meaningfully, in violation of local legal requirements."
Businesses

OpenAI Adds Former NSA Chief To Its Board (cnbc.com) 31

Paul M. Nakasone, a retired U.S. Army general and former NSA director, is now OpenAI's newest board member. Nakasone will join the Safety and Security Committee and contribute to OpenAI's cybersecurity efforts. CNBC reports: The committee is spending 90 days evaluating the company's processes and safeguards before making recommendations to the board and, eventually, updating the public, OpenAI said. Nakasone joins current board members Adam D'Angelo, Larry Summers, Bret Taylor and Sam Altman, as well as some new board members the company announced in March: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former executive vice president and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart.

OpenAI on Monday announced the hiring of two top executives as well as a partnership with Apple that includes a ChatGPT-Siri integration. The company said Sarah Friar, previously CEO of Nextdoor and finance chief at Square, is joining as chief financial officer. Friar will "lead a finance team that supports our mission by providing continued investment in our core research capabilities, and ensuring that we can scale to meet the needs of our growing customer base and the complex and global environment in which we are operating," OpenAI wrote in a blog post. OpenAI also hired Kevin Weil, an ex-president at Planet Labs, as its new chief product officer. Weil was previously a senior vice president at Twitter and a vice president at Facebook and Instagram. Weil's product team will focus on "applying our research to products and services that benefit consumers, developers, and businesses," the company wrote.
Edward Snowden, a former NSA contractor who leaked classified documents in 2013 that exposed the massive scope of government surveillance programs, is wary of the appointment. In a post on X, Snowden wrote: "They've gone full mask-off: Do not ever trust OpenAI or its products (ChatGPT etc). There is only one reason for appointing an NSA director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned."
Facebook

Meta Pauses Plans To Train AI Using European Users' Data, Bowing To Regulatory Pressure 22

Meta has confirmed that it will pause plans to start training its AI systems using data from its users in the European Union and U.K. From a report: The move follows pushback from the Irish Data Protection Commission (DPC), Meta's lead regulator in the EU, which is acting on behalf of several data protection authorities across the bloc. The U.K.'s Information Commissioner's Office (ICO) also requested that Meta pause its plans until it could satisfy concerns it had raised. "The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA," the DPC said in a statement Friday. "This decision followed intensive engagement between the DPC and Meta. The DPC, in cooperation with its fellow EU data protection authorities, will continue to engage with Meta on this issue."

While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe's stringent GDPR regulations have created obstacles for Meta -- and other companies -- looking to train their AI systems, including large language models, on user-generated material. However, Meta last month began notifying users of an upcoming change to its privacy policy, one that it said will give it the right to use public content on Facebook and Instagram to train its AI, including content from comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do this to reflect "the diverse languages, geography and cultural references of the people in Europe."
AI

Clearview AI Used Your Face. Now You May Get a Stake in the Company. (nytimes.com) 40

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.
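The math behind the "about $52 million" figure is easy to check; a quick sketch using only the numbers cited from the court filings:

```python
# Figures from the court filings cited above.
valuation = 225_000_000   # Clearview AI's current valuation
stake = 0.23              # equity stake the settlement would give the class

stake_value = valuation * stake
print(f"Class stake at current valuation: ${stake_value:,.0f}")  # $51,750,000
```

Whether class members ever see that money depends on Clearview being acquired, going public, or the class selling its stake, as the article notes.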

Social Networks

A Growing Number of Americans Are Getting Their News From TikTok (theverge.com) 197

According to a new survey from the Pew Research Center, TikTok is the second most popular source of news for Americans after X, "though most TikTok users don't primarily think of the shortform video app as a news source," notes The Verge. The survey looked at how Facebook, Instagram, TikTok and X play a role in Americans' news diets. From the report: Among TikTok users, only 15 percent say keeping up with the news is a major reason they use the app. Still, 35 percent of those surveyed said they wouldn't have seen the news they get on TikTok elsewhere. And unlike other apps, the news users see on TikTok is just as likely to come from influencers or celebrities as it is from journalists -- and it's far more likely to come from total strangers. (Meanwhile, most Facebook and Instagram users say the news that pops up on their feeds is posted by friends, relatives, or other people they know; on X, users are more likely to see news posted by media outlets or reporters.)
United States

Louisiana Becomes 10th US State to Make CS a High School Graduation Requirement (linkedin.com) 89

Long-time Slashdot reader theodp writes: "Great news, Louisiana!" tech-backed Code.org exclaimed Wednesday in celebratory LinkedIn, Facebook, and Twitter posts. Louisiana is "officially the 10th state to make computer science a [high school] graduation requirement. Huge thanks to Governor Jeff Landry for signing the bill and to our legislative champions, Rep. Jason Hughes and Sen. Thomas Pressly, for making it happen! This means every Louisiana student gets a chance to learn coding and other tech skills that are super important these days. These skills can help them solve problems, think critically, and open doors to awesome careers!"

Representative Hughes, the sponsor of HB264 — which calls for each public high school student to successfully complete a one credit CS course as a requirement for graduation and also permits students to take two units of CS instead of studying a Foreign Language — tweeted back: "HUGE thanks @codeorg for their partnership in this effort every step of the way! Couldn't have done it without [Code.org Senior Director of State Government Affairs] Anthony [Owen] and the Code.org team!"

Code.org also on Wednesday announced the release of its 2023 Impact Report, which touted its efforts "to include a requirement for every student to take computer science to receive a high school diploma." Since its 2013 launch, Code.org reports it's spent $219.8 million to push coding into K-12 classrooms, including $19 million on Government Affairs (Achievements: "Policies changed in 50 states. More than $343M in state budgets allocated to computer science.").

In Code.org by the Numbers, the nonprofit boasts that 254,683 students started Code.org's AP CS Principles course in the academic year (2025 Goal: 400K), while 21,425 have started Code.org's new Amazon-bankrolled AP CS A course. Estimates peg U.S. public high school enrollment at 15.5M students, annual K-12 public school spending at $16,080 per pupil, and an annual high school student course load at 6-8 credits...
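Put against the enrollment estimate in the same paragraph, the AP course figures are modest; a quick sketch using only the numbers cited above:

```python
# All inputs are figures quoted in the post above.
ap_csp_starts = 254_683        # students starting Code.org's AP CS Principles
goal_2025 = 400_000            # Code.org's stated 2025 goal
hs_enrollment = 15_500_000     # estimated U.S. public high school students

print(f"Share of the 2025 goal: {ap_csp_starts / goal_2025:.0%}")        # 64%
print(f"Share of all HS students: {ap_csp_starts / hs_enrollment:.1%}")  # 1.6%
```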

AI

NewsBreak, Most Downloaded US News App, Caught Sharing 'Entirely False' AI-Generated Stories 98

An anonymous reader quotes a report from Reuters: Last Christmas Eve, NewsBreak, a free app with roots in China that is the most downloaded news app in the United States, published an alarming piece about a small town shooting. It was headlined "Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns." The problem was, no such shooting took place. The Bridgeton, New Jersey police department posted a statement on Facebook on December 27 dismissing the article -- produced using AI technology -- as "entirely false." "Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers." NewsBreak, which is headquartered in Mountain View, California and has offices in Beijing and Shanghai, told Reuters it removed the article on December 28, four days after publication.

The company said "the inaccurate information originated from the content source," and provided a link to the website, adding: "When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content." As local news outlets across America have shuttered in recent years, NewsBreak has filled the void. Billing itself as "the go-to source for all things local," NewsBreak says it has over 50 million monthly users. It publishes licensed content from major media outlets, including Reuters, Fox, AP and CNN, as well as some information obtained by scraping the internet for local news or press releases, which it rewrites with the help of AI. It is only available in the U.S. But in at least 40 instances since 2021, the app's use of AI tools affected the communities it strives to serve, with NewsBreak publishing erroneous stories; creating 10 stories from local news sites under fictitious bylines; and lifting content from its competitors, according to a Reuters review of previously unreported court documents related to copyright infringement, cease-and-desist emails and a 2022 company memo registering concerns about "AI-generated stories."
Five of the seven former NewsBreak employees Reuters spoke to said most of the engineering work behind the app's algorithm is carried out in its China-based offices. "The company launched in the U.S. in 2015 as a subsidiary of Yidian, a Chinese news aggregation app," notes Reuters. "Both companies were founded by Jeff Zheng, the CEO of Newsbreak, and the companies share a U.S. patent registered in 2015 for an 'Interest Engine' algorithm, which recommends news content based on a user's interests and location."

"NewsBreak is a privately held start-up, whose primary backers are private equity firms San Francisco-based Francisco Partners, and Beijing-based IDG Capital."
Social Networks

Israel Reportedly Uses Fake Social Media Accounts To Influence US Lawmakers On Gaza War (nytimes.com) 146

An anonymous reader quotes a report from the New York Times: Israel organized and paid for an influence campaign last year targeting U.S. lawmakers and the American public with pro-Israel messaging, as it aimed to foster support for its actions in the war with Gaza, according to officials involved in the effort and documents related to the operation. The covert campaign was commissioned by Israel's Ministry of Diaspora Affairs, a government body that connects Jews around the world with the State of Israel, four Israeli officials said. The ministry allocated about $2 million to the operation and hired Stoic, a political marketing firm in Tel Aviv, to carry it out, according to the officials and the documents. The campaign began in October and remains active on the platform X. At its peak, it used hundreds of fake accounts that posed as real Americans on X, Facebook and Instagram to post pro-Israel comments. The accounts focused on U.S. lawmakers, particularly ones who are Black and Democrats, such as Representative Hakeem Jeffries, the House minority leader from New York, and Senator Raphael Warnock of Georgia, with posts urging them to continue funding Israel's military.

ChatGPT, the artificial intelligence-powered chatbot, was used to generate many of the posts. The campaign also created three fake English-language news sites featuring pro-Israel articles. The Israeli government's connection to the influence operation, which The New York Times verified with four current and former members of the Ministry of Diaspora Affairs and documents about the campaign, has not previously been reported. FakeReporter, an Israeli misinformation watchdog, identified the effort in March. Last week, Meta, which owns Facebook and Instagram, and OpenAI, which makes ChatGPT, said they had also found and disrupted the operation. The secretive campaign signals the lengths Israel was willing to go to sway American opinion on the war in Gaza.

Facebook

Meta Withheld Information on Instagram, WhatsApp Deals, FTC Says (yahoo.com) 9

Meta Platforms withheld information from federal regulators during their original reviews of the Instagram and WhatsApp acquisitions, the US Federal Trade Commission said in a court filing as part of a lawsuit seeking to break up the social networking giant. From a report: In its filing Tuesday, the FTC said the case involves "information Meta had in its files and did not provide" during the original reviews. "At Meta's request the FTC undertook only a limited review" of the deals, the agency said. "The FTC now has available vastly more evidence, including pre-acquisition documents Meta did not provide in 2012 and 2014."

Meta said that it met all of its legal obligations during the Instagram and WhatsApp merger reviews. The FTC has failed to provide evidence to support its claims, a spokesperson said. "The evidence instead shows that Meta faces fierce competition and that Meta's significant investment of time and resources in Instagram and WhatsApp has benefited consumers by making the apps into the services millions of users enjoy today for free," spokesperson Chris Sgro said in a statement. "The FTC has done nothing to build its case over the past four years, while Meta has invested billions to build quality products."

China

The Chinese Internet Is Shrinking (nytimes.com) 88

An anonymous reader shares a report: Chinese people know their country's internet is different. There is no Google, YouTube, Facebook or Twitter. They use euphemisms online to communicate the things they are not supposed to mention. When their posts and accounts are censored, they accept it with resignation. They live in a parallel online universe. They know it and even joke about it. Now they are discovering that, beneath a facade bustling with short videos, livestreaming and e-commerce, their internet -- and collective online memory -- is disappearing in chunks.

A post on WeChat on May 22 that was widely shared reported that nearly all information posted on Chinese news portals, blogs, forums, and social media sites between 1995 and 2005 was no longer available. "The Chinese internet is collapsing at an accelerating pace," the headline said. Predictably, the post itself was soon censored. It's impossible to determine exactly how much and what content has disappeared. [...] In addition to disappearing content, there's a broader problem: China's internet is shrinking. There were 3.9 million websites in China in 2023, down about a quarter from 5.3 million in 2017, according to the country's internet regulator.

Facebook

Meta, Activision Sued By Parents of Children Killed in Last Year's School Shooting (msn.com) 153

Exactly two years after the fatal shooting of 19 elementary school students in Texas, their parents filed a lawsuit against the publisher of the videogame Call of Duty, against Meta, and against Daniel Defense, the manufacturer of the AR-15-style weapon used in the attack.

The Washington Post says the lawsuits "may be the first of their kind to connect aggressive firearms marketing tactics on social media and gaming platforms to the actions of a mass shooter." The complaints contend the three companies are responsible for "grooming" a generation of "socially vulnerable" young men radicalized to live out violent video game fantasies in the real world with easily accessible weapons of war...

Several state legislatures, including California and Hawaii, passed consumer safety laws specific to the sale and marketing of firearms that would open the industry to more civil liability. Texas is not one of them. But it's just one vein in the three-pronged legal push by Uvalde families. The lawsuit against Activision and Meta, which is being filed in California, accuses the tech companies of knowingly promoting dangerous weapons to millions of vulnerable young people, particularly young men who are "insecure about their masculinity, often bullied, eager to show strength and assert dominance."

"To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters," the lawsuit states...

The lawsuit alleges that Meta, which owns Instagram, easily allows gun manufacturers like Daniel Defense to circumvent its ban on paid firearm advertisements to reach scores of young people. Under Meta's rules, gunmakers are not allowed to buy advertisements promoting the sale of or use of weapons, ammunition or explosives. But gunmakers are free to post promotional material about weapons from their own account pages on Facebook and Instagram — a freedom the lawsuit alleges Daniel Defense often exploited.

According to the complaint, the Robb school shooter downloaded a version of "Call of Duty: Modern Warfare" in November 2021 that featured on the opening title page the DDM4V7 model rifle [shooter Salvador] Ramos would later purchase. Drawing from the shooter's social media accounts, Josh Koskoff, the families' attorney, argued he was being bombarded with explicit marketing and combat imagery from the company on Instagram... The complaint cites Meta's practice, first reported by The Washington Post in 2022, of giving gun sellers wide latitude to knowingly break its rules against selling firearms on its websites. The company has allowed buyers and sellers to violate the rule 10 times before they are kicked off, The Post reported.

The article adds that the lawsuit against Meta "echoes some of the complaints by dozens of state attorneys general and school districts that have accused the tech giant of using manipulative practices to hook... while exposing them to harmful content." It also includes a few excerpts from the text of the lawsuit.
  • It argues that both Meta and Activision "knowingly exposed the Shooter to the weapon, conditioned him to see it as the solution to his problems, and trained him to use it."
  • The lawsuit also compares their practices to another ad campaign accused of marketing harmful products to children: cigarettes. "Over the last 15 years, two of America's largest technology companies — Defendants Activision and Meta — have partnered with the firearms industry in a scheme that makes the Joe Camel campaign look laughably harmless, even quaint."

Meta and Daniel Defense didn't respond to the reporters' requests for comment. But they did quote a statement from Activision expressing sympathy for the communities and families impacted by the "horrendous and heartbreaking" shooting.

Activision also added that "Millions of people around the world enjoy video games without turning to horrific acts."


Facebook

Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council (qz.com) 17

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta's management team, Bloomberg reported. Its members include: Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

"I've come to deeply respect this group of people and their achievements in their respective areas, and I'm grateful that they're willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse," Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta's 11-person board of directors because its members are not elected by shareholders, nor do they have fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council.
TechCrunch notes that the council has "only white men on it," which "differs from Meta's actual board of directors and its Oversight Board, which is more diverse in gender and racial representation."

"It's telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. ... it's been proven time and time again that AI isn't like other products. It's a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups."
AI

Meta AI Chief Says Large Language Models Will Not Reach Human Intelligence (ft.com) 78

Meta's AI chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create "superintelligence" in machines. From a report: Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had "very limited understanding of logic ... do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan ... hierarchically."

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, "intrinsically unsafe." Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve. Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet's Google.

Transportation

Some People Who Rented a Tesla from Hertz Were Still Charged for Gas (thedrive.com) 195

"Last week, we reported on a customer who was charged $277 for gasoline his rented Tesla couldn't have possibly used," writes the automotive blog The Drive.

"And now, we've heard from other Hertz customers who say they've been charged even more." Hertz caught attention last week for how it handled a customer whom it had charged a "Skip the Pump" fee, which allows renters to pay a premium for Hertz to refill the tank for them. But of course, this customer's rented Tesla Model 3 didn't use gas — it draws power from a battery — and Hertz has a separate, flat fee for EV recharges. Nevertheless, the customer was charged $277.39 despite returning the car with the exact same charge they left with, and Hertz refused to refund it until after our story ran. It's no isolated incident either, as other customers have written in to inform us that it happened to them, too....

Evan Froehlich returned the rental at 21 percent charge, expecting to pay a flat $25 recharge fee. (It's ordinarily $35, but Hertz's loyalty program discounts it.) To Froehlich's surprise, he was hit with a $340.97 "Skip the Pump" fee, which can be applied after returning a car if it's not requested beforehand. He says Hertz's customer service was difficult to reach, and that it took making a ruckus on social media to get Hertz's attention. In the end, a Hertz representative was able to review the charge and have it reversed....

A March 2023 Facebook post documenting a similar case indicates this has been happening for more than a year.

After renting a Tesla Model 3, another customer even got a $475.19 "fuel charge," according to the article — in addition to a $25 charging fee: They also faced a $125.01 "rebill" for using the Supercharger network during their rental, which other Hertz customers have expressed surprise and frustration with. Charging costs can vary, but a 75-percent charge from a Supercharger will often cost in the region of just $15.
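For a sense of why that Supercharger figure lands near $15, here is a rough estimate; the battery capacity and per-kWh rate are illustrative assumptions, not numbers from the article:

```python
# Illustrative assumptions -- real pack sizes and Supercharger rates vary.
usable_pack_kwh = 60      # roughly a standard-range Tesla Model 3 pack
charge_fraction = 0.75    # the 75-percent charge mentioned in the article
rate_per_kwh = 0.35       # a plausible U.S. Supercharger rate, in dollars

cost = usable_pack_kwh * charge_fraction * rate_per_kwh
print(f"Estimated Supercharger cost: ${cost:.2f}")  # $15.75
```

Rates vary by location and time of day, but the order of magnitude matches the article's figure, which is what makes fees in the $340-$475 range stand out.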
Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 63

Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded from liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out that Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also notes that several U.S. Congressional committees are considering "a bevy" of AI bills...
Operating Systems

NetBSD Bans AI-Generated Code (netbsd.org) 64

Seven Spirals writes: NetBSD committers are now banned from using any AI-generated code from ChatGPT, CoPilot, or other AI tools. Time will tell how this plays out with both their users and core team. "If you commit code that was not written by yourself, double check that the license on that code permits import into the NetBSD source repository, and permits free distribution," reads NetBSD's updated commit guidelines. "Check with the author(s) of the code, make sure that they were the sole author of the code and verify with them that they did not copy any other code. Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core."
EU

EU Opens Child Safety Probes of Facebook and Instagram, Citing Addictive Design Concerns (techcrunch.com) 48

An anonymous reader quotes a report from TechCrunch: Facebook and Instagram are under formal investigation in the European Union over child protection concerns, the Commission announced Thursday. The proceedings follow a raft of requests for information to parent entity Meta since the bloc's online governance regime, the Digital Services Act (DSA), started applying last August. The development could be significant as the formal proceedings unlock additional investigatory powers for EU enforcers, such as the ability to conduct office inspections or apply interim measures. Penalties for any confirmed breaches of the DSA could reach up to 6% of Meta's global annual turnover.

Meta's two social networks are designated as very large online platforms (VLOPs) under the DSA. This means the company faces an extra set of rules -- overseen by the EU directly -- requiring it to assess and mitigate systemic risks on Facebook and Instagram, including in areas like minors' mental health. In a briefing with journalists, senior Commission officials said they suspect Meta of failing to properly assess and mitigate risks affecting children. They particularly highlighted concerns about addictive design on its social networks, and what they referred to as a "rabbit hole effect," where a minor watching one video may be pushed to view more similar content as a result of the platforms' algorithmic content recommendation engines.

Commission officials gave examples of depression content, or content that promotes an unhealthy body image, as types of content that could have negative impacts on minors' mental health. They are also concerned that the age assurance methods Meta uses may be too easy for kids to circumvent. "One of the underlying questions of all of these grievances is how can we be sure who accesses the service and how effective are the age gates -- particularly for avoiding that underage users access the service," said a senior Commission official briefing press today on background. "This is part of our investigation now to check the effectiveness of the measures that Meta has put in place in this regard as well." In all, the EU suspects Meta of infringing DSA Articles 28, 34, and 35. The Commission will now carry out an in-depth investigation of the two platforms' approach to child protection.

Facebook

Meta Will Shut Down Workplace, Its Business Chat Tool (axios.com) 21

Meta is shutting down Workplace, the tool it sold to businesses that combined social and productivity features, according to messages to customers obtained by Axios and confirmed by Meta. From the report: Meta has been cutting jobs and winnowing its product line for the last few years while investing billions first in the metaverse and now in AI. Micah Collins, Meta's senior director of product management, sent a message to customers alerting them of the shutdown.

Collins said customers can use Workplace through September 2025; after that, the service will be available only for downloading or reading existing data, before shutting down completely in 2026. Workplace, formerly Facebook at Work, launched in its current form in 2016. In 2021 the company reported it had 7 million paid subscribers.
