Security

Crooks Threaten To Leak 3 Billion Personal Records 'Stolen From Background Firm' (theregister.com) 19

An anonymous reader quotes a report from The Register: Billions of records detailing people's personal information may soon be dumped online after being allegedly obtained from a Florida firm that handles background checks and other requests for folks' private info. A criminal gang that goes by the handle USDoD put the database up for sale for $3.5 million on an underworld forum in April, and rather incredibly claimed the trove included 2.9 billion records on all US, Canadian, and British citizens. It's believed one or more miscreants using the handle SXUL were responsible for the alleged exfiltration, passing the data on to USDoD, which is acting as a broker. The pilfered information is said to include individuals' full names, addresses, and address history going back at least three decades, social security numbers, and details of people's parents, siblings, and relatives, some of whom have been dead for nearly 20 years. According to USDoD, this info was not scraped from public sources, though there may be duplicate entries for people in the database.

Fast forward to this month, and the infosec watchers at VX-Underground say they've not only been able to view the database and verify that at least some of its contents are real and accurate, but that USDoD plans to leak the trove. Judging by VX-Underground's assessment, the 277.1GB file contains nearly three billion records on people who've at least lived in the United States -- so US citizens as well as, say, Canadians and Brits. This info was allegedly stolen or otherwise obtained from National Public Data, a small information broker based in Coral Springs that offers API lookups to other companies for things like background checks. There is a small silver lining, according to the VX team: "The database DOES NOT contain information from individuals who use data opt-out services. Every person who used some sort of data opt-out service was not present." So, we guess this is a good lesson in opting out.

Social Networks

New York Set to Restrict Social-Media Algorithms for Teens (cnbc.com) 30

Lawmakers in New York have reached a tentative agreement to "prohibit social-media companies from using algorithms to steer content to children without parental consent (source paywalled; alternative source)," according to the Wall Street Journal. "The legislation is aimed at preventing social-media companies from serving automated feeds to minors. The bill, which is still being completed but expected to be voted on this week, also would prohibit platforms from sending minors notifications during overnight hours without parental consent."

Meanwhile, the results of New York's first mental health report were released today, finding that depression and anxiety are rampant among NYC's teenagers, "with nearly half of them experiencing symptoms from one or both in recent years," reports NBC New York. "In a survey conducted last year, 48% of teenagers reported feeling depressive symptoms ranging from mild to severe. The vast majority, however, reported feeling high levels of resilience. Frequent coping mechanisms include listening to music and using social media."

Microsoft

Is the New 'Recall' Feature in Windows a Security and Privacy Nightmare? (thecyberexpress.com) 126

Slashdot reader storagedude shares a provocative post from the cybersecurity news blog of Cyble Inc. (a Y Combinator-backed company promising "AI-powered actionable threat intelligence").

The post delves into concerns that the new "Recall" feature planned for Windows (on upcoming Copilot+ PCs) is "a security and privacy nightmare." Copilot Recall will be enabled by default and will capture frequent screenshots, or "snapshots," of a user's activity and store them in a local database tied to the user account. The potential for exposure of personal and sensitive data through the new feature has alarmed security and privacy advocates and even sparked a UK inquiry into the issue. In a long Mastodon thread on the new feature, Windows security researcher Kevin Beaumont wrote, "I'm not being hyperbolic when I say this is the dumbest cybersecurity move in a decade. Good luck to my parents safely using their PC."

In a blog post on Recall security and privacy, Microsoft said that processing and storage are done only on the local device and encrypted, but even Microsoft's own explanations raise concerns: "Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry." Security and privacy advocates take issue with assertions that the data is stored securely on the local device. If someone has a user's password or if a court orders that data be turned over for legal or law enforcement purposes, the amount of data exposed could be much greater with Recall than would otherwise be exposed... And hackers, malware and infostealers will have access to vastly more data than they would without Recall.

Beaumont said the screenshots are stored in a SQLite database, "and you can access it as the user including programmatically. It 100% does not need physical access and can be stolen.... Recall enables threat actors to automate scraping everything you've ever looked at within seconds."
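Beaumont's claim is easy to picture. Below is a minimal sketch of the kind of user-level query he describes; the directory layout, database filename, and table/column names are assumptions drawn from early researcher write-ups, not anything Microsoft has documented, and may differ between builds:

```python
# Illustrative only: read Recall's snapshot index as the logged-in user.
# All paths and schema names below are assumptions, not a documented API.
import sqlite3
from pathlib import Path

ukp_dir = Path.home() / "AppData" / "Local" / "CoreAIPlatform.00" / "UKP"
candidates = list(ukp_dir.glob("*/ukg.db"))  # assumed per-user database name

if not candidates:
    print("No Recall database found (feature absent, or the path differs).")
else:
    # Open read-only; note that no elevation or physical access is required.
    db = sqlite3.connect(f"file:{candidates[0]}?mode=ro", uri=True)
    # Assumed schema: one row per captured snapshot.
    for title, ts in db.execute("SELECT WindowTitle, TimeStamp FROM WindowCapture"):
        print(ts, title)
    db.close()
```

Anything able to run code as the user -- including commodity infostealer malware -- could run the same sort of query, which is the crux of the objection.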

Beaumont's LinkedIn profile and blog say that starting in 2020 he worked at Microsoft for nearly a year as a senior threat intelligence analyst. And now Beaumont's Mastodon post is also raising other concerns (according to Cyble's blog post):
  • "Sensitive data deleted by users will still be saved in Recall screenshots... 'If you or a friend use disappearing messages in WhatsApp, Signal etc, it is recorded regardless.'"
  • "Beaumont also questioned Microsoft's assertion that all this is done locally."

The blog post also notes that Leslie Carhart, Director of Incident Response at Dragos, had this reaction to Beaumont's post: "The outrage and disbelief are warranted."

Government

Did the US Government Ignore a Chance to Make TikTok Safer? (yahoo.com) 56

"To save itself, TikTok in 2022 offered the U.S. government an extraordinary deal," reports the Washington Post. The video app, owned by a Chinese company, said it would let federal officials pick its U.S. operation's board of directors, would give the government veto power over each new hire and would pay an American company that contracts with the Defense Department to monitor its source code, according to a copy of the company's proposal. It even offered to give federal officials a kill switch that would shut the app down in the United States if they felt it remained a threat.

The Biden administration, however, went its own way. Officials declined the proposal, forfeiting potential influence over one of the world's most popular apps in favor of a blunter option: a forced-sale law signed last month by President Biden that could lead to TikTok's nationwide ban. The government has never publicly explained why it rejected TikTok's proposal, opting instead for a potentially protracted constitutional battle that many expect to end up before the Supreme Court... But the extent to which the United States evaluated or disregarded TikTok's proposal, known as Project Texas, is likely to be a core point of dispute in court, where TikTok and its owner, ByteDance, are challenging the sale-or-ban law as an "unconstitutional assertion of power."

The episode raises questions over whether the government, when presented with a way to address its concerns, chose instead to back an effort that would see the company sold to an American buyer, even though some of the issues officials have warned about — the opaque influence of its recommendation algorithm, the privacy of user data — probably would still be unresolved under new ownership...

A senior Biden administration official said in a statement that the administration "determined more than a year ago that the solution proposed by the parties at the time would be insufficient to address the serious national security risks presented. While we have consistently engaged with the company about our concerns and potential solutions, it became clear that divestment from its foreign ownership was and remains necessary."

"Since federal officials announced an investigation into TikTok in 2019, the app's user base has doubled to more than 170 million U.S. accounts," according to the article.

It also includes this assessment from Anupam Chander, a Georgetown University law professor who researches international tech policy. "The government had a complete absence of faith in [its] ability to regulate technology platforms, because there might be some vulnerability that might exist somewhere down the line."

AI

Could AI Replace CEOs? (msn.com) 132

'"As AI programs shake up the office, potentially making millions of jobs obsolete, one group of perpetually stressed workers seems especially vulnerable..." writes the New York Times.

"The chief executive is increasingly imperiled by A.I." These employees analyze new markets and discern trends, both tasks a computer could do more efficiently. They spend much of their time communicating with colleagues, a laborious activity that is being automated with voice and image generators. Sometimes they must make difficult decisions — and who is better at being dispassionate than a machine?

Finally, these jobs are very well paid, which means the cost savings of eliminating them is considerable...

This is not just a prediction. A few successful companies have begun to publicly experiment with the notion of an A.I. leader, even if at the moment it might largely be a branding exercise... [The article gives the example of the Chinese online game company NetDragon Websoft, which has 5,000 employees, and the upscale Polish rum company Dictador.]

Chief executives themselves seem enthusiastic about the prospect — or maybe just fatalistic. EdX, the online learning platform created by administrators at Harvard and M.I.T. that is now a part of publicly traded 2U Inc., surveyed hundreds of chief executives and other executives last summer about the issue. Respondents were invited to take part and given what edX called "a small monetary incentive" to do so. The response was striking. Nearly half — 47 percent — of the executives surveyed said they believed "most" or "all" of the chief executive role should be completely automated or replaced by A.I. Even executives believe executives are superfluous in the late digital age...

The pandemic prepared people for this. Many office workers worked from home in 2020, and quite a few still do, at least several days a week. Communication with colleagues and executives is done through machines. It's just a small step to communicating with a machine that doesn't have a person at the other end of it. "Some people like the social aspects of having a human boss," said Phoebe V. Moore, professor of management and the futures of work at the University of Essex Business School. "But after Covid, many are also fine with not having one."

The article also notes that a 2017 survey of 1,000 British workers found 42% saying they'd be "comfortable" taking orders from a computer.

Advertising

How Does Misinformation Spread? It's Funded By 'The Hellhole of Programmatic Advertising' (wired.com) 66

Journalist Steven Brill has written a new book called The Death of Truth. Its subtitle? "How Social Media and the Internet Gave Snake Oil Salesmen and Demagogues the Weapons They Needed to Destroy Trust and Polarize the World -- And What We Can Do."

An excerpt published by Wired points out that $300 billion was spent worldwide on "programmatic advertising" last year, with U.S. spending alone reaching $130 billion in 2022. The problem? "Brand safety" technology has existed for over a decade, the article points out — but "what artificial intelligence could not do was spot most forms of disinformation and misinformation..."

The end result... In 2019, other than the government of Vladimir Putin, Warren Buffett was the biggest funder of Sputnik News, the Russian disinformation website controlled by the Kremlin... Geico, the giant American insurance company and subsidiary of Buffett's Berkshire Hathaway, was the leading advertiser on the American version of Sputnik News' global website network... No one at Geico or its advertising agency had any idea its ads would appear on Sputnik, let alone what anti-American content would be displayed alongside the ads. How could they? Which person or army of people at Geico or its agency could have read 44,000 websites?

Geico's ads had been placed through a programmatic advertising system that was invented in the late 1990s as the internet developed. It exploded beginning in the mid-2000s and is now the overwhelmingly dominant advertising medium. Programmatic algorithms, not people, decide where to place most of the ads we now see on websites, social media platforms, mobile devices, streaming television, and increasingly hear on podcasts... If Geico's advertising campaign were typical of programmatic campaigns for broad-based consumer products and services, each of its ads would have been placed on an average of 44,000 websites, according to a study done for the leading trade association of big-brand advertisers.

Geico is hardly the only rock-solid American brand to be funding the Russians. During the same period that the insurance company's ads appeared on Sputnik News, 196 other programmatic advertisers bought ads on the website, including Best Buy, E-Trade, and Progressive insurance. Sputnik News' sister propaganda outlet, RT.com (it was once called Russia Today until someone in Moscow decided to camouflage its parentage), raked in ad revenue from Walmart, Amazon, PayPal, and Kroger, among others... Almost all advertising online — and even much of it on television (through streaming TV), or on podcasts, radio, mobile devices, and electronic billboards — is now done programmatically, which means the machine, not a planner, makes those placement decisions. Unless the advertiser uses special tools, such as what are called exclusion or inclusion lists, the publishers and content around which the ad appears, and which the ad is financing, are no longer part of the decision.

"What I kept hearing as the professionals explained it to me was that the process is like a stock exchange, except that the buyer doesn't know what stock he is buying... the advertiser and its ad agency have no idea where among thousands of websites its ad will appear."

AI

Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic (arstechnica.com) 99

Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an expose about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit."

Journalists also reacted to news of the deals through the publications themselves. On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI.

Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled, "This article is OpenAI training data," in which he expresses apprehension about the licensing deal, drawing parallels between AI companies' relentless pursuit of data and Nick Bostrom's classic "paperclip maximizer" thought experiment, cautioning that the single-minded focus on market share and profits could ultimately destroy the ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.

Social Networks

TikTok Preparing a US Copy of the App's Core Algorithm (reuters.com) 57

An anonymous reader quotes a report from Reuters: TikTok is working on a clone of its recommendation algorithm for its 170 million U.S. users, which may result in a version that operates independently of its Chinese parent and is more palatable to American lawmakers who want to ban it, according to sources with direct knowledge of the efforts. The work on splitting the source code, ordered by TikTok's Chinese parent ByteDance late last year, predated a bill to force a sale of TikTok's U.S. operations that began gaining steam in Congress this year. The bill was signed into law in April. The sources, who were granted anonymity because they are not authorized to speak publicly about the short-form video sharing app, said that once the code is split, it could lay the groundwork for a divestiture of the U.S. assets, although there are no current plans to do so. The company has previously said it has no plans to sell the U.S. assets and that such a move would be impossible. [...]

In the past few months, hundreds of ByteDance and TikTok engineers in both the U.S. and China were ordered to begin separating millions of lines of code, sifting through the company's algorithm that pairs users with videos to their liking. The engineers' mission is to create a separate code base that is independent of systems used by ByteDance's Chinese version of TikTok, Douyin, while eliminating any information linking to Chinese users, two sources with direct knowledge of the project told Reuters. [...] The complexity of the task that the sources described to Reuters as tedious "dirty work" underscores the difficulty of splitting the underlying code that binds TikTok's U.S. operations to its Chinese parent. The work is expected to take over a year to complete, these sources said. [...] At one point, TikTok executives considered open sourcing some of TikTok's algorithm, or making it available to others to access and modify, to demonstrate technological transparency, the sources said.

Executives have communicated plans and provided updates on the code-splitting project during a team all-hands, in internal planning documents and on its internal communications system, called Lark, according to one of the sources who attended the meeting and another source who has viewed the messages. Compliance and legal issues involved with determining what parts of the code can be carried over to TikTok are complicating the work, according to one source. Each line of code has to be reviewed to determine if it can go into the separate code base, the sources added. The goal is to create a new source code repository for a recommendation algorithm serving only TikTok U.S. Once completed, TikTok U.S. will run and maintain its recommendation algorithm independent of TikTok apps in other regions and its Chinese version Douyin. That move would cut it off from the massive engineering development power of its parent company in Beijing, the sources said. If TikTok completes the work to split the recommendation engine from its Chinese counterpart, TikTok management is aware of the risk that TikTok U.S. may not be able to deliver the same level of performance as the existing TikTok because it is heavily reliant on ByteDance's engineers in China to update and maintain the code base to maximize user engagement, sources added.
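Reuters doesn't describe the tooling involved, but the shape of the "dirty work" is easy to imagine -- something like the sketch below, which flags source lines referencing shared parent-company systems for human review. The marker names are invented for illustration:

```python
# Hypothetical helper for a code-split review: flag lines that reference
# shared parent-company systems so a human can decide their fate.
import re
from pathlib import Path

# Invented markers of code tied to shared or China-side systems.
SHARED_MARKERS = re.compile(r"\b(douyin|bytedance_internal|cn_user)\w*", re.IGNORECASE)

def flag_files(repo_root: str) -> dict[str, list[int]]:
    """Map each source file to the line numbers needing manual review."""
    findings: dict[str, list[int]] = {}
    for path in Path(repo_root).rglob("*.py"):
        hits = [
            lineno
            for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1
            )
            if SHARED_MARKERS.search(line)
        ]
        if hits:
            findings[str(path)] = hits
    return findings
```

Each flagged line would still require a human decision about whether it can be carried over, rewritten, or dropped -- consistent with the sources' description of tedious work expected to take more than a year.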

AI

OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity' (reuters.com) 16

An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The artificial intelligence firm said that over the last three months, the threat actors used its AI models to generate short comments, longer articles in a range of languages, and made-up names and bios for social media accounts. These campaigns, which included threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others.

The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns have not benefited from increased audience engagement or reach due to the AI firm's services, OpenAI said in the statement. OpenAI said these operations did not solely use AI-generated material but included manually written texts or memes copied from across the internet.
In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.

United States

New York Governor To Launch Bill Banning Smartphones in Schools (theguardian.com) 112

The New York governor, Kathy Hochul, plans to introduce a bill banning smartphones in schools, the latest in a series of legislative moves aimed at online child safety by New York's top official. From a report: "I have seen these addictive algorithms pull in young people, literally capture them and make them prisoners in a space where they are cut off from human connection, social interaction and normal classroom activity," she said. Hochul said she would launch the bill later this year and take it up in New York's next legislative session, which begins in January 2025. If passed, the bill would still allow schoolchildren to carry simple phones that cannot access the internet but can send texts, which has been a sticking point for parents. She did not offer specifics on enforcing the prohibition. "Parents are very anxious about mass shootings in school," she said. "Parents want the ability to have some form of connection in an emergency situation." The smartphone-ban bill will follow two others Hochul is pushing that outline measures to safeguard children's privacy online and limit their access to certain features of social networks.

Earth

Corporations Invested in Carbon Offsets That Were 'Likely Junk', Analysis Says (theguardian.com) 48

Some of the world's most profitable -- and most polluting -- corporations have invested in carbon offset projects that have fundamental failings and are "probably junk," suggesting industry claims about greenhouse gas reductions were likely overblown, according to new analysis. From a report: Delta, Gucci, Volkswagen, ExxonMobil, Disney, easyJet and Nestle are among the major corporations to have purchased millions of carbon credits from climate-friendly projects that are "likely junk" or worthless when it comes to offsetting their greenhouse gas emissions, according to a classification system developed by Corporate Accountability, a non-profit, transnational corporate watchdog. Some of these companies no longer use CO2 offsets amid mounting evidence that carbon trading does not lead to the claimed emissions cuts -- and in some cases may even cause environmental and social harms.

However, the multibillion-dollar voluntary carbon trading industry is still championed by many corporations, including oil and gas majors, airlines, automakers, tourism, fast-food and beverage brands, fashion houses, banks and tech firms, as the bedrock of climate action -- a way of claiming to reduce their greenhouse gas footprint while continuing to rely on fossil fuels and unsustainable supply chains. Yet for 33 of the top 50 corporate buyers, more than a third of their entire offsets portfolio is "likely junk" -- suggesting at least some claims about carbon neutrality and emission reductions have been exaggerated, according to the analysis. The fundamental failings leading to a "likely junk" ranking include whether emissions cuts would have happened anyway, as is often the case with large hydroelectric dams, or whether the emissions were just shifted elsewhere, a common issue in forestry offset projects.

Google

Google's AI Feeds People Answers From The Onion (avclub.com) 125

An anonymous reader shares a report: As denizens of the Internet, we have all often seen a news item so ridiculous it caused us to think, "This seems like an Onion headline." But as real human beings, most of us have the ability to discern between reality and satire. Unfortunately, Google's newly launched "AI Overview" lacks that crucial ability. The feature, which launched less than two weeks ago (with no way for users to opt out), provides answers to certain queries at the top of the page above any other online resources. The artificial intelligence creates its answers from knowledge it has synthesized from around the web, which would be great, except not everything on the Internet is true or accurate. Obviously.

Ben Collins, one of the new owners of our former sister site, pointed out some of AI Overview's most egregious errors on his social media. Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day." That language was of course pulled almost word-for-word from a 2021 Onion headline. Another search, "what color highlighters do the CIA use," prompted Overview to answer "black," which was an Onion joke from 2005.
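To see why that failure mode is so easy to hit, consider this toy sketch of an "answer from the web" pipeline. It is not Google's actual AI Overview implementation; the functions and the satire filter are hypothetical stand-ins, and the snippets are paraphrased from the examples above:

```python
# Toy illustration: a pipeline that synthesizes answers from search snippets
# without vetting sources will happily repeat satire as fact.
SATIRE_SITES = {"theonion.com", "clickhole.com"}  # the filter this flow lacks

def build_answer(query: str, search_results: list[dict]) -> str:
    # No source vetting: every snippet is treated as equally trustworthy.
    context = " ".join(r["snippet"] for r in search_results)
    return f"Answer to {query!r}, synthesized from: {context}"

results = [
    {"url": "theonion.com/rocks", "snippet": "Eat at least one small rock a day."},
    {"url": "usgs.gov/minerals", "snippet": "Rocks are not part of a human diet."},
]
print(build_answer("how many rocks should I eat each day", results))
```

A single upstream check of each result's domain against a satire or reliability list would catch the Onion examples; the reported errors suggest no such gate was in place.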

AI

Elon Musk Says AI Could Eliminate Our Need to Work at Jobs (cnn.com) 289

In the future, "Probably none of us will have a job," Elon Musk said Thursday, speaking remotely to the VivaTech 2024 conference in Paris. Instead, jobs will be optional — something we'd do like a hobby — "But otherwise, AI and the robots will provide any goods and services that you want."

CNN reports that Musk added this would require "universal high income" — and "There would be no shortage of goods or services." In a job-free future, though, Musk questioned whether people would feel emotionally fulfilled. "The question will really be one of meaning — if the computer and robots can do everything better than you, does your life have meaning?" he said. "I do think there's perhaps still a role for humans in this — in that we may give AI meaning."
CNN accompanied their article with this counterargument: In January, researchers at MIT's Computer Science and Artificial Intelligence Lab found workplaces are adopting AI much more slowly than some had expected and feared. The report also said the majority of jobs previously identified as vulnerable to AI were not economically beneficial for employers to automate at that time. Experts also largely believe that many jobs requiring high emotional intelligence and human interaction -- such as those of mental health professionals, creatives and teachers -- will not need replacing.
CNN notes that Musk "also used his stage time to urge parents to limit the amount of social media that children can see because 'they're being programmed by a dopamine-maximizing AI'."

Communications

American Radio Relay League Confirms Cyberattack Disrupted Operations (bleepingcomputer.com) 32

Roughly 160,000 U.S.-based amateur radio enthusiasts belong to the American Radio Relay League, a nonprofit with 100 full-time and part-time staff members.

Nine days ago it announced "that it suffered a cyberattack that disrupted its network and systems," reports BleepingComputer, "including various online services hosted by the organization." "We are in the process of responding to a serious incident involving access to our network and headquarters-based systems. Several services, such as Logbook of The World and the ARRL Learning Center, are affected," explained ARRL in a press release... [T]he ARRL took steps to allay members' concerns about the security of their data, confirming that they do not store credit card information or collect social security numbers.

However, the organization confirmed that its member database contains some private information, including names, addresses, and call signs. While the ARRL does not specifically state that email addresses are stored in the database, an email address is required to become a member of the organization.

"The ARRL has not specifically said that its member database has been accessed by hackers," Security Week points out, "but its statement suggests it's possible."

The site adds that it has also "reached out to ARRL to find out if this was a ransomware attack and whether the attackers made any ransom demand."

Thanks to Slashdot reader AzWa Snowbird for sharing the news.

Facebook

Meta, Activision Sued By Parents of Children Killed in Last Year's School Shooting (msn.com) 153

Exactly two years after the fatal shooting of 19 elementary school students in Texas, their parents filed lawsuits against Activision, the publisher of the videogame Call of Duty; against Meta; and against Daniel Defense, the manufacturer of the AR-15-style weapon used in the attack.

The Washington Post says the lawsuits "may be the first of their kind to connect aggressive firearms marketing tactics on social media and gaming platforms to the actions of a mass shooter." The complaints contend the three companies are responsible for "grooming" a generation of "socially vulnerable" young men radicalized to live out violent video game fantasies in the real world with easily accessible weapons of war...

Several state legislatures, including those of California and Hawaii, have passed consumer safety laws specific to the sale and marketing of firearms that would open the industry to more civil liability. Texas is not one of them. But the Texas case is just one prong of the three-pronged legal push by Uvalde families. The lawsuit against Activision and Meta, which is being filed in California, accuses the tech companies of knowingly promoting dangerous weapons to millions of vulnerable young people, particularly young men who are "insecure about their masculinity, often bullied, eager to show strength and assert dominance."

"To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters," the lawsuit states...

The lawsuit alleges that Meta, which owns Instagram, easily allows gun manufacturers like Daniel Defense to circumvent its ban on paid firearm advertisements to reach scores of young people. Under Meta's rules, gunmakers are not allowed to buy advertisements promoting the sale of or use of weapons, ammunition or explosives. But gunmakers are free to post promotional material about weapons from their own account pages on Facebook and Instagram — a freedom the lawsuit alleges Daniel Defense often exploited.

According to the complaint, the Robb school shooter downloaded a version of "Call of Duty: Modern Warfare" in November 2021 that featured on its opening title page the DDM4V7 model rifle [shooter Salvador] Ramos would later purchase. Drawing from the shooter's social media accounts, the families' attorney, Josh Koskoff, argued he was being bombarded with explicit marketing and combat imagery from the company on Instagram... The complaint cites Meta's practice, first reported by The Washington Post in 2022, of giving gun sellers wide latitude to knowingly break its rules against selling firearms on its websites. The company has allowed buyers and sellers to violate the rule 10 times before they are kicked off, The Post reported.

The article adds that the lawsuit against Meta "echoes some of the complaints by dozens of state attorneys general and school districts that have accused the tech giant of using manipulative practices to hook... while exposing them to harmful content." It also includes a few excerpts from the text of the lawsuit.
  • It argues that both Meta and Activision "knowingly exposed the Shooter to the weapon, conditioned him to see it as the solution to his problems, and trained him to use it."
  • The lawsuit also compares their practices to another ad campaign accused of marketing harmful products to children: cigarettes. "Over the last 15 years, two of America's largest technology companies — Defendants Activision and Meta — have partnered with the firearms industry in a scheme that makes the Joe Camel campaign look laughably harmless, even quaint."

Meta and Daniel Defense didn't respond to the reporters' requests for comment. But the article does quote a statement from Activision expressing sympathy for the communities and families impacted by the "horrendous and heartbreaking" shooting.

Activision also added that "Millions of people around the world enjoy video games without turning to horrific acts."

The Almighty Buck

Best Buy and Geek Squad Were Most Impersonated Orgs By Scammers In 2023 (theregister.com) 20

An anonymous reader quotes a report from The Register: The Federal Trade Commission (FTC) has shared data on the most impersonated companies in 2023, with Best Buy, Amazon, and PayPal taking the top three spots. The federal agency detailed the top ten companies scammers impersonate and how much they make depending on the impersonation. By far the most impersonated corp was Best Buy and its repair business Geek Squad, with a total of 52,000 reports. Amazon impersonators came in second place with 34,000 reports, and PayPal a distant third with 10,000. Proportionally, the top three made up roughly 72 percent of the reports among the top ten, with Best Buy and Geek Squad scams accounting for about 39 percent on their own. High report counts don't necessarily translate to greater success for scammers, though, as the FTC also showed how much scammers made depending on which companies they impersonated. Best Buy and Geek Squad, Amazon, and PayPal scams made about $15 million, $19 million, and $16 million respectively, but that's nothing compared to the $60 million that Microsoft impersonators were able to fleece. [...]

The FTC also reported the vectors scammers use to contact their victims. Phone and email are still the most common means, but social media is becoming increasingly important for scamming and features the most costly scams. The feds additionally disclosed the kinds of payment methods scammers use for all sorts of frauds, including company and individual impersonation scams, investment scams, and romance scams. Cryptocurrency and bank transfers were popular for investment scammers, who are the most prolific on social media, while gift cards were most common for pretty much every other type of scam. However, not all scammers ask for digital payment, as the Federal Bureau of Investigation says that even regular old mail is something scammers are relying on to get their ill-gotten gains.

Transportation

Feds Add Nine More Incidents To Waymo Robotaxi Investigation (techcrunch.com) 36

Federal safety regulators have discovered nine more accidents during their investigation of Waymo's self-driving vehicles in Phoenix and San Francisco. TechCrunch reports: The National Highway Traffic Safety Administration Office of Defects Investigation (ODI) opened an investigation earlier this month into Waymo's autonomous vehicle software after receiving 22 reports of robotaxis making unexpected moves that led to crashes and potentially violated traffic safety laws. The investigation, which has been designated a "preliminary evaluation," is examining the software and its ability to avoid collisions with stationary objects and how well it detects and responds to "traffic safety control devices" like cones. The agency said Friday it has added (PDF) another nine incidents since the investigation was opened.

Waymo reported some of these incidents. The others were discovered by regulators via public postings on social media and forums like Reddit, YouTube and X. The additional nine incidents include reports of Waymo robotaxis colliding with gates, utility poles, and parked vehicles, driving in the wrong lane with nearby oncoming traffic and into construction zones. The ODI said it's concerned the robotaxis "exhibiting such unexpected driving behaviors may increase the risk of crash, property damage, and injury." The agency said that while it's not aware of any injuries from these incidents, several involved collisions with visible objects that "a competent driver would be expected to avoid." The agency also expressed concern that some of these occurred near pedestrians. NHTSA has given Waymo until June 11 to respond to a series of questions regarding the investigation.

Encryption

Signal Slams Telegram's Security (techcrunch.com) 33

Messaging app Signal's president Meredith Whittaker criticized rival Telegram's security on Friday, saying Telegram founder Pavel Durov is "full of s---" in his claims about Signal. "Telegram is a social media platform, it's not encrypted, it's the least secure of messaging and social media services out there," Whittaker told TechCrunch in an interview. The comments come amid a war of words between Whittaker, Durov and Twitter owner Elon Musk over the security of their respective platforms. Whittaker said Durov's amplification of claims questioning Signal's security was "incredibly reckless" and "actually harms real people."

"Play your games, but don't take them into my court," Whittaker said, accusing Durov of prioritizing being "followed by a professional photographer" over getting facts right about Signal's encryption. Signal uses end-to-end encryption by default, while Telegram only offers it for "secret chats." Whittaker said many in Ukraine and Russia use Signal for "actual serious communications" while relying on Telegram's less-secure social media features. She said the "jury is in" on the platforms' comparative security and that Signal's open source code allows experts to validate its privacy claims, which have the trust of the security community.

AI

Meta AI Chief Says Large Language Models Will Not Reach Human Intelligence (ft.com) 78

Meta's AI chief said the large language models that power generative AI products such as ChatGPT would never achieve the ability to reason and plan like humans, as he focused instead on a radical alternative approach to create "superintelligence" in machines. From a report: Yann LeCun, chief AI scientist at the social media giant that owns Facebook and Instagram, said LLMs had "very limited understanding of logic... do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan... hierarchically."

In an interview with the Financial Times, he argued against relying on advancing LLMs in the quest to make human-level intelligence, as these models can only answer prompts accurately if they have been fed the right training data and are, therefore, "intrinsically unsafe." Instead, he is working to develop an entirely new generation of AI systems that he hopes will power machines with human-level intelligence, although he said this vision could take 10 years to achieve. Meta has been pouring billions of dollars into developing its own LLMs as generative AI has exploded, aiming to catch up with rival tech groups, including Microsoft-backed OpenAI and Alphabet's Google.
