AI

Apple Explores AI Deals With News Publishers (macrumors.com) 6

Apple is in negotiations with major news and publishing organizations (source paywalled; alternative source), "seeking permission to use their material in the company's development of generative artificial intelligence systems," reports the New York Times. From the report: The technology giant has floated multiyear deals worth at least $50 million to license the archives of news articles [...]. The news organizations contacted by Apple include Conde Nast, publisher of Vogue and The New Yorker; NBC News; and IAC, which owns People, The Daily Beast and Better Homes and Gardens. The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I., which allows computers to create images and chat like a human. [...]

Some of the publishers contacted by Apple were lukewarm on the overture. After years of on-again-off-again commercial deals with tech companies like Meta, the owner of Facebook, publishers have grown wary of jumping into business with Silicon Valley. Several publishing executives were concerned that Apple's terms were too expansive, according to three people familiar with the negotiations. The initial pitch covered broad licensing of publishers' archives of published content, with publishers potentially on the hook for any legal liabilities that could stem from Apple's use of their content.

Apple was also vague about how it intended to apply generative A.I. to the news industry, the people said, a potential competitive risk given Apple's substantial audience for news on its devices. Still, some news executives were optimistic that Apple's approach might eventually lead to a meaningful partnership. Two people familiar with the discussions struck a positive note on the long-term prospects of a deal, contrasting Apple's approach of asking for permission with behavior from other artificial intelligence-enabled companies, which have been accused of seeking licensing deals with news organizations after they had already used their content to train generative models.
Further reading: Apple's AI Research Signals Ambition To Catch Up With Big Tech Rivals
Social Networks

The Rise and Fall of Usenet (zdnet.com) 130

An anonymous reader quotes a report from ZDNet: Long before Facebook existed, or even before the Internet, there was Usenet. Usenet was the first social network. Now, with Google Groups abandoning Usenet, this oldest of all social networks is doomed to disappear. Some might say it's well past time. As Google declared, "Over the last several years, legitimate activity in text-based Usenet groups has declined significantly because users have moved to more modern technologies and formats such as social media and web-based forums. Much of the content being disseminated via Usenet today is binary (non-text) file sharing, which Google Groups does not support, as well as spam." True, these days, Usenet's content is almost entirely spam, but in its day, Usenet was everything that Twitter and Reddit would become and more.

In 1979, Duke University computer science graduate students Tom Truscott and Jim Ellis conceived of a network of shared messages under various topics. These messages, also known as articles or posts, were submitted to topic categories, which became known as newsgroups. Within those groups, messages were bound together in threads and sub-threads. [...] In 1980, Truscott and Ellis, using the Unix-to-Unix Copy Protocol (UUCP), hooked up with the University of North Carolina to form the first Usenet nodes. From there, it would rapidly spread over the pre-Internet ARPANet and other early networks. These messages would be stored on and retrieved from news servers, which would "peer" with each other so that messages posted to a newsgroup were shared from server to server, reaching the entire networked world within hours. Usenet would evolve its own network protocol, the Network News Transfer Protocol (NNTP), to speed the transfer of these messages. Today, the social network Mastodon uses a similar approach with the ActivityPub protocol, while other social networks, such as Threads, are exploring ActivityPub as a way to connect with Mastodon and the other networks that support it. As the saying goes, everything old is new again.
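For a concrete sense of how simple the NNTP wire protocol is, here is a minimal sketch (not production code) of speaking it by hand from Python to fetch the headers of a group's newest article. The host name is a placeholder, most present-day servers require an account, and error handling is omitted.

```python
import socket

HOST = "news.example.com"   # placeholder -- substitute a real Usenet provider
PORT = 119                  # plain NNTP; NNTPS over TLS uses port 563
GROUP = "comp.lang.python"

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    rfile = sock.makefile("rb")

    def readline() -> str:
        # NNTP responses are CRLF-terminated ASCII lines.
        return rfile.readline().decode("ascii", "replace").rstrip()

    def command(line: str) -> str:
        sock.sendall(line.encode("ascii") + b"\r\n")
        return readline()

    print(readline())                       # server greeting, e.g. "200 ready"
    resp = command(f"GROUP {GROUP}")        # "211 <count> <first> <last> <name>"
    print(resp)
    if resp.startswith("211"):
        last = resp.split()[3]              # article number of the newest post
        if command(f"HEAD {last}").startswith("221"):
            while (line := readline()) != ".":   # multi-line reply ends with "."
                print(line)
    command("QUIT")
```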

[...] Usenet was never an organized social network. Each server owner could -- and did -- set its own rules. Mind you, there was some organization to begin with. The first 'mainstream' Usenet hierarchies -- comp, misc, news, rec, soc, and sci -- were widely accepted and disseminated until 1987. Then, faced with a flood of new groups, a new naming plan emerged in what was called the Great Renaming. This led to a lot of disputes and the creation of the talk hierarchy. This and the first six became known as the Big Seven. Then the alt groups emerged as a free speech protest. Afterward, fewer Usenet sites made it possible to access all the newsgroups. Instead, maintainers and users would have to decide which ones they'd support. Over the years, Usenet began to decline as group discussions were overwhelmed by spam and flame wars.
"If, going forward, you want to keep an eye on Usenet -- things could change, miracles can happen -- you'll need to get an account from a Usenet provider," writes ZDNet's Steven Vaughan-Nichols. "I favor Eternal September, which offers free access to the discussion Usenet groups; NewsHosting, $9.99 a month with access to all the Usenet groups; EasyNews, $9.98 a month with fast downloads, and a good search engine; and Eweka, 9.50 Euros a month and EU only servers."

"You'll also need a Usenet client. One popular free one is Mozilla's Thunderbird E-Mail client, which doubles as a Usenet client. EasyNews also offers a client as part of its service. If you're all about downloading files, check out SABnzbd."
Canada

Meta's News Ban In Canada Remains As Online News Act Goes Into Effect (bbc.com) 147

An anonymous reader quotes a report from the BBC: A bill that mandates tech giants pay news outlets for their content has come into effect in Canada amid an ongoing dispute with Facebook and Instagram owner Meta over the law. Some have hailed it as a game-changer that sets out a permanent framework that will see a steady drip of funds from wealthy tech companies to Canada's struggling journalism industry. But it has also been met with resistance by Google and Meta -- the only two companies big enough to be encompassed by the law. In response, over the summer, Meta blocked access to news on Facebook and Instagram for Canadians. Google looked set to follow, but after months of talks, the federal government was able to negotiate a deal with the search giant, which agreed to pay Canadian news outlets $75 million annually.

No such agreement appears to be on the horizon with Meta, which has called the law "fundamentally flawed." If Meta is refusing to budge, so is the government. "We will continue to push Meta, that makes billions of dollars in profits, even though it is refusing to invest in the journalistic rigor and stability of the media," Prime Minister Justin Trudeau told reporters on Friday.
According to a study by the Media Ecosystem Observatory, the views of Canadian news on Facebook dropped 90% after the company blocked access to news on the platform. Local news outlets have been hit particularly hard.

"The loss of journalism on Meta platforms represents a significant decline in the resiliency of the Canadian media ecosystem," said Taylor Owen, a researcher at McGill and the co-author of the study. He believes it also hurts Meta's brand in the long run, pointing to the fact that the Canada's federal government, as well as that of British Columbia, other municipalities and a handful of large Canadian corporations, have all pulled their advertising off Facebook and Instagram in retaliation.
AI

Imran Khan Deploys AI Clone To Campaign From Behind Bars in Pakistan (theguardian.com) 7

AI allowed Pakistan's former prime minister Imran Khan to campaign from behind bars on Monday, with a voice clone of the opposition leader giving an impassioned speech on his behalf. From a report: Khan has been locked up since August and is being tried for leaking classified documents, allegations he says have been trumped up to stop him contesting general elections due in February. His Pakistan Tehreek-e-Insaf (PTI) party used artificial intelligence to make a four-minute message from the 71-year-old, headlining a "virtual rally" hosted on social media overnight on Sunday into Monday despite internet disruptions that the monitoring group NetBlocks said were consistent with previous attempts to censor Khan.

PTI said Khan sent a shorthand script through lawyers that was fleshed out into his rhetorical style. The text was then dubbed into audio using a tool from the AI firm ElevenLabs, which boasts the ability to create a "voice clone" from existing speech samples. "My fellow Pakistanis, I would first like to praise the social media team for this historic attempt," the voice mimicking Khan said. "Maybe you all are wondering how I am doing in jail," the stilted voice added. "Today, my determination for real freedom is very strong." The audio was broadcast at the end of a five-hour live-stream of speeches by PTI supporters on Facebook, X and YouTube, and was overlaid with historic footage of Khan and still images.

Facebook

Does Meta's New Face Camera Herald a New Age of Surveillance? Or Distraction... (seattletimes.com) 74

"For the past two weeks, I've been using a new camera to secretly snap photos and record videos of strangers in parks, on trains, inside stores and at restaurants," writes a reporter for the New York Times. They were testing the recently released $300 Ray-Ban Meta glasses — "I promise it was all in the name of journalism" — which also includes microphones (and speakers, for listening to audio).

They call the device "part of a broader ambition in Silicon Valley to shift computing away from smartphone and computer screens and toward our faces." Meta, Apple and Magic Leap have all been hyping mixed-reality headsets that use cameras to allow their software to interact with objects in the real world. On Tuesday, Zuckerberg posted a video on Instagram demonstrating how the smart glasses could use AI to scan a shirt and help him pick out a pair of matching pants. Wearable face computers, the companies say, could eventually change the way we live and work... While I was impressed with the comfortable, stylish design of the glasses, I felt bothered by the implications for our privacy...

To inform people that they are being photographed, the Ray-Ban Meta glasses include a tiny LED light embedded in the right frame to indicate when the device is recording. When a photo is snapped, it flashes momentarily. When a video is recording, it is continuously illuminated. As I shot 200 photos and videos with the glasses in public, including on BART trains, on hiking trails and in parks, no one looked at the LED light or confronted me about it. And why would they? It would be rude to comment on a stranger's glasses, let alone stare at them... [A] Meta spokesperson said the company took privacy seriously and designed safety measures, including a tamper-detection technology, to prevent users from covering up the LED light with tape.

But another concern was how smart glasses might impact our ability to focus: Even when I wasn't using any of the features, I felt distracted while wearing them... I had problems concentrating while driving a car or riding a scooter. Not only was I constantly bracing myself for opportunities to shoot video, but the reflection from other car headlights emitted a harsh, blue strobe effect through the eyeglass lenses. Meta's safety manual for the Ray-Bans advises people to stay focused while driving, but it doesn't mention the glare from headlights. While doing work on a computer, the glasses felt unnecessary because there was rarely anything worth photographing at my desk, but a part of my mind constantly felt preoccupied by the possibility...

Ben Long, a photography teacher in San Francisco, said he was skeptical about the premise of the Meta glasses helping people remain present. "If you've got the camera with you, you're immediately not in the moment," he said. "Now you're wondering, Is this something I can present and record?"

The reporter admits they'll fondly cherish its photos of their dog [including in the original article], but "the main problem is that the glasses don't do much we can't already do with phones... while these types of moments are truly precious, that benefit probably won't be enough to convince a vast majority of consumers to buy smart glasses and wear them regularly, given the potential costs of lost privacy and distraction."
Social Networks

Threads Launches In the European Union (macrumors.com) 27

Meta CEO Mark Zuckerberg announced that Threads is now available to users in the European Union. "Today we're opening Threads to more countries in Europe," wrote Zuckerberg in a post on the platform. "Welcome everyone." MacRumors reports: The move comes five months after the social media network launched in most markets around the world, but remained unavailable to EU-based users due to regulatory hurdles. [...] In addition to creating a Threads profile for posting, users in the EU can also simply browse Threads without having an Instagram account, an option likely introduced to comply with legislation surrounding online services.

The expansion into a market of 448 million people should see Threads' user numbers get a decent boost. Meta CEO Mark Zuckerberg said on a company earnings call in October that Threads now has "just under" 100 million monthly users. Since its launch earlier this year it has gained a web app, an ability to search for posts, and a post editing feature.

Youtube

More Than 15% of Teens Say They're On YouTube or TikTok 'Almost Constantly' (cnbc.com) 70

Nearly 1 in 5 teenagers in the U.S. say they use YouTube and TikTok "almost constantly," according to a Pew Research Center survey. CNBC reports: The survey showed that YouTube was the most "widely used platform" for U.S.-based teenagers, with 93% of survey respondents saying they regularly use Google's video-streaming service. Of that 93% figure, about 16% of the teenage respondents said they "almost constantly visit or use" YouTube, underscoring the video app's immense popularity with the youth market. TikTok was the second-most popular app, with 63% of teens saying they use the ByteDance-owned short-video service, followed by Snapchat and Meta's Instagram, which had 60% and 59%, respectively. About 17% of the 63% of respondents who said they use TikTok indicated they access the short-video service "almost constantly," the report noted.

Meanwhile, Facebook and Twitter, now known as X, are not as popular with U.S.-based teenagers as they were a decade ago, the Pew Research study detailed. Regarding Facebook in particular, the Pew Research authors wrote that the share of teens who use the Meta-owned social media app "has dropped from 71% in 2014-2015 to 33% today." During the same period, Meta-owned Instagram's usage has not made up the difference in share, increasing from 52% in 2014-15 to a peak of 62% last year, then dropping to 59% in 2023, according to the firm.

Education

Harvard Accused of Bowing to Meta By Ousted Disinformation Scholar in Whistleblower Complaint (cjr.org) 148

The Washington Post reports: A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.

Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg's charitable arm. As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods. Last year, the school's dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position.

As one of the first researchers with access to "the Facebook papers" leaked by Frances Haugen, Donovan was asked to speak at a meeting of the Dean's Council, a group of the university's high-profile donors, remembers The Columbia Journalism Review: Elliot Schrage, then the vice president of communications and global policy for Meta, was also at the meeting. Donovan says that, after she brought up the Haugen leaks, Schrage became agitated and visibly angry, "rocking in his chair and waving his arms and trying to interrupt." During a Q&A session after her talk, Donovan says, Schrage reiterated a number of common Meta talking points, including the fact that disinformation is a fluid concept with no agreed-upon definition and that the company didn't want to be an "arbiter of truth."

According to Donovan, Nancy Gibbs, Donovan's faculty advisor, was supportive after the incident. She says that they discussed how Schrage would likely try to pressure Douglas Elmendorf, the dean of the Kennedy School of Government (where the Shorenstein Center hosting Donovan's project is based) about the idea of creating a public archive of the documents... After Elmendorf called her in for a status meeting, Donovan claims that he told her she was not to raise any more money for her project; that she was forbidden to spend the money that she had raised (a total of twelve million dollars, she says); and that she couldn't hire any new staff. According to Donovan, Elmendorf told her that he wasn't going to allow any expenditure that increased her public profile, and used a number of Meta talking points in his assessment of her work...

Donovan says she tried to move her work to the Berkman Klein Center at Harvard, but that the head of that center told her that they didn't have the "political capital" to bring on someone whom Elmendorf had "targeted"... Donovan told me that she believes the pressure to shut down her project is part of a broader pattern of influence in which Meta and other tech platforms have tried to make research into disinformation as difficult as possible... Donovan said she hopes that by blowing the whistle on Harvard, her case will be the "tip of the spear."

Another interesting detail from the article: [Donovan] alleges that Meta pressured Elmendorf to act, noting that he is friends with Sheryl Sandberg, the company's chief operating officer. (Elmendorf was Sandberg's advisor when she studied at Harvard in the early nineties; he attended Sandberg's wedding in 2022, four days before moving to shut down Donovan's project.)
AI

Meta Publicly Launches AI Image Generator Trained On Your Facebook, Instagram Photos (venturebeat.com) 28

An anonymous reader quotes a report from VentureBeat: Meta Platforms -- the parent company of Facebook, Instagram, WhatsApp and the Quest VR headsets, and creator of the leading open source large language model Llama 2 -- is getting into the text-to-image AI generator game. Actually, to clarify: Meta was already in that game via a text-to-image and text-to-sticker generator that was launched within Facebook and Instagram Messengers earlier this year. However, as of this week, the company has launched a standalone text-to-image AI generator service, "Imagine," outside of its messaging platforms. Meta's Imagine is now a website you can simply visit and begin generating images from: imagine.meta.com. You'll still need to log in with your Meta or Facebook/Instagram account (I tried Facebook, and it forced me to create a new "Meta account," but hey -- it still worked). [...]

Meta's Imagine service was built on its own AI model called Emu, which was trained on 1.1 billion Facebook and Instagram user photos, as noted by Ars Technica and disclosed in the Emu research paper published by Meta engineers back in September. An earlier report by Reuters noted that Meta excluded private messages and images that were not shared publicly on its services.

When developing Emu, Meta's researchers also fine-tuned it around quality metrics. As they wrote in their paper: "Our key insight is that to effectively perform quality tuning, a surprisingly small amount -- a couple of thousand -- exceptionally high-quality images and associated text is enough to make a significant impact on the aesthetics of the generated images without compromising the generality of the model in terms of visual concepts that can be generated." Interestingly, despite Meta's vocal support for open source AI, neither Emu nor the Imagine by Meta AI service appears to be open source.
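The quoted insight reduces to a simple pattern: rank a large candidate pool by some quality score, keep only the top couple of thousand examples, and fine-tune the pretrained model gently on that curated set. The toy PyTorch sketch below illustrates only that selection-plus-fine-tuning pattern with a stand-in linear model and random data; it is not Meta's Emu pipeline, and every name in it is a placeholder.

```python
import torch
from torch import nn

def quality_tune(model: nn.Module,
                 candidates: list[tuple[torch.Tensor, torch.Tensor]],
                 scores: list[float],
                 keep: int = 2000, epochs: int = 3, lr: float = 1e-5) -> nn.Module:
    # Keep only the examples with the highest quality scores (e.g. human
    # aesthetic ratings), then fine-tune at a low learning rate on that set.
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    curated = [example for _, example in ranked[:keep]]

    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, target in curated:
            optimizer.zero_grad()
            loss = loss_fn(model(x), target)
            loss.backward()
            optimizer.step()
    return model

# Toy usage: a stand-in "pretrained model" and a random candidate pool,
# not an image diffusion model.
model = nn.Linear(16, 16)
pool = [(torch.randn(16), torch.randn(16)) for _ in range(10_000)]
aesthetic_scores = [float(torch.rand(1)) for _ in pool]
quality_tune(model, pool, aesthetic_scores)
```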

Encryption

Meta Defies FBI Opposition To Encryption, Brings E2EE To Facebook, Messenger (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: Meta has started enabling end-to-end encryption (E2EE) by default for chats and calls on Messenger and Facebook despite protests from the FBI and other law enforcement agencies that oppose the widespread use of encryption technology. "Today I'm delighted to announce that we are rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook," Meta VP of Messenger Loredana Crisan wrote yesterday. In April, a consortium of 15 law enforcement agencies from around the world, including the FBI and ICE Homeland Security Investigations, urged Meta to cancel its plan to expand the use of end-to-end encryption. The consortium complained that terrorists, sex traffickers, child abusers, and other criminals will use encrypted messages to evade law enforcement.

Meta held firm, telling Ars in April that "we don't think people want us reading their private messages" and that the plan to make end-to-end encryption the default in Facebook Messenger would be completed before the end of 2023. Meta also plans default end-to-end encryption for Instagram messages but has previously said that may not happen this year. Meta said it is using "the Signal Protocol, and our own novel Labyrinth Protocol," and the company published two technical papers that describe its implementation (PDF). "Since 2016, Messenger has had the option for people to turn on end-to-end encryption, but we're now changing personal chats and calls across Messenger to be end-to-end encrypted by default. This has taken years to deliver because we've taken our time to get this right," Crisan wrote yesterday. Meta said it will take months to implement across its entire user base.
A post written by two Meta software engineers said the company "designed a server-based solution where encrypted messages can be stored on Meta's servers while only being readable using encryption keys under the user's control."

"Product features in an E2EE setting typically need to be designed to function in a device-to-device manner, without ever relying on a third party having access to message content," they wrote. "This was a significant effort for Messenger, as much of its functionality has historically relied on server-side processing, with certain features difficult or impossible to exactly match with message content being limited to the devices."

The company says it had "to redesign the entire system so that it would work without Meta's servers seeing the message content."
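A rough way to picture that design, with no claim about Meta's actual Labyrinth protocol: the client encrypts each message with a key that never leaves the device, and the server only ever stores opaque ciphertext. A minimal sketch using the third-party cryptography package, with AES-GCM as a stand-in for the real protocol and a plain dict standing in for the server:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

server_store = {}                       # stand-in for Meta's message servers

def client_send(user_key: bytes, message_id: str, plaintext: str) -> None:
    nonce = os.urandom(12)              # AES-GCM requires a unique 96-bit nonce
    ciphertext = AESGCM(user_key).encrypt(nonce, plaintext.encode(), None)
    server_store[message_id] = (nonce, ciphertext)   # server sees only this blob

def client_fetch(user_key: bytes, message_id: str) -> str:
    nonce, ciphertext = server_store[message_id]
    return AESGCM(user_key).decrypt(nonce, ciphertext, None).decode()

user_key = AESGCM.generate_key(bit_length=256)       # stays on the user's device
client_send(user_key, "msg-1", "meet at noon")
assert client_fetch(user_key, "msg-1") == "meet at noon"
print("server-side view:", server_store["msg-1"][1].hex()[:32], "...")
```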
Technology

How Tech Giants Use Money, Access To Steer Academic Research (washingtonpost.com) 19

Tech giants including Google and Facebook parent Meta have dramatically ramped up charitable giving to university campuses over the past several years -- giving them influence over academics studying such critical topics as artificial intelligence, social media and disinformation. From a report: Meta CEO Mark Zuckerberg alone has donated money to more than 100 university campuses, either through Meta or his personal philanthropy arm, according to new research by the Tech Transparency Project, a nonprofit watchdog group studying the technology industry. Other firms are helping fund academic centers, doling out grants to professors and sitting on advisory boards reserved for donors, researchers told The Post.

Silicon Valley's influence is most apparent among computer science professors at such top-tier schools as Berkeley, University of Toronto, Stanford and MIT. According to a 2021 paper by University of Toronto and Harvard researchers, most tenure-track professors in computer science at those schools whose funding sources could be determined had taken money from the technology industry, including nearly 6 of 10 scholars of AI. The proportion rose further in certain controversial subjects, the study found. Of 33 professors whose funding could be traced who wrote on AI ethics for the top journals Nature and Science, for example, all but one had taken grant money from the tech giants or had worked as their employees or contractors.

Security

Android Vulnerability Exposes Credentials From Mobile Password Managers (techcrunch.com) 22

An anonymous reader quotes a report from TechCrunch: A number of popular mobile password managers are inadvertently spilling user credentials due to a vulnerability in the autofill functionality of Android apps. The vulnerability, dubbed "AutoSpill," can expose users' saved credentials from mobile password managers by circumventing Android's secure autofill mechanism, according to university researchers at the IIIT Hyderabad, who discovered the vulnerability and presented their research at Black Hat Europe this week. The researchers, Ankit Gangwal, Shubham Singh and Abhijeet Srivastava, found that when an Android app loads a login page in WebView, password managers can get "disoriented" about where they should target the user's login information and instead expose their credentials to the underlying app's native fields. This happens because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser; when a login page is loaded this way, an autofill request is generated for it.

"Let's say you are trying to log into your favorite music app on your mobile device, and you use the option of 'login via Google or Facebook.' The music app will open a Google or Facebook login page inside itself via the WebView," Gangwal explained to TechCrunch prior to their Black Hat presentation on Wednesday. "When the password manager is invoked to autofill the credentials, ideally, it should autofill only into the Google or Facebook page that has been loaded. But we found that the autofill operation could accidentally expose the credentials to the base app." Gangwal notes that the ramifications of this vulnerability, particularly in a scenario where the base app is malicious, are significant. He added: "Even without phishing, any malicious app that asks you to log in via another site, like Google or Facebook, can automatically access sensitive information."

The researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage, even with JavaScript injection disabled. When JavaScript injection was enabled, all the password managers were susceptible to the AutoSpill vulnerability. Gangwal says he alerted Google and the affected password managers to the flaw. He tells TechCrunch that the researchers are now exploring the possibility of an attacker extracting credentials from the app to WebView. The team is also investigating whether the vulnerability can be replicated on iOS.

Encryption

Facebook Kills PGP-Encrypted Emails (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: In 2015, as part of the wave of encrypting all the things on the internet, encouraged by the Edward Snowden revelations, Facebook announced that it would allow users to receive encrypted emails from the company. Even at the time, this was a feature for the paranoid users. By turning on the feature, all emails sent from Facebook -- mostly notifications of "likes" and private messages -- to the users who opted-in would be encrypted with the decades-old technology called Pretty Good Privacy, or PGP. Eight years later, Facebook is killing the feature due to low usage, according to the company. The feature was deprecated Tuesday. Facebook declined to specify exactly how many users were still using the encrypted email feature.
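For readers who never turned the feature on, the mechanics were straightforward: Facebook encrypted each outbound notification email to the public key the user had uploaded, so only the holder of the matching private key could read it. A minimal sketch of that idea using the python-gnupg wrapper; it assumes GnuPG is installed and the recipient's public key is already in the keyring, and it is an illustration of the mechanics, not Facebook's implementation.

```python
import gnupg

# Illustration only: assumes the recipient's public key has been imported
# into the default local GnuPG keyring.
gpg = gnupg.GPG()

notification = "Alice commented on your photo."
recipient = "user@example.com"            # key ID or email in the keyring

encrypted = gpg.encrypt(notification, recipients=[recipient], always_trust=True)
if encrypted.ok:
    print(str(encrypted))                 # ASCII-armored PGP block, mailed as-is
else:
    print("encryption failed:", encrypted.status)
```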
Encryption

Beeper Mini is an iMessage-for-Android App That Doesn't Require Any Apple Device at All (liliputing.com) 122

An anonymous reader shares a report: Beeper has been offering a unified messaging platform for a few years, allowing users to open a single app to communicate with contacts via SMS, Google Chat, Facebook Messenger, Slack, Discord, WhatsApp, and perhaps most significantly, iMessage. Up until this week though, Android users who wanted to use Beeper to send "blue bubble" messages to iMessage users had their messages routed through a Mac or iOS device. Now Beeper has launched a new app called Beeper Mini that handles everything on-device, no iPhone or Mac bridge required.

Beeper Mini is available now from the Google Play Store, and offers a 7-day free trial. After that, it costs $2 per month to keep using. [...] So how does Beeper Mini pull this off, when previously the company had to rely on a Mac-in-the-cloud? The company explains the method it's using in a blog post, but in a nutshell, Beeper says a security researcher has reverse engineered "the iMessage protocol and encryption," so that "all messages are sent and received by Beeper Mini Android app directly to Apple's servers" and "the encryption keys needed to encrypt these messages never leave your phone." That security researcher, by the way, is a high school student who goes by jjtech and was hired by Beeper after showing the company his code. A proof-of-concept Python script is also available on GitHub if you'd like to run it to send messages to iMessage from a PC.

AI

Meta, IBM Create Industrywide AI Alliance To Share Technology (bloomberg.com) 6

Meta and IBM are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks. From a report: The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models -- rather than the proprietary systems favored by some companies -- develop new hardware and team up with academic researchers.

Proponents of open source AI technology, which is made public by developers for others to use, see the approach as a more efficient way to cultivate the highly complex systems. Over the past few months, Meta has been releasing open source versions of its large language models, which are the foundation of AI chatbots.

AI

Meta Will Enforce Ban On AI-Powered Political Ads In Every Nation, No Exceptions (zdnet.com) 15

An anonymous reader quotes a report from ZDNet: Meta says its generative artificial intelligence (AI) advertising tools cannot be used to power political campaigns anywhere globally, with access blocked for ads targeting specific services and issues. The social media giant said earlier this month that advertisers will be barred from using generative AI tools in its Ads Manager tool to produce ads for politics, elections, housing, employment, credit, or social issues. Ads related to health, pharmaceuticals, and financial services also are not allowed access to the generative AI features. This policy will apply globally, as Meta continues to test its generative AI ads creation tools, confirmed Dan Neary, Meta's Asia-Pacific vice president. "This approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries," said Neary.
Facebook

Meta Says There's Been No Downside To Sharing AI Technology (bloomberg.com) 30

Meta executives said there have been no major drawbacks to openly sharing its AI technology, even as many peers take the opposite approach. From a report: Over the past few months, Meta has been releasing open-source versions of its large language models -- the technology behind AI chatbots like ChatGPT. The idea is to keep those models free and then gain an advantage by building products and services on top of them, executives said at an event for the company's AI research lab, FAIR. "There is really no commercial downside to also making it available to other people," said Yann LeCun, Meta's chief AI scientist. Meta has joined most of the world's biggest technology companies in embracing generative AI, which can create text, images and even video based on simple prompts. But they aren't taking the same path.

Many of the top AI developers, including OpenAI and Google's DeepMind, don't currently open-source their large language models. Companies are often fearful of opening up their work because competitors could steal it, said Mike Schroepfer, Meta's senior fellow and former chief technology officer. "I feel like we're approaching this world where everyone is closing down as it becomes competitively important," he said. But staying open has its advantages. Meta can rely on thousands of developers across the world to help enhance its AI models.

Businesses

Tech's New Normal: Microcuts Over Growth at All Costs (wsj.com) 78

The tech industry has largely recovered from the downturn, but Silicon Valley learned a long-lasting lesson: how to do more with less. From a report: Amazon, Google, Microsoft and Meta Platforms have been cutting dozens or a few hundred employees at a time as executives keep tight controls on costs, even as their businesses and stock prices have rebounded sharply. The cuts are far smaller than the mass layoffs that reached tens of thousands in late 2022 and early this year. But they suggest a new era for an industry that in years past grew with little restraint, one in which companies are focusing on efficiency and acting more like their corporate peers that emphasize shareholder value and healthy margins.

The launch of the humanlike chatbot ChatGPT late last year served as a bright spot of growth in an industry that was otherwise scaling back. Challenges regarding the technology and calls for regulation remain, but some of the biggest tech companies are starting to make it their priority. There is a reallocation of resources from noncore areas to projects such as AI rather than hiring new people, said Ward, who was previously a director of recruiting at Facebook and the head of recruiting at Pinterest.

Amazon eliminated several hundred roles this month from its Alexa division to maximize its "resources and efforts focused on generative AI," according to an internal memo. The company has also made small cuts in recent weeks to its gaming and music divisions. Facebook's parent, Meta, recently posted its largest quarterly revenue in more than a decade. It laid off 20 people weeks later. Chief Executive Officer Mark Zuckerberg said on an earnings call that the company would continue to operate more efficiently going forward "both because it creates a more disciplined and lean culture, and also because it provides stability to see our long-term initiatives through in a very volatile world."

Facebook

Meta Designed Platforms To Get Children Addicted, Court Documents Allege (theguardian.com) 64

An anonymous reader quotes a report from The Guardian: Instagram and Facebook parent company Meta purposefully engineered its platforms to addict children and knowingly allowed underage users to hold accounts, according to a newly unsealed legal complaint. The complaint is a key part of a lawsuit filed against Meta by the attorneys general of 33 states in late October and was originally redacted. It alleges the social media company knew -- but never disclosed -- it had received millions of complaints about underage users on Instagram but only disabled a fraction of those accounts. The large number of underage users was an "open secret" at the company, the suit alleges, citing internal company documents.

In one example, the lawsuit cites an internal email thread in which employees discuss why a 12-year-old girl's four accounts were not deleted following complaints from the girl's mother stating her daughter was 12 years old and requesting that the accounts be taken down. The employees concluded that "the accounts were ignored" in part because representatives of Meta "couldn't tell for sure the user was underage." The complaint said that in 2021, Meta received over 402,000 reports of under-13 users on Instagram but that 164,000 -- far fewer than half of the reported accounts -- were "disabled for potentially being under the age of 13" that year. The complaint noted that at times Meta has a backlog of up to 2.5 million accounts of younger children awaiting action. The complaint alleges this and other incidents violate the Children's Online Privacy Protection Act, which requires that social media companies provide notice and get parental consent before collecting data from children. The lawsuit also focuses on longstanding assertions that Meta knowingly created products that were addictive and harmful to children, brought into sharp focus by whistleblower Frances Haugen, who revealed that internal studies showed platforms like Instagram led children to anorexia-related content. Haugen also stated the company intentionally targets children under the age of 18.

Company documents cited in the complaint described several Meta officials acknowledging the company designed its products to exploit shortcomings in youthful psychology, including a May 2020 internal presentation called "teen fundamentals" which highlighted certain vulnerabilities of the young brain that could be exploited by product development. The presentation discussed teen brains' relative immaturity, and teenagers' tendency to be driven by "emotion, the intrigue of novelty and reward" and asked how these characteristics could "manifest ... in product usage." [...] One Facebook safety executive alluded to the possibility that cracking down on younger users might hurt the company's business in a 2019 email. But a year later, the same executive expressed frustration that while Facebook readily studied the usage of underage users for business reasons, it didn't show the same enthusiasm for ways to identify younger kids and remove them from its platforms.

Facebook

Russia Puts Spokesman For Facebook-owner Meta on a Wanted List (yahoo.com) 100

Russia has added the spokesman of U.S. technology company Meta, which owns Facebook and Instagram, to a wanted list, according to an online database maintained by the country's interior ministry. From a report: Russian state agency Tass and independent news outlet Mediazona first reported that Meta communications director Andy Stone was included on the list Sunday, weeks after Russian authorities in October classified Meta as a "terrorist and extremist" organization, opening the way for possible criminal proceedings against Russian residents using its platforms.

The interior ministry's database doesn't give details of the case against Stone, stating only that he is wanted on criminal charges. According to Mediazona, an independent news website that covers Russia's opposition and prison system, Stone was put on the wanted list in February 2022, but authorities made no related statements at the time and no news media reported on the matter until this week. In March this year, Russia's federal Investigative Committee opened a criminal investigation into Meta.
