Software

'Software Isn't Dead, But Its Cosy Business Model Might Be' (ft.com) 27

The software industry's decades-old habit of charging companies a flat fee for every employee who uses a product is running into a fundamental problem: AI agents don't sit in chairs, and they don't need licences.

As autonomous agents take on tasks that human workers once handled, the per-seat pricing model that made SaaS revenue so predictable is giving way to consumption-based and hybrid alternatives. Snowflake and Databricks (valued at $134 billion) already charge based on usage. Salesforce initially priced its Agentforce customer relations bot at $2 per conversation but faced customer pushback and now offers action-based pricing, upfront credits and fixed fees.
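As a rough illustration of why the economics shift (all figures below are hypothetical, except the $2-per-conversation price mentioned above): per-seat revenue scales with headcount, while consumption pricing scales with activity that agents, not people, generate.

```python
# Hypothetical comparison of per-seat vs consumption-based pricing.
# Figures are illustrative, not vendor list prices.

def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Flat licence fee charged for every human user."""
    return seats * price_per_seat

def consumption_revenue(events: int, price_per_event: float) -> float:
    """Usage-based fee, e.g. per agent conversation or action."""
    return events * price_per_event

# A 100-person team pays the same whether its agents run zero tasks
# or a million of them...
print(per_seat_revenue(100, 50.0))       # → 5000.0 per month
# ...while consumption revenue tracks the agents' workload directly.
print(consumption_revenue(20_000, 2.0))  # → 40000.0 at $2/conversation
```

The predictability investors prize lives in the first function; the second rises and falls with usage, which is exactly what makes hybrid models (upfront credits plus usage fees) attractive to vendors.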

ServiceNow's finance chief Amit Zavery said last month that some customers aren't ready for purely consumption-based models. Goldman Sachs estimates US software spending will nearly triple to $2.8 trillion by 2037 as automated tasks blur the boundary between IT and wage budgets, but that money will no longer arrive in the neat recurring instalments that investors and private equity firms have come to expect.
Sony

Sony Tech Can Identify Original Music in AI-Generated Songs (nikkei.com) 40

Sony Group has developed a technology that can identify the underlying music used in tunes generated by AI, making it possible for songwriters to seek compensation from AI developers if their music was used. From a report: Sony Group's technology analyzes which musicians' songs were used in training and generating music. It can quantify each original work's contribution -- for example, "30% from the Beatles' music and 10% from Queen's."

If the AI developer agrees to cooperate with the analysis, Sony Group will obtain data by connecting to the developer's base model system. When cooperation cannot be secured, the technology estimates the original works by comparing the AI-generated music with existing recordings. The AI boom has sparked numerous cases in which AI developers are accused of using copyrighted music, video and writing without permission to train their models. In the music industry, AI-generated songs using the voices of well-known singers have been distributed online. The Japanese company believes the technology will help create a system that distributes revenue generated by AI music to the original songwriters based on their contribution.
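The report doesn't describe Sony's method, but the "compare AI output with existing music" fallback can be sketched as a similarity computation: score the generated track against each catalogue entry, then normalise the scores into percentages. Everything below (the toy feature vectors, cosine similarity as the metric, the normalisation step) is an illustrative assumption, not Sony's actual algorithm.

```python
# Illustrative sketch of similarity-based contribution estimation.
# A real system would use learned audio embeddings of recordings;
# here each track is just a small hand-made feature vector.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contributions(generated, catalogue):
    """Normalise similarity scores into percentage contributions."""
    scores = {name: max(cosine(generated, vec), 0.0)
              for name, vec in catalogue.items()}
    total = sum(scores.values()) or 1.0
    return {name: round(100 * s / total, 1) for name, s in scores.items()}

catalogue = {"Artist A": [0.9, 0.1, 0.3], "Artist B": [0.2, 0.8, 0.5]}
print(contributions([0.7, 0.4, 0.4], catalogue))
```

Even this toy version shows the hard part: the output percentages depend entirely on how the similarity metric and catalogue are chosen, which is presumably why Sony prefers direct access to the developer's base model when it can get it.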

Earth

New EU Rules To Stop the Destruction of Unsold Clothes and Shoes (europa.eu) 111

The European Commission has adopted new measures under the Ecodesign for Sustainable Products Regulation (ESPR) to prevent the destruction of unsold apparel, clothing accessories and footwear. From a report: The rules will help cut waste, reduce environmental damage and create a level playing field for companies embracing sustainable business models, allowing them to reap the benefits of a more circular economy. Every year in Europe, an estimated 4-9% of unsold textiles are destroyed before ever being worn. This waste generates around 5.6 million tons of CO2 emissions -- almost equal to Sweden's total net emissions in 2021. To help reduce this wasteful practice, the ESPR requires companies to disclose information on the unsold consumer products they discard as waste. It also introduces a ban on the destruction of unsold apparel, clothing accessories and footwear.
AI

Pentagon Threatens Anthropic Punishment (axios.com) 151

An anonymous reader shares a report: Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" -- meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

That kind of penalty is usually reserved for foreign adversaries. Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities.

AI

Will Tech Giants Just Use AI Interactions to Create More Effective Ads? (seattletimes.com) 59

Google never asked its users before adding AI Overviews to its search results and AI-generated email summaries to Gmail, notes the New York Times. And Meta didn't ask before making "Meta AI" an unremovable part of Instagram, WhatsApp and Messenger.

"The insistence on AI everywhere — with little or no option to turn it off — raises an important question about what's in it for the internet companies..." Behind the scenes, the companies are laying the groundwork for a digital advertising economy that could drive the future of the internet. The underlying technology that enables chatbots to write essays and generate pictures for consumers is being used by advertisers to find people to target and automatically tailor ads and discounts to them....

Last month, OpenAI said it would begin showing ads in the free version of ChatGPT based on what people were asking the chatbot and what they had looked for in the past. In response, a Google executive mocked OpenAI, adding that Google had no plans to show ads inside its Gemini chatbot. What he didn't mention, however, was that Google, whose profits are largely derived from online ads, shows advertising on Google.com based on user interactions with the AI chatbot built into its search engine.

For the past six years, as regulators have cracked down on data privacy, the tech giants and online ad industry have moved away from tracking people's activities across mobile apps and websites to determine what ads to show them. Companies including Meta and Google had to come up with methods to target people with relevant ads without sharing users' personal data with third-party marketers. When ChatGPT and other AI chatbots emerged about four years ago, the companies saw an opportunity: The conversational interface of a chatty companion encouraged users to voluntarily share data about themselves, such as their hobbies, health conditions and products they were shopping for.

The strategy already appears to be working. Web search queries are up industrywide, including for Google and Bing, which have been incorporating AI chatbots into their search tools. That's in large part because people prod chatbot-powered search engines with more questions and follow-up requests, revealing their intentions and interests much more explicitly than when they typed a few keywords for a traditional internet search.

Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports, adding that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.
AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com) 8

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [Just in the U.S., 125 million people were watching Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl, and Gemini added 1.4%. Claude's user base is still much smaller than those of ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions."
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

Transportation

Detroit Automakers Take $50 Billion Hit (msn.com) 179

The Detroit Big Three -- General Motors, Ford and Stellantis -- have collectively announced more than $50 billion in write-downs on their electric-vehicle businesses after years of aggressive investment into a transition that, even before Republican lawmakers abolished a $7,500 federal tax credit last fall, was already running below expectations.

U.S. EV sales fell more than 30% in the fourth quarter of 2025 once the credit expired in September, and Congress also eliminated federal fuel-efficiency mandates. More than $20 billion in previously announced investments in EV and battery facilities were canceled last year -- the first net annual decrease in years, according to Atlas Public Policy.

GM has laid off thousands of workers and is converting plants once earmarked for EV trucks and motors to produce gas-powered trucks and V-8 engines. Ford dissolved a joint venture with a South Korean conglomerate to make batteries and now plans to build just one low-cost electric pickup by 2027. Stellantis is unloading its stake in a battery-making business after booking the largest EV-related charge of any automaker so far. Outside the U.S., the trajectory looks different: China's BYD recently overtook Tesla as the world's largest EV seller.
Facebook

Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead (businessinsider.com) 89

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone.

Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.
AI

FTC Ratchets Up Microsoft Probe, Queries Rivals on Cloud, AI (bloomberg.com) 19

The US Federal Trade Commission is accelerating scrutiny of Microsoft as part of an ongoing probe into whether the company illegally monopolizes large swaths of the enterprise computing market with its cloud software and AI offerings, including Copilot. From a report: The agency has issued civil investigative demands in recent weeks to companies that compete with Microsoft in the business software and cloud computing markets, according to people familiar with the matter. The demands feature an array of questions on Microsoft's licensing and other business practices, according to the people, who were granted anonymity to discuss a confidential investigation.

With the demands, which are effectively like civil subpoenas, the FTC is seeking evidence that Microsoft makes it harder for customers to use Windows, Office and other products on rival cloud services. The agency is also requesting information on Microsoft's bundling of artificial intelligence, security and identity software into other products, including Windows and Office, some of the people said.

United States

CIA Makes New Push To Recruit Chinese Military Officers as Informants (reuters.com) 72

An anonymous reader shares a report: Just weeks after a dramatic purge of China's top general, the CIA is moving to capitalize on any resulting discord with a new public video targeting potential informants in the Chinese military. The U.S. spy agency on Thursday rolled out the video depicting a disillusioned mid-level Chinese military officer, in the latest U.S. step in a campaign to ramp up human intelligence gathering on Washington's strategic rival.

It follows a similar effort last May, which used fictional figures within China's ruling Communist Party and provided detailed Chinese-language instructions on how to securely contact U.S. intelligence. CIA Director John Ratcliffe said in a statement that the agency's videos had reached many Chinese citizens and that it would continue offering Chinese government officials an "opportunity to work toward a brighter future together."

IBM

IBM Plans To Triple Entry-Level Hiring in the US (bloomberg.com) 39

IBM said it will triple entry-level hiring in the US in 2026, even as AI appears to be weighing on broader demand for early-career workers. From a report: While the company declined to disclose specific hiring figures, it said the expansion will be "across the board," affecting a wide range of departments. "And yes, it's for all these jobs that we're being told AI can do," said Nickle LaMoreaux, IBM's chief human resources officer, speaking at a conference this week in New York.

LaMoreaux said she overhauled entry-level job descriptions for software developers and other roles to make the case internally for the recruitment push. "The entry-level jobs that you had two to three years ago, AI can do most of them," she said at Charter's Leading With AI Summit. "So, if you're going to convince your business leaders that you need to make this investment, then you need to be able to show the real value these individuals can bring now. And that has to be through totally different jobs."

Programming

Amazon Engineers Want Claude Code, but the Company Keeps Pushing Its Own Tool (businessinsider.com) 40

Amazon engineers have been pushing back against internal policies that steer them toward Kiro, the company's in-house AI coding assistant, and away from Anthropic's Claude Code for production work, according to a Business Insider report based on internal messages. About 1,500 employees endorsed the formal adoption of Claude Code in one internal forum thread, and some pointed out the awkwardness of being asked to sell the tool through AWS's Bedrock platform while not being permitted to use it themselves.

Kiro runs on Anthropic's Claude models but uses Amazon's own tooling, and the company says roughly 70% of its software engineers used it at least once in January. Amazon says there is no explicit ban on Claude Code but applies stricter requirements for production use.
United States

US Had Almost No Job Growth in 2025 (nbcnews.com) 106

An anonymous reader shares a report: The U.S. economy experienced almost zero job growth in 2025, according to revised federal data. On a more encouraging note: hiring has picked up in 2026. Preliminary data had indicated that the U.S. economy added 584,000 jobs last year. But the Bureau of Labor Statistics revised that number after it received additional state data, and found that the labor market had added 181,000 jobs in all of 2025. This is far fewer than the 1.46 million jobs that were added in 2024.

One bright spot was last month, when hiring increased by 130,000 roles. This was significantly more than the 55,000 additions that had been expected by economists. "Job gains occurred in health care, social assistance, and construction, while federal government and financial activities lost jobs," BLS said in a statement.

AI

The First Signs of Burnout Are Coming From the People Who Embrace AI the Most 61

An anonymous reader shares a report: The most seductive narrative in American work culture right now isn't that AI will take your job. It's that AI will save you from it. That's the version the industry has spent the last three years selling to millions of nervous people who are eager to buy it. Yes, some white-collar jobs will disappear. But for most other roles, the argument goes, AI is a force multiplier. You become a more capable, more indispensable lawyer, consultant, writer, coder, financial analyst -- and so on. The tools work for you, you work less hard, everybody wins.

But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn't a productivity revolution. It finds companies are at risk of becoming burnout machines.

As part of what they describe as "in-progress research," UC Berkeley researchers spent eight months inside a 200-person tech company watching what happened when workers genuinely embraced AI. What they found across more than 40 "in-depth" interviews was that nobody was pressured at this company. Nobody was told to hit new targets. People just started doing more because the tools made more feel doable. But because they could do these things, work began bleeding into lunch breaks and late evenings. The employees' to-do lists expanded to fill every hour that AI freed up, and then kept going.
Software

Software Poses 'All-Time' Risk To Speculative Credit, Deutsche Bank Warns (bloomberg.com) 22

The software and technology sectors pose one of the all-time great concentration risks to the speculative-grade credit market, according to Deutsche Bank AG analysts. Bloomberg: They comprise $597 billion and $681 billion of the speculative-grade credit universe, or about 14% and 16% respectively, analysts led by Steve Caprio wrote in a Monday note. Speculative debt spans high-yield debt, leveraged loans and US private credit.

That's "a meaningful chunk of debt outstanding that risks souring broader sentiment, if software defaults increase," the analysts wrote, with "a potential impact that would rival that of the Energy sector in 2016." Unlike in 2016, pressures would likely first emerge in private credit, business development companies and leveraged loans, with the high-yield market weakening later, the analysts added.

The rapid adoption of artificial intelligence tools risks further weighing down multiples and revenues for software-as-a-service firms, while the US Federal Reserve's hawkish stance since 2022 has pressured cash flows, the analysts wrote. For instance, software payment-in-kind loan usage has risen to 11.3% in BDC portfolios, over 2.5 percentage points higher than the already elevated index average of 8.7%, according to Deutsche. PIK deals typically allow borrowers to pay interest in more debt rather than cash.
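The PIK mechanism described above is easy to see with a toy example (the loan size and rate here are hypothetical, not figures from the Deutsche Bank note): because unpaid interest is capitalised into principal, the balance compounds rather than amortising, which can mask cash-flow stress until the debt matures.

```python
# Toy payment-in-kind (PIK) loan: interest is added to principal
# instead of being paid in cash. All figures are hypothetical.

def pik_balance(principal: float, annual_rate: float, years: int) -> float:
    """Balance after `years` of fully PIK'd interest."""
    balance = principal
    for _ in range(years):
        balance += balance * annual_rate  # interest paid "in kind"
    return balance

# A $100M loan at 12% PIK grows even though no cash changes hands.
for year in (1, 3, 5):
    print(year, round(pik_balance(100.0, 0.12, year), 1))
# → 1 112.0
# → 3 140.5
# → 5 176.2
```

That compounding is why rising PIK usage in BDC portfolios is read as an early warning sign: the borrower's obligation grows every period precisely because it couldn't pay cash interest.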

AI

OpenAI Starts Running Ads in ChatGPT (openai.com) 70

OpenAI has started testing ads inside ChatGPT for logged-in adult users on the Free and Go subscription tiers in the United States, the company said. The Plus, Pro, Business, Enterprise and Education tiers remain ad-free. Ads are matched to users based on conversation topics, past chats, and prior ad interactions, and appear clearly labeled as "sponsored" and visually separated from ChatGPT's organic responses.

OpenAI says the ads do not influence ChatGPT's answers, and advertisers receive only aggregate performance data like view and click counts rather than access to individual conversations. Users under 18 do not see ads, and ads are excluded from sensitive topics such as health, mental health, and politics. Free-tier users can opt out of ads in exchange for fewer daily messages.

Further reading: Anthropic Pledges To Keep Claude Ad-free, Calls AI Conversations a 'Space To Think'.
AI

Romance Publishing Has an AI Problem and Most Readers Don't Know It Yet (nytimes.com) 104

The romance genre -- long the publishing industry's earliest adopter of technological shifts, from e-books to self-publishing to serial releases -- has become the front line for AI-generated fiction, and the results, as you can imagine, are messy. Coral Hart, a Cape Town-based novelist previously published by Harlequin and Mills & Boon, produced more than 200 AI-assisted romance novels last year and self-published them on Amazon, where they collectively sold around 50,000 copies. She found Anthropic's Claude delivered the most elegant prose but was terrible at sexy banter; other programs like Grok and NovelAI wrote graphic scenes that felt rushed and mechanical. Chatbots struggled broadly to build the slow-burn sexual tension romance readers crave, she said.

A BookBub survey of more than 1,200 authors found roughly a third were using generative AI for plotting, outlining, or writing, and the majority did not disclose this to readers. Romance accounts for more than 20% of all adult fiction print sales, according to Circana BookScan, and the genre's reliance on familiar tropes and narrative formulas makes it especially susceptible to AI disruption.
Moon

SpaceX Prioritizes Lunar 'Self-Growing City' Over Mars Project, Musk Says (reuters.com) 157

"Elon Musk said on Sunday that SpaceX has shifted its focus to building a 'self-growing city' on the moon," reports Reuters, "which could be achieved in less than 10 years." SpaceX still intends to start on Musk's long-held ambition of a city on Mars within five to seven years, he wrote on his X social media platform, "but the overriding priority is securing the future of civilization and the Moon is faster."

Musk's comments echo a Wall Street Journal report on Friday, stating that SpaceX has told investors it would prioritize going to the moon and attempt a trip to Mars at a later time, targeting March 2027 for an uncrewed lunar landing. As recently as last year, Musk said that he aimed to send an uncrewed mission to Mars by the end of 2026.
