Government

Pentagon Wants $54 Billion For Drones (arstechnica.com) 59

An anonymous reader quotes a report from Ars Technica: The US military's massive $1.5 trillion budget request for the next fiscal year includes what Pentagon officials described as the largest investment in drone warfare and counter-drone technology in US history. The proposed spending on drone and autonomous warfare technologies within the FY2027 budget proposal for the US Department of Defense would surpass most countries' defense budgets and rank among the top 10 in the world for military spending, ahead of countries such as Ukraine, South Korea, and Israel.

Specifically, the Pentagon is requesting $53.6 billion to boost US production and procurement of drones, train drone operators, build out a logistics network for sustaining drone deployments, and expand counter-drone systems to defend more US military sites. The funding request is budgeted under the Defense Autonomous Warfare Group (DAWG), an organization established in late 2025 that would see a massive budget increase after receiving about $226 million in the 2026 fiscal year budget.

[...] Another $20.6 billion would help purchase one-way attack drones and drone aircraft developed through the US Air Force's Collaborative Combat Aircraft program, which is building drone prototypes capable of teaming up with human-piloted fighter jets. Part of this funding would also go toward defensive systems for countering small drones and the US Navy's Boeing MQ-25 drone designed to perform midair refueling of carrier-borne fighter aircraft to extend their strike ranges. Such drone-related spending even rivals the entire budget of the US Marine Corps. But the Pentagon has not said that it is creating a dedicated drone branch of the US military similar to the standalone Space Force.

Pentagon officials emphasized that most of the money would go toward procuring drone and autonomous warfare technologies that already exist, and that it is largely separate from additional funding that would bolster US domestic manufacturing capacity to build such weapon systems. "That $70 billion is all going into existing systems and technologies," said Hurst. "The industrial base support is entirely separate."
"The evolution we've seen in the battlefield is this evolution of technologies in the timeframe of weeks, not the typical years we see with our defense production," said Lt. Gen. Steven Whitney, director of force structure, resources, and assessment for the Pentagon's Joint Chiefs of Staff, during a Pentagon press briefing. "So it's really critical we work with industry to get that capability fielded."
Government

NSA Using Anthropic's Mythos Despite Blacklist (axios.com) 65

Axios reports that the NSA is using Anthropic's restricted Mythos Preview model despite the Pentagon insisting the company poses a "supply chain risk." Axios reports: The government's cybersecurity needs appear to be outweighing the Pentagon's feud with Anthropic. The department moved in February to cut off Anthropic and force its vendors to follow suit. That case is ongoing. The military is now broadening its use of Anthropic's tools while simultaneously arguing in court that using those tools threatens U.S. national security.

Two sources said the NSA was using Mythos, while one said the model was also being used more widely within the department. It's unclear how the NSA is currently using Mythos, but other organizations with access to the model are using it predominantly to scan their own environments for exploitable security vulnerabilities.

Anthropic restricted access to Mythos to around 40 organizations, contending that its offensive cyber capabilities were too dangerous to allow for a wider release. Anthropic has publicly named only 12 of those organizations. One source said the NSA was among the unnamed agencies with access. The NSA's counterparts in the U.K. have said they have access to the model through the country's AI Security Institute.
Anthropic's CEO met with top U.S. officials on Friday to discuss "opportunities for collaboration," according to a White House spokesperson, "as well as shared approaches and protocols to address the challenges associated with scaling this technology."
Government

Google, Pentagon Discuss Classified AI Deal (reuters.com) 19

An anonymous reader quotes a report from Reuters: Alphabet's Google is negotiating an agreement with the Department of Defense that would allow the Pentagon to deploy its Gemini AI models in classified settings, The Information reported on Thursday, citing two people with direct knowledge of the discussions. The two parties are discussing an agreement that would allow the Pentagon to use Google's AI for all lawful uses, according to the report.

During the negotiations, Google has proposed additional language in its contract with the department to prevent its AI from being used for domestic mass surveillance or autonomous weapons without appropriate human control, The Information reported. The Pentagon will continue to deploy frontier AI capabilities through strong industry partnerships across all classification levels, a Pentagon official said, without confirming any talks with Google.

Sci-Fi

Air Force Pushed Out UFO Investigator (yahoo.com) 114

J. Allen Hynek started as an Air Force consultant brought in to help explain away early UFO reports, but over time he grew frustrated with what he saw as the government's effort to minimize unexplained cases rather than seriously investigate them. Longtime Slashdot reader schwit1 shares an article from Popular Mechanics, in collaboration with Biography.com, that argues Hynek's shift from skeptic to advocate helped shape modern ufology, and that the Air Force's attempts to control the narrative may have deepened the public distrust and conspiracy thinking that followed. From the report: Do you think the U.S. government is hiding, and possibly reverse-engineering, extraterrestrial technology? Think again. Or better yet, don't think about it at all. Nothing to see here. That's the underlying message of a report released in 2024 by the Department of Defense. The 63-page "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP)" concludes that the DoD's All-Domain Anomaly Resolution Office (AARO) "found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology."

The AARO, as The Guardian summarizes, is "a government office established in 2022 to detect and, as necessary, mitigate threats including 'anomalous, unidentified space, airborne, submerged and transmedium objects.'" This report came on the heels of, and in contradiction to, what was arguably the most high-profile hearing on UAPs -- formerly known as unidentified flying objects, or UFOs -- in decades: the August 2023 testimony of "whistleblower" Dave Grusch.

[...] The 2024 AARO report stated that during the time Hynek was working with Project Blue Book [the U.S. Air Force's best-known UFO investigation program], "about 75 percent of Americans trusted the [US government] 'to do the right thing almost always or most of the time.'" But, the report noted, since 2007, that number has never risen above 30 percent. "This lack of trust probably has contributed to the belief held by some subset of the U.S. population that the USG has not been truthful regarding knowledge of extraterrestrial craft."

Ultimately, the Air Force's efforts to stifle Hynek -- pressuring him to offer the public standard responses to questions he wasn't even allowed to ask -- appear to have backfired. Ironically, the Air Force's attempts to quiet suspicions only fueled them, leading to more conspiracy theories and distrust. People came to believe that the government was hiding the truth, contrary to Hynek's actual revelation: that, in reality, the people at the top may not care much about finding the answers after all.

AI

Anthropic Asks Christian Leaders for Help Steering Claude's Spiritual Development (msn.com) 162

Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post: Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God."

"They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations...

Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...

Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind [of] moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said.

Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.

"Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University."
Government

To Fill Air Traffic Controller Shortage, FAA Turns To Gamers (nytimes.com) 80

An anonymous reader quotes a report from the New York Times: As the Trump administration seeks to fill a national shortage of air traffic controllers, officials are targeting a new talent pool: gamers. The Federal Aviation Administration on Friday is making a recruiting push aimed at avid players of video games, as the agency strives to fill thousands of vacancies that lawmakers have said leave the traveling public less safe. In a new YouTube ad, the agency is using flashy graphics and the promise of six-figure salaries to convince video game enthusiasts to apply their trigger fingers in service of air safety.

In recent years, video gamers have emerged as a target demographic for recruiters at a number of federal agencies, including the military and the Department of Homeland Security. They are welcomed for their hand-eye coordination, quick decision-making in complex environments and ability to remain focused on screens for hours on end. "To reach the next generation of air traffic controllers, we need to adapt," Transportation Secretary Sean Duffy said in a statement. Focusing recruiting efforts on gamers, he added, "taps into a growing demographic of young adults who have many of the hard skills it takes to be a successful controller."

[...] The F.A.A. plans to begin prioritizing recruiting gamers over more traditional avenues like college fairs, officials said, pointing out that only 25 percent of controllers have a traditional college degree, while the vast majority appear to have logged hours gaming. During the presidential transition in 2024, incoming Trump administration officials polled about 250 new air traffic academy graduates over six weeks. Only two of those interviewed were not gamers, according to F.A.A. officials [...]. Students who failed out of the training academy were not similarly queried, officials said, though they have plans to conduct more comprehensive exit interviews in the future. Still, the overwhelming presence of gaming habits among graduates tracked with what they were hearing anecdotally from controllers already certified to work in towers and other air traffic facilities, the officials said, many of whom liked to play video games during breaks in their shifts.

Privacy

Hacker Steals 10 Petabytes of Data From China's Tianjin Supercomputer Center (cnn.com) 71

An anonymous reader quotes a report from CNN: A hacker has allegedly stolen a massive trove of sensitive data -- including highly classified defense documents and missile schematics -- from a state-run Chinese supercomputer in what could be the largest known heist of data from China. The dataset, which allegedly contains more than 10 petabytes of sensitive information, is believed by experts to have been obtained from the National Supercomputing Center (NSCC) in Tianjin -- a centralized hub that provides infrastructure services for more than 6,000 clients across China, including advanced science and defense agencies.

Cyber experts who have spoken to the alleged hacker and reviewed samples of the stolen data posted online say the hacker appears to have gained entry to the massive computer with comparative ease and siphoned out huge amounts of data over the course of multiple months without being detected. An account calling itself FlamingChina posted a sample of the alleged dataset on an anonymous Telegram channel on February 6, claiming it contained "research across various fields including aerospace engineering, military research, bioinformatics, fusion simulation and more." The group alleges the information is linked to "top organizations" including the Aviation Industry Corporation of China, the Commercial Aircraft Corporation of China, and the National University of Defense Technology.

Cybersecurity experts who have reviewed the data say the group is offering a limited preview of the alleged dataset for thousands of dollars, with full access priced at hundreds of thousands of dollars. Payment was requested in cryptocurrency. CNN cannot verify the origins of the alleged dataset and the claims made by FlamingChina, but spoke with multiple experts whose initial assessment of the leak indicated it was genuine. The alleged sample data appeared to include documents marked "secret" in Chinese, along with technical files, animated simulations and renderings of defense equipment including bombs and missiles.

The Courts

Anthropic Loses Appeals Court Bid To Temporarily Block Pentagon Blacklisting (cnbc.com) 40

A federal appeals court denied Anthropic's bid to temporarily block the Pentagon's blacklisting, meaning the company remains shut out of Defense Department contracts while the case continues, even though a separate court has allowed other federal agencies to keep using Claude for now. CNBC reports: "In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic's motion for a stay pending review on the merits." With the split decisions by the two courts, Anthropic is excluded from DOD contracts but is able to continue working with other government agencies while litigation plays out. Defense contractors will be prohibited from using Claude in their work with the agency, but they can use it for other cases.

[...] In the ruling on Wednesday, the court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but that the company's interests "seem primarily financial in nature." While the company claimed the DOD was standing in the way of its right to free speech, "Anthropic does not show that its speech has been chilled during the pendency of this litigation," the order said. Because of the harm Anthropic is likely to suffer, the appeals court said "substantial expedition is warranted."

An Anthropic spokesperson said in a statement after the ruling that the company is "grateful the court recognized these issues need to be resolved quickly" and that it's "confident the courts will ultimately agree that these supply chain designations were unlawful." "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic said.

The Military

After 16 Years and $8 Billion, the Military's New GPS Software Still Doesn't Work (arstechnica.com) 73

An anonymous reader quotes a report from Ars Technica: Last year, just before the Fourth of July holiday, the US Space Force officially took ownership of a new operating system for the GPS navigation network, raising hopes that one of the military's most troubled space programs might finally bear fruit. The GPS Next-Generation Operational Control System, or OCX, is designed for command and control of the military's constellation of more than 30 GPS satellites. It consists of software to handle new signals and jam-resistant capabilities of the latest generation of GPS satellites, GPS III, which started launching in 2018. The ground segment also includes two master control stations and upgrades to ground monitoring stations around the world, among other hardware elements.

RTX Corporation, formerly known as Raytheon, won a Pentagon contract in 2010 to develop and deliver the control system. The program was supposed to be complete in 2016 at a cost of $3.7 billion. Today, the official cost for the ground system for the GPS III satellites stands at $7.6 billion. RTX is developing an OCX augmentation projected to cost more than $400 million to support a new series of GPS IIIF satellites set to begin launching next year, bringing the total effort to $8 billion.

Although RTX delivered OCX to the Space Force last July, the ground segment remains nonoperational. Nine months later, the Pentagon may soon call it quits on the program. Thomas Ainsworth, assistant secretary of the Air Force for space acquisition and integration, told Congress last week that OCX is still struggling.
The GAO found the OCX program was undermined by "poor acquisition decisions and a slow recognition of development problems." By 2016, it had blown past cost and schedule targets badly enough to trigger a Pentagon review for possible cancellation.

Officials also pointed to cybersecurity software issues, a "persistently high software development defect rate," the government's lack of software expertise, and Raytheon's "poor systems engineering" practices. Even after the military restructured the program, it kept running into delays and overruns, with Ainsworth telling lawmakers, "It's a very stressing program" and adding, "We are still considering how to ensure we move forward."
Robotics

Melania Trump Welcomes Humanoid Robot At White House Summit 94

Longtime Slashdot reader theodp writes: In Melania and the Robot, the New York Times reports on First Lady Melania Trump's inaugural Fostering the Future Together Coalition Summit, which brought together international leaders, First Spouses from around the world, tech leaders, educators, and nonprofits to collaborate on practical solutions that expand access to educational tools while strengthening protections for children in digital environments (Day 2 WH summary). The Times begins:

"On Wednesday, Mrs. Trump appeared at the White House alongside Figure 3, a humanoid, A.I.-powered robot whose uses, according to the company that makes it, include fetching towels, carrying groceries and serving champagne. But Mrs. Trump joins tech executives and some researchers in envisioning a world beyond robot butlery. She is interested in how these robots could cut it as educators. Both clad in shades of white, the first lady and the visiting robot walked into a gathering of first spouses from around the world, a group that included Sara Netanyahu of Israel, Olena Zelenska of Ukraine, and Brigitte Macron of France. The dulcet tones from a (presumably human) military orchestra played as the first lady and her guest entered the event. Both lady and robot extolled the virtues of further integrating robots into the educational and social lives of children. In the history of modern first-lady initiatives, which have included building a national book festival (Laura Bush), reshuffling the food pyramid (Michelle Obama) and advocating for free community college (Jill Biden), Mrs. Trump's involvement of a humanoid robot in education policy was a first."

"Figure 3 delivered brief remarks and delivered salutations in several languages. With its sleek black-and-white appearance, Figure 3 would fit right in with the first lady's branding aesthetic, which includes a self-titled coffee table book and movie, not least because the name "MELANIA" was emblazoned on the side of its glossy plastic head. After Figure 3 teetered gingerly away, Mrs. Trump looked around the room and told them that the future looked a lot like what they had just witnessed. 'The future of A.I. is personified,' she told her audience. 'It will be formed in the shape of humans. Very soon artificial intelligence will move from our mobile phones to humanoids that deliver utility.' She invited her guests to envision a future in which a robot philosopher educated children."
Government

How One Company Finally Exposed North Korea's Massive Remote Workers Scam (nbcnews.com) 24

NBC News investigates North Korea's "wide-ranging effort to place remote workers at U.S. companies in order to funnel money back to its coffers and, in some cases, steal sensitive information."

And working with the FBI, one corporate security/investigations company decided to knowingly hire one of North Korea's remote workers — then "ship him a laptop and gain as much information as possible" about this "sprawling international employment scheme that is estimated to include hundreds of American companies, thousands of people and hundreds of millions of dollars per year." It worked.... Over a roughly three-month investigation, Nisos uncovered an apparent network of at least 20 North Korean operatives including "Jo" who had collectively applied to at least 160,000 roles. During that time, workers in the network — which some evidence showed were based in China — were employed by five U.S.-based companies and allegedly helped by an American citizen operating out of two nondescript suburban homes in Florida...

Nisos estimated that in about a year, "Jo", who was likely a newer member of the team, applied to about 5,000 jobs... "They attended interviews all day every day, and then once they secured a job, they would collect paychecks until they were terminated," [according to Jared Hudson, Nisos' chief technology officer]... With the ability to see which other U.S. companies Jo and his team were working for — all remote technology roles — Nisos' CEO, Ryan LaSalle, began making calls to their security teams to alert them of the fraud. "Most of the companies weren't aware of it, even if they had pretty robust security teams," LaSalle said. "It wasn't really high on the radar."

NBC News describes North Korea's 10-year effort — and its educational pipeline that steers promising students into "computer science and hacking training before being placed into cyberunits under military and state agencies, according to a recent report by DTEX, a risk-adaptive security and behavioral intelligence firm that tracks North Korea's cybercrime." In one case, a North Korean worker stole sensitive information related to U.S. military technology, according to the Justice Department. In another, an American accomplice obtained an ID that enabled access to government facilities, networks and systems. At least three organizations have been extorted and suffered hundreds of thousands of dollars in damages after proprietary information was posted online by IT workers... Analysts warn that North Korean IT workers are targeting larger organizations, increasing extortion attempts and seeking out employers that pay salaries in cryptocurrency. More recently, security researchers have uncovered fake job application platforms impersonating major U.S. cryptocurrency and AI firms, including Anthropic, designed to infect legitimate applicants' networks with malware to be utilized once hired. The global cybersecurity company CrowdStrike identified a 220% rise in 2025 in instances of North Koreans gaining fraudulent employment at Western companies to work remotely as developers...

The payoff flowing back to Pyongyang from these schemes is enormous. Some North Korean IT workers earn more than $300,000 per year, far more than they'd be able to earn domestically, with as much as 90% of their wages directed back to the regime, according to congressional testimony from Bruce Klingner, a former CIA deputy division chief for Korea. The United Nations estimates the schemes, which proliferated after the pandemic when more companies' workforces went remote, generate as much as $600 million annually, while a U.S. State Department-led sanctions monitoring assessment placed earnings for 2024 as high as $800 million... So far, at least 10 alleged U.S.-based facilitators have been federally charged, including one active-duty member of the U.S. Army, for their alleged roles in hosting laptop farms, laundering payments and moving proceeds through shell companies. At least six other alleged U.S. facilitators have been identified in court documents but not named...

"We believe there are many more hundreds of people out there who are participating in these schemes," said Rozhavsky, the FBI assistant director. "They could never pull this off if they didn't have willing facilitators in the U.S. helping them...." The scheme itself is also becoming more complex. North Korean IT teams are now subcontracting work to developers in Pakistan, Nigeria and India, expanding into fields like customer service, financial processing, insurance and translation services — roles far less scrutinized than software development.

Crime

Facial Recognition Error Jails Innocent Grandmother For Months (theguardian.com) 144

Mr. Dollar Ton shares a report from the Guardian: Angela Lipps, 50, spent nearly six months in jail after Fargo police identified her as a suspect in an organized bank fraud case using facial recognition software, according to south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes. Lipps, a mother of three and grandmother of five, said she has lived most of her life in north-central Tennessee. She had never been on an airplane until authorities flew her to North Dakota last year to face charges.

In July, U.S. marshals arrested Lipps at her Tennessee home while she was babysitting four children. She said she was taken away at gunpoint and booked into a county jail as a fugitive from justice wanted in North Dakota. "I've never been to North Dakota, I don't know anyone from North Dakota," Lipps told WDAY News. She remained in a Tennessee jail for nearly four months without bail while awaiting extradition. She was charged with four counts of unauthorized use of personal identifying information and four counts of theft.

According to Fargo police records obtained by WDAY News, detectives investigating bank fraud cases in April and May 2025 reviewed surveillance video of a woman using a fake U.S. Army ID to withdraw tens of thousands of dollars. The officers allegedly used facial recognition software to identify the suspect as Lipps. A detective reportedly wrote in court documents that Lipps appeared to match the suspect based on facial features, body type and hairstyle. Lipps told WDAY News that no one from the Fargo police department contacted her before the arrest. Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.

Microsoft

Microsoft Backs Anthropic To Halt US DOD's 'Supply-Chain Risk' Designation (reuters.com) 35

joshuark shares a report from Reuters: Microsoft filed an amicus brief on Tuesday in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. In the filing, submitted to a federal court in San Francisco, Microsoft backed Anthropic's request for a temporary restraining order against the Pentagon order, arguing that the determination should be paused while the court considers the case. Microsoft, which integrates the AI lab's products and services into technology it provides to the U.S. military, said that it was directly impacted by the DOD designation.

"Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products. The judge overseeing the case must approve Microsoft's request to file the brief before it is officially entered, but courts often permit outside parties to weigh in on important cases.

China

China Moves To Curb OpenClaw AI Use At Banks, State Agencies (bloomberg.com) 18

An anonymous reader quotes a report from Bloomberg: Chinese authorities moved to restrict state-run enterprises and government agencies from running OpenClaw AI apps on office computers, acting swiftly to defuse potential security risks after companies and consumers across China began experimenting with the agentic AI phenomenon. Government agencies and state-owned enterprises, including the largest banks, have received notices in recent days warning them against installing OpenClaw software on office devices for security reasons [...]. Several of them were instructed to notify superiors if they had already installed related apps for security checks and possible removal, some of the people said.

Certain employees, including those at state-run banks and some government agencies, were banned from installing OpenClaw on office computers and on personal phones connected to their employer's network, some of the people said. One person said the ban was also extended to the families of military personnel. Other notices stopped short of calling for an outright ban on OpenClaw software, saying only that prior approval is needed before use, the people said. The warning underscores Beijing's growing concern about OpenClaw, an agentic AI platform that requires unusually broad access to private data and can communicate externally, potentially exposing computers to external attack. [...]

Despite the potential security risks, companies from Tencent to JD.com Inc. have been rolling out OpenClaw apps to try to capitalize on the groundswell of enthusiasm, while several local government agencies have declared millions of yuan in subsidies for companies that develop atop the platform. [...] Tech giants like Tencent and Alibaba, along with AI upstarts ranging from Moonshot to MiniMax, have rolled out their own adaptations of the software, touting simple one-click adoption. A slew of government agencies, in cities from Shenzhen to Wuxi, have issued notices offering multimillion-yuan subsidies to startups leveraging OpenClaw to make advances. The frenzy has helped drive up shares of AI model developer MiniMax nearly 640% since its listing just two months ago. It's now worth about $49 billion, surpassing Baidu -- once viewed as the frontrunner in Chinese AI development -- in market value. The company launched MaxClaw, an agent built on OpenClaw, in late February.

United States

US Military Tested Device That May Be Tied To Havana Syndrome On Rats, Sheep (cbsnews.com) 50

An anonymous reader quotes a report from CBS News: Tonight, we have details of a classified U.S. intelligence mission that has obtained a previously unknown weapon that may finally unlock a mystery. Since at least 2016, U.S. diplomats, spies and military officers have suffered crippling brain injuries. They've told of being hit by an overwhelming force, damaging their vision, hearing, sense of balance and cognition. But the government has doubted their stories. They've been called delusional. Well now, 60 Minutes has learned that a weapon that can inflict these injuries was obtained overseas and secretly tested on animals on a U.S. military base. We've investigated this mystery for nine years. This is our fourth story, called "Targeting Americans." Despite official government doubt, we never stopped reporting because of the haunting stories we heard [...]. 60 Minutes interviewed Dr. David Relman, a scientific expert and professor from Stanford University who was tasked by the government to lead two investigations into the Havana Syndrome cases. What he and his panel of doctors, physicists, engineers and others found was that "the most plausible explanation for a subset of these cases was a form of radiofrequency or microwave energy," the report says.

According to confidential sources cited in the report, undercover Homeland Security agents bought a miniaturized microwave weapon from a Russian criminal network in 2024 and tested it on animals at a U.S. military lab. The injuries reportedly matched those seen in the human cases. "Our confidential sources tell us the still classified weapon has been tested in a U.S. military lab for more than a year," says Dr. Relman. "Tests on rats and sheep show injuries consistent with those seen in humans."

He continues: "Also, as a separate part of the investigation, security camera videos have been collected that show Americans being hit. The videos are classified but they were described to us. In one, a camera in a restaurant in Istanbul captured two FBI agents on vacation sitting at a table with their families. A man with a backpack walks in and suddenly everyone at the table grabs their head as if in pain. Our sources say another video comes from a stairwell in the U.S. embassy in Vienna. The stairs lead to a secure facility. In the video, two people on the stairs suddenly collapse. Those videos and the weapon were among the reasons the Biden administration summoned about half a dozen victims to the White House with about two months left in the president's term."

Former intelligence officials and researchers claim elements of the U.S. government downplayed or dismissed the theory for years, possibly to avoid the political consequences of accusing a foreign state like Russia of conducting attacks on American personnel.

AI

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 125

Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said, but he added, "I have thought about it, of course." Altman hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline." Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd fielded on X.com.

How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the Defense Department?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy"

The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse the use of their models for domestic mass surveillance and autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently.")

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...

AI

OpenAI's Head of Robotics Resigns, Says Pentagon Deal Was 'Rushed Without the Guardrails Defined' (engadget.com) 56

In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation. They said they "care deeply about the Robotics team and the work we built together," so this "wasn't an easy call," but offered this reason for resigning: AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.

This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.

"To be clear, my issue is that the announcement was rushed without the guardrails defined," explains a later tweet. "It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." And when asked how many OpenAI employees had left after OpenAI signed their new Pentagon deal, the roboticist said... "I can't share any internal details."

The roboticist previously worked at Meta before leaving to join OpenAI in late 2024, reports Engadget: OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also said in the statement that it does not support the practices Kalinowski objected to. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.

Communications

Military GPS Jamming is Interfering with the Navigation Systems of Commercial Ships (cnn.com) 61

"Within 24 hours of the first US-Israeli strikes on Iran, ships in the region's waters found their navigation systems had gone haywire," reports CNN, "erroneously indicating that the vessels were at airports, a nuclear power plant and on Iranian land.

"The location confusion was a result of widespread jamming and spoofing of signals from global positioning satellite systems." Used by all sides in conflict zones to disrupt the paths of drones and missiles, the process involves militaries and affiliated groups intentionally broadcasting high-intensity radio signals in the same frequency bands used by navigation tools. Jamming results in the disruption of a vehicle's satellite-based positioning while spoofing leads to navigation systems reporting a false location. Though commercial vessels are not the target, the electronic interference disrupted the navigation systems of more than 1,100 commercial ships in UAE, Qatari, Omani and Iranian waters on February 28, according to a report from Windward, a shipping intelligence firm. Jamming and spoofing also slowed marine traffic moving through the Strait of Hormuz, a congested shipping lane that handles roughly 20% of the world's oil and gas exports and where precise navigation is essential, Windward's data showed.... Daily incidents have more than doubled, rising from 350 when the conflict began to 672 by March 2, the firm reported.

As use of this warfare tactic grows, experts worry the impacts could reach far beyond battlespaces.... In June 2025, electronic interference with navigation systems was thought to be a factor in the collision between two oil tankers, Adalynn and Front Eagle, off the coast of the UAE... The number of global positioning system signal loss events affecting aircraft increased by 220% between 2021 and 2024, according to data from the International Air Transport Association. Last year, IATA said that the aviation industry must act to stay ahead of the threat.

Cockpits are seeing their navigation displays "literally drift away from reality," said a commercial pilot, who didn't want to be identified because he was not permitted to speak publicly. He said that he and his colleagues have experienced map shifts, where the aircraft location appears to move up to 1 mile away from the actual flight path; false altitude information that leads to phantom "pull up" commands; and systems suggesting, during takeoff, that an aircraft was on a taxiway (a path that connects runways with various airport facilities). These incidents force pilots to rely on manual actions that increase workload, often during the most exhausting points of long-haul flights, he said.

"Alternative navigational tools that don't rely on GPS, but instead harness quantum technology, are also in development," the article points out, "but remain a long way off operational use."

AI

Iran War Provides a Large-Scale Test For AI-Assisted Warfare 113

An anonymous reader quotes a report from Bloomberg, written by Katrina Manson: The U.S. strikes on Iran ordered by President Donald Trump mark the arrival on a large scale of a new era of warfare assisted by artificial intelligence. Captain Timothy Hawkins, a Central Command spokesperson, told me last night that the AI tools the U.S. military is using in Iran operations don't make targeting decisions and don't replace humans. But they do help "make smarter decisions faster." That's been the driving ambition of the U.S. military, which has spent years looking at how to develop and deploy AI to the battlefield [...].

Critics, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a "dangerously thin" line. Hawkins said the military's use of AI assistance follows a rigorous process aligned with U.S. policy, military doctrine and the law. Artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations, he told me. AI is also helping to pull data within systems and organize information to provide clarity.

Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform produced by Palantir [...]. That emerged from Project Maven, a project started in 2017 by the Pentagon to develop AI for the battlefield. Among the large language models installed on the system is Anthropic's Claude AI tool, according to people familiar with the matter, who said it has become central to U.S. operations against Iran and to accelerating Maven's development. Claude is also at the center of a row that pits Anthropic against the Department of Defense over limits on the software.
Further reading: Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei

The Courts

AI Startup Sues Ex-CEO Saying He Took 41GB of Email, Lied On Resume (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Hayden AI, a San Francisco startup that makes spatial analytics tools for cities worldwide, has sued its co-founder and former CEO, alleging that he stole a large quantity of proprietary information in the days leading up to his ouster from the company in September 2024. In a lawsuit filed late last month in San Francisco Superior Court but only made public this week, Hayden AI claims that former CEO Chris Carson undertook what it called "numerous fraudulent actions," which include "forged board signatures, unauthorized stock sales, and improper allocation of personal expenses." [...] Hayden AI, which is worth $464 million according to an estimated valuation on PitchBook, has asked the court to impose preliminary injunctive relief, requiring Carson to either return or destroy the data he allegedly stole. Specifically, the lawsuit alleges that Carson secretly sold over $1.2 million in company stock, forged board signatures, and copied 41GB of proprietary company emails before being fired in September 2024. The complaint also claims Carson fabricated key parts of his resume, including a PhD and military service. It's a "carefully constructed fraud," says Hayden AI.

"That is a lie," the complaint states. "Carson does not hold a PhD from Waseda or any other university. In 2007, he was not obtaining a PhD but was operating 'Splat Action Sports,' a paintball equipment business in a Florida strip mall."
