AI

Wikipedia's Guide to Spotting AI Is Now Being Used To Hide AI

Ars Technica's Benj Edwards reports: On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday. "It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.

Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file whose written instructions are appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a plain system prompt, the skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with greater precision. (Custom skills require a paid Claude subscription with code execution turned on.)
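To make the format concrete, here is a hypothetical fragment in the general shape of a Claude Code skill file (this is my sketch, not Chen's actual plugin, and the rule wordings are paraphrased from patterns Wikipedia editors commonly flag):

```markdown
---
name: humanizer
description: Rewrite prose to avoid common chatbot giveaways
---

# Humanizer (hypothetical sketch)

When writing or editing prose, avoid patterns flagged as AI tells:

- Puffery such as "stands as a testament" or "rich cultural heritage"
- The "rule of three" (forcing every idea into a rhythmic triplet)
- Overused connectives like "Additionally" and "Moreover"
- "Not only X, but also Y" constructions and heavy em-dash use
- A closing paragraph that merely restates what was just said
```

A skill file like this is loaded alongside the user's request, so the model treats the rules as standing instructions rather than one-off prompt text.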

But as with all AI prompts, language models don't always follow skill files perfectly -- so does the Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it could have some drawbacks: it won't improve factuality and might harm coding ability. [...] Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.
Sci-Fi

Bank of England 'Must Plan For a Financial Crisis Triggered By Aliens' (msn.com)

A former Bank of England analyst has urged contingency planning for a potential financial shock if the U.S. government were to confirm the existence of extraterrestrial intelligence. The argument is that "ontological shock" alone could destabilize confidence and trigger crisis dynamics. The Independent reports: [Helen McCaw, who served as a senior analyst in financial security at the UK's central bank and worked for the Bank of England for 10 years until 2012] said politicians and bankers can no longer afford to dismiss talk of alien life, and warned a declaration of this nature could trigger bank collapses. She reportedly said: "The United States government appears to be partway through a multi-year process to declassify and disclose information on the existence of a technologically advanced non-human intelligence responsible for Unidentified Anomalous Phenomena (UAPs)."

"If the UAP proves to be of non-human origin, we may have to acknowledge the existence of a power or intelligence greater than any government and with potentially unknown intentions." Her warning comes as senior American officials have recently indicated their belief in the possibility of alien life. [...] Ms McCaw said: "UAP disclosure is likely to induce ontological shock and provoke psychological responses with material consequences ... There might be extreme price volatility in financial markets due to catastrophising or euphoria, and a collapse in confidence if market participants feel uncertain on how to price assets using any of the familiar methods."

The former Bank of England worker explained there might be a rush towards assets such as gold or other precious metals, and government bonds, which are perceived as "safe." Alternatively, she said precious metals might lose their status as perceived safe assets if people speculate that new space-faring technologies will soon increase the supply of precious metals.
The article cites a recent UFO documentary, The Age of Disclosure, where 34 U.S. government insiders, including military and intelligence community officials, share insights about the government's work with UAP. Per the film's description, the documentary "reveals an 80-year global cover-up of non-human intelligent life and a secret war among major nations to reverse-engineer advanced technology of non-human origin."
Science

The World's Longest-Running Lab Experiment Is Almost 100 Years Old (sciencealert.com)

alternative_right shares a report from ScienceAlert: It all started in 1927, when physicist Thomas Parnell at the University of Queensland in Australia filled a closed funnel with the world's thickest known fluid: pitch, a derivative of tar that was once used to seal ships against the seas. Three years later, in 1930, Parnell cut the funnel's stem, like a ribbon at an event, heralding the start of the Pitch Drop Experiment. From then on, the black substance began to flow. At least, that is, in a manner of speaking. At room temperature pitch might look solid, but it is actually a fluid 100 billion times more viscous than water.

It took eight years for the first droplet to finally hit the beaker below. Then, drops fell at a cadence of roughly once every eight years, slowing down only after air conditioning was installed in the building in the 1980s. Today, 96 years after the funnel was cut, only nine drops in total have seeped out. The last was in 2014. Scientists expect another will fall sometime in the 2020s, but they are still waiting. No one has ever actually seen a droplet fall directly, despite all the watchful eyes. The experiment is now live-streamed, but various glitches in the past meant that each fateful moment has slipped by unobserved.

Books

Nvidia Contacted Anna's Archive To Secure Access To Millions of Pirated Books (torrentfreak.com)

An anonymous reader quotes a report from TorrentFreak: NVIDIA executives allegedly authorized the use of millions of pirated books from Anna's Archive to fuel its AI training. In an expanded class-action lawsuit that cites internal NVIDIA documents, several book authors claim (PDF) that the trillion-dollar company directly reached out to Anna's Archive, seeking high-speed access to the shadow library data. [...] Last Friday, the authors filed an amended complaint that significantly expands the scope of the lawsuit. In addition to adding more books, authors, and AI models, it also includes broader "shadow library" claims and allegations. The authors, including Abdi Nazemian, now cite various internal Nvidia emails and documents, suggesting that the company willingly downloaded millions of copyrighted books. The new complaint alleges that "competitive pressures drove NVIDIA to piracy," which allegedly included collaborating with the controversial Anna's Archive library.

According to the amended complaint, a member of Nvidia's data strategy team reached out to Anna's Archive to find out what the pirate library could offer the trillion-dollar company. "Desperate for books, NVIDIA contacted Anna's Archive -- the largest and most brazen of the remaining shadow libraries -- about acquiring its millions of pirated materials and 'including Anna's Archive in pre-training data for our LLMs,'" the complaint notes. "Because Anna's Archive charged tens of thousands of dollars for 'high-speed access' to its pirated collections [...], NVIDIA sought to find out what 'high-speed access' to the data would look like."

According to the complaint, Anna's Archive then warned Nvidia that its library was illegally acquired and maintained. Because the site had previously wasted time on other AI companies, the pirate library asked NVIDIA executives if they had internal permission to move forward. This permission was allegedly granted within a week, after which Anna's Archive provided the chip giant with access to its pirated books. "Within a week of contacting Anna's Archive, and days after being warned by Anna's Archive of the illegal nature of their collections, NVIDIA management gave 'the green light' to proceed with the piracy. Anna's Archive offered NVIDIA millions of pirated copyrighted books." The complaint states that Anna's Archive promised to provide NVIDIA with access to roughly 500 terabytes of data. This included millions of books that are usually only accessible through Internet Archive's digital lending system, which itself has been targeted in court. The complaint does not explicitly mention whether NVIDIA ended up paying Anna's Archive for access to the data.

Additionally, it's worth mentioning that NVIDIA also stands accused of using other pirated sources. In addition to the previously included Books3 database, the new complaint also alleges that the company downloaded books from LibGen, Sci-Hub, and Z-Library. In addition to downloading and using pirated books for its own AI training, the authors allege NVIDIA distributed scripts and tools that allowed its corporate customers to automatically download "The Pile", which contains the Books3 pirated dataset.

Wikipedia

Wikipedia Signs AI Licensing Deals On Its 25th Birthday (apnews.com)

Wikipedia turns 25 today, and the online encyclopedia is celebrating that with an announcement that it has signed new licensing deals with a slate of major AI companies -- Amazon, Microsoft, Meta Platforms, Perplexity and Mistral AI. The deals allow these companies to access Wikipedia content "at a volume and speed designed specifically for their needs." The Wikimedia Foundation did not disclose financial terms.

Google had already signed on as one of the first enterprise customers back in 2022. The agreements follow the Wikimedia Foundation's push last year for AI developers to pay for access through its enterprise platform. The foundation said human traffic had fallen 8% while bot visits -- sometimes disguised to evade detection -- were heavily taxing its servers.

Wikipedia founder Jimmy Wales said he welcomes AI training on the site's human-curated content but that companies "should probably chip in and pay for your fair share of the cost that you're putting on us." The site remains the ninth most visited on the internet, hosting more than 65 million articles in 300 languages maintained by some 250,000 volunteer editors.
Math

AI Models Are Starting To Crack High-Level Math Problems (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Over the weekend, Neel Somani -- a software engineer, former quant researcher, and startup founder -- was testing the math skills of OpenAI's new model when he made an unexpected discovery. After pasting the problem into ChatGPT and letting it think for 15 minutes, he came back to a full solution. He evaluated the proof, formalized it with a tool called Harmonic, and it all checked out. "I was curious to establish a baseline for when LLMs are effectively able to solve open math problems compared to where they struggle," Somani said. The surprise was that, with the latest model, the frontier had pushed forward a bit.

ChatGPT's chain of thought is even more impressive, rattling off mathematical axioms like Legendre's formula, Bertrand's postulate, and the Star of David theorem. Eventually, the model found a Math Overflow post from 2013, where Harvard mathematician Noam Elkies had given an elegant solution to a similar problem. But ChatGPT's final proof differed from Elkies' work in important ways, and gave a more complete solution to a version of the problem posed by legendary mathematician Paul Erdos, whose vast collection of unsolved problems has become a proving ground for AI.
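Legendre's formula, one of the tools the model cited, is simple enough to state and check in a few lines. A minimal illustration (my own example, unrelated to the model's actual proof): the exponent of a prime p in n! is the sum of floor(n / p^k) over k >= 1.

```python
def legendre(n, p):
    """Legendre's formula: the exponent of the prime p in n!
    equals sum over k >= 1 of floor(n / p**k)."""
    exponent, power = 0, p
    while power <= n:
        exponent += n // power
        power *= p
    return exponent

# 10! = 3628800 = 2**8 * 3**4 * 5**2 * 7, so the exponent of 2 is 8.
print(legendre(10, 2))  # -> 8
```

Facts like this let a model reason about divisibility in factorials without computing enormous numbers, which is typical of the machinery used on Erdos-style problems.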

For anyone skeptical of machine intelligence, it's a surprising result -- and it's not the only one. AI tools have become ubiquitous in mathematics, from formalization-oriented LLMs like Harmonic's Aristotle to literature review tools like OpenAI's deep research. But since the release of GPT 5.2 -- which Somani describes as "anecdotally more skilled at mathematical reasoning than previous iterations" -- the sheer volume of solved problems has become difficult to ignore, raising new questions about large language models' ability to push the frontiers of human knowledge.
Somani examined the online archive of more than 1,000 Erdos conjectures. Since Christmas, 15 Erdos problems have shifted from "open" to "solved," with 11 solutions explicitly crediting AI involvement.

On GitHub, mathematician Terence Tao identifies eight Erdos problems where AI made meaningful autonomous progress and six more where it advanced work by finding and extending prior research, noting on Mastodon that AI's scalability makes it well suited to tackling the long tail of obscure, often straightforward Erdos problems.

Progress is also being accelerated by a push toward formalization, supported by tools like the open-source "proof assistant" Lean and newer AI systems such as Harmonic's Aristotle.
Television

Batman TV Series Premiered 60 Years Ago Today (cordcuttersnews.com)

60 years ago today, ABC aired the first episode of its live-action Batman television series, introducing Adam West as the deadpan Caped Crusader in what became a pop culture phenomenon blending high-camp humor and cliffhanger thrills. The mid-season replacement ran for 120 episodes over three seasons before ending in March 1968.
Unix

That Bell Labs 'Unix' Tape from 1974: From a Closet to Computing History (ksltv.com)

Remember that re-discovered computer tape with one of the earliest versions of Unix from the early 1970s? This week several local news outlets in Utah reported on the find, with KSL creating a video report with shots of the tape arriving at Silicon Valley's Computer History Museum, the closet where it was found, and even its handwritten label.

The Salt Lake Tribune reports that the closet where it was found also contained "old cords from unknown sources and mountains of papers that had been dumped from a former professor's file cabinet, including old drawings from his kids and saved plane ticket stubs." (Their report also includes a photo of the University of Utah team that found the tape — the University's Flux Research Group).

Professor Robert Ricci believes only 20 copies were ever produced of the version of Unix on that tape: At the time, in the 1970s, Ricci estimates there would have been maybe two or three of those computers — called a PDP-11, or programmed data processor — in Utah that could have run UNIX V4, including the one at the U. Having that technology is part of why he believes the U. got a copy of the rare software. The other part was the distinguished computing faculty at the school.

The new UNIX operating system would've been announced at conferences in the early 1970s, and a U. professor at the time named Martin Newell frequently attended those because of his own recognized work in the field, Ricci said. In another box, stuffed in under manila envelopes, [researcher Aleks] Maricq found a 1974 letter written to Newell from Ken Thompson at Bell Labs that said as soon as "a new batch comes from the printers, I will send you the system." Ricci and Maricq are unsure if the software was ever used. They reached out to Newell, who is now 72 and retired, as well as some of his former students. None of them recalled actually running it through the PDP-11...

The late Jay Lepreau also worked at the U.'s computing department and created the Flux Research Group that Ricci, Maricq and [engineering research associate Jon] Duerig are now part of. Lepreau overlapped just barely with Newell's tenure. In 1978, Lepreau and a team at the U. worked with a group at the University of California, Berkeley. Together, they built their own clone of the UNIX operating system. They called it BSD, or Berkeley Software Distribution. Steve Jobs, the former CEO of Apple, worked with BSD, too, and it influenced his work.

Ultimately, it was Lepreau who saved the 9-track tape with the UNIX system on it in his U. office. And he's why the university still has it today. "He seems to have found it and decided it was worth keeping," Ricci said...

The U. will also get the tape back from the museum. Maricq said it will likely be displayed in the university's new engineering building that's set to open in January 2027. That's why, the research associate said, he was cleaning out the storage room to begin with — to try to prepare for the move. He was mostly just excited to see the floor again. "I thought we'd find some old stuff, but I didn't think it'd be anything like this," he said. And Maricq still has boxes to go through, including more believed to be from Lepreau's office.

Local news station KMYU captured the thoughts of some of the University researchers who found the tape: "When you see the very first beginnings of something, and you go from seed to sapling, that's what we saw here," [engineering research associate Jon] Duerig said. "We see this thing in the moment of flux. We see the signs of all the things changing — of all the things developing that we now see today."
Duerig also gave this comment to local news station KSL. "The coolest thing is that anybody, anywhere in the world can now access this, right? People can go on the internet archive and download the raw tape file and simulate running it," Duerig said. "People have posted browsable directory trees of the whole thing." One of the museum's directors said the tape's recovery marked a big day for the museum. "One of the things that was pretty exciting to us is just that there is this huge community of people around the world who were excited to jump on the opportunity to look at this piece of history," Ricci said. "And it was really cool that we were able to share that."

Duerig said while there weren't many comments or footnotes from the programmers of that time, they did discover more unexpected content having to do with Bell Labs on the tape. "There were survey results of them actually asking survey questions of their employees at these operator centers," he said.

Thanks to long-time Slashdot reader walterbyrd for sharing the news.
Open Source

Cory Doctorow: Legalising Reverse Engineering Could End 'Enshittification' (theguardian.com)

Scifi author/tech activist Cory Doctorow has long decried the "enshittification" of our technologies -- their deliberate degradation to extract more profit. But on Saturday he described what could be "the beginning of the end for enshittification" in a new article for the Guardian, which he calls "our chance to make tech good again": There is only one reason the world isn't bursting with wildly profitable products and projects that disenshittify the US's defective products: its (former) trading partners were bullied into passing an "anti-circumvention" law that bans the kind of reverse-engineering that is the necessary prelude to modifying an existing product to make it work better for its users (at the expense of its manufacturer)...

Post-Brexit, the UK is uniquely able to seize this moment. Unlike our European cousins, we needn't wait for the copyright directive to be repealed before we can strike article 6 off our own law books and thereby salvage something good out of Brexit... Until we repeal the anti-circumvention law, we can't reverse-engineer the US's cloud software, whether it's a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours — say, the solar inverters and batteries we buy from China.

This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks. Admittedly, the door is only open a crack, but it's been locked tight since the turn of the century. When it comes to a better technology future, "open a crack" is the most exciting proposition I've heard in decades.

Thanks to Slashdot reader Bruce66423 for sharing the article.
Science

Scientists Tried To Break Einstein's Speed of Light Rule (sciencedaily.com)

Scientists are putting Einstein's claim that the speed of light is constant to the test. While researchers found no evidence that light's speed changes with energy, this null result dramatically tightens the constraints on quantum-gravity theories that predict even the tiniest violations. ScienceDaily reports: Special relativity rests on the principle that the laws of physics remain the same for all observers, regardless of how they are moving relative to one another. This idea is known as Lorentz invariance. Over time, Lorentz invariance became a foundational assumption in modern physics, especially within quantum theory. [...] One prediction shared by several Lorentz-invariance-violating quantum gravity models is that the speed of light may depend slightly on a photon's energy. Any such effect would have to be tiny to match existing experimental limits. However, it could become detectable at the highest photon energies, specifically in very-high-energy gamma rays.

A research team led by former UAB student Merce Guerrero and current IEEC PhD student at the UAB Anna Campoy-Ordaz set out to test this idea using astrophysical observations. The team also included Robertus Potting from the University of Algarve and Markus Gaug, a lecturer in the Department of Physics at the UAB who is also affiliated with the IEEC. Their approach relies on the vast distances light travels across the universe. If photons of different energies are emitted at the same time from a distant source, even minuscule differences in their speeds could build up into measurable delays by the time they reach Earth.
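As a rough back-of-the-envelope illustration (my own numbers, not the team's analysis): in the simplest Lorentz-violating models, a photon's speed is reduced by a factor of roughly (1 - E/E_QG), where E_QG is some enormous quantum-gravity energy scale, often taken near the Planck energy. The arrival delay relative to a low-energy photon over a distance D is then about (E/E_QG)(D/c).

```python
C = 2.998e8      # speed of light, m/s
GPC = 3.086e25   # one gigaparsec in metres
E_QG = 1.22e19   # assumed quantum-gravity scale: the Planck energy, in GeV

def delay_s(e_gev, d_m):
    """Arrival delay (seconds) of a photon of energy e_gev (GeV) over
    d_m metres, assuming a linear energy-dependent speed
    v = c * (1 - E / E_QG)."""
    return (e_gev / E_QG) * (d_m / C)

# A 1 TeV gamma ray from a source 1 Gpc away lags a low-energy photon
# by only a few seconds -- after roughly three billion years of travel.
print(delay_s(1e3, GPC))
```

The smallness of that delay is exactly why the experiments need the highest-energy gamma rays from the most distant sources: only there does the cumulative lag become potentially measurable against the burst's intrinsic timing.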

Using a new statistical technique, the researchers combined existing measurements of very-high-energy gamma rays to examine several Lorentz-invariance-violating parameters favored by theorists within the Standard Model Extension (SME). The goal was ambitious. They hoped to find evidence that Einstein's assumptions might break down under extreme conditions. Once again, Einstein's predictions held firm. The study did not detect any violation of Lorentz invariance. Even so, the results are significant. The new analysis improves previous limits by an order of magnitude, sharply narrowing where new physics could be hiding.

AI

AI Models Are Starting To Learn By Asking Themselves Questions (wired.com)

An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way -- by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.

The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].
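The loop described above -- one model posing a checkable coding task, answering it, and grading itself by actually running the code -- can be sketched in miniature. This is a toy caricature of the AZR idea, not the researchers' implementation; the ToyModel and its trivial "reasoning" are my stand-ins for the LLM:

```python
import random

def verify(program_src, x, predicted):
    """Ground-truth check: actually run the proposed program and
    compare its output with the solver's prediction."""
    ns = {}
    exec(program_src, ns)
    return ns["f"](x) == predicted

class ToyModel:
    """Stand-in for the single LLM that plays both roles in AZR."""
    def __init__(self):
        self.rewards = []

    def propose(self, rng):
        # Role 1: pose a solvable, machine-checkable coding task.
        a, b = rng.randint(1, 9), rng.randint(1, 9)
        return f"def f(x):\n    return x * {a} + {b}\n", rng.randint(0, 9)

    def solve(self, program_src, x):
        # Role 2: answer the task. A real model reasons; the toy just
        # reads the coefficients back out of the source it wrote.
        parts = program_src.split()
        a, b = int(parts[parts.index("*") + 1]), int(parts[-1])
        return x * a + b

    def update(self, reward):
        # Successes and failures would refine both roles via RL;
        # here we just record the signal.
        self.rewards.append(reward)

def training_step(model, rng):
    """One self-play step: propose, solve, check by execution, reward."""
    program, x = model.propose(rng)
    predicted = model.solve(program, x)
    reward = 1.0 if verify(program, x, predicted) else 0.0
    model.update(reward)
    return reward
```

The key design point is that the environment, not a human, supplies the reward: the executable check in `verify` is what lets the system learn "from absolute zero" curated data, and it is also why the approach is currently limited to easily checkable domains like code and math.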

Games

Lego's Smart Brick Gives the Iconic Analog Toy a New Digital Brain (wired.com)

An anonymous reader quotes a report from Wired: At CES in Las Vegas today, Lego has unveiled its new Smart Play platform, aimed at taking its distinctly analog plastic blocks and figures into a new world of tech-powered interactive play -- but crucially one without any reliance on screens. Smart Play revolves around Lego's patented sensor- and tech-packed brick. It's the same size as a standard 2 x 4 Lego brick, but it is capable of connecting to compatible Smart Minifigures and Smart Tags and interacting with them in real time. By pairing these components, kids big and small can create context-appropriate sounds and light effects as they play with the Danish company's toys.

[...] Lego claims this Smart Play platform, developed in house by the company's Creative Play Lab team in collaboration with Capgemini's Cambridge Consultants, "features more than 20 patented world-firsts within its technology." The heart of the system is the Smart Brick's custom-made chip, which is smaller than a standard Lego stud. Other elements crammed into the eight-stud brick include an LED light array, accelerometers, light sensors, a sound sensor, and even a miniature speaker. The internal battery will supposedly work even after years of inactivity, and to avoid any need for cable access to the Smart Brick once it's built into a beloved creation, Lego has also added wireless charging. Indeed, Lego has made a charging pad that will power up several Smart Bricks simultaneously.

That all-important brain chip is a 4.1-millimeter custom mixed-signal ASIC chip running a bespoke Play Engine, which interprets motion, orientation, and magnetic fields. A copper coil assembly enables the brick's tag recognition, while a proprietary "Brick-to-Brick position system" uses these coils to sense distance, direction, and orientation between multiple Smart Bricks. Moreover, Lego claims this use of multiple Smart Bricks creates a "self-organizing network" that requires no setup, no app, no central hub, nor external controllers -- and so no screens. A Bluetooth-based "BrickNet" protocol shares the data between the Smart Bricks.

Sounds are handled by a tiny analog synthesizer putting out real-time audio (thus minimizing memory load) via the brick's miniature speaker, which uses the brick's internal air spaces to amplify sound. As a result, the audio effects are apparently immediate and can be used to enhance play with real-time sound. Lego insists there are no prerecorded clips of lightsabers or other pieces of audio being used as a cheat. Just like the Smart Minifigs, the 2 x 2 studless tile tags trigger sounds, lights, or behaviors tied to where they are placed or how they are played with. They communicate with other components through near-field magnetic connections. Each tile has a unique digital ID, which is read by the brain brick, while the minifigures -- outwardly identical to standard minifigs -- carry their unique digital ID on an internal chip.

Lord of the Rings

2025 Ends With Release of J.R.R. Tolkien's Unpublished Story (lareviewofbooks.org)

2025's final months saw the publication of J.R.R. Tolkien's The Bovadium Fragments, writes the Los Angeles Review of Books: Anyone who has read Tolkien's letters will know that he is at his funniest when filled with rage, and The Bovadium Fragments is a work brimming with Tolkien's fury — specifically, ire over mankind's obsession with motor vehicles. Tolkien's anger is expressed through a playful satire told from the perspective of a group of future archaeologists who are studying the titular fragments, which tell of a civilization that asphyxiated itself on its own exhaust fumes. Tolkien's fictional fragments use the language of ancient myth, reframing modern issues like traffic congestion and parking with a grandeur that highlights their total absurdity. It is Tolkien at his angriest and funniest, making The Bovadium Fragments a minor treasure in his ever-growing catalog...

As Tolkien put it in one of his private letters, "the spirit of 'Isengard,' if not of Mordor, is of course always cropping up. The present design of destroying Oxford in order to accommodate motor-cars is a case." Readers of The Lord of the Rings (1954-55) will recognize the allusion. In the author's magnum opus, Isengard is a kind of industrial hell, endlessly feeding its furnaces with felled trees... The Bovadium Fragments brings Tolkien's visceral hatred of such machines to the fore for the first time — on the same level as Isengard or the scoured Shire. In Tolkien's story, the words "Motores" and "monsters" are interchangeable. And with his grand, mythic register, Tolkien defamiliarizes the car enough for modern readers to see it as he does — as truly monstrous. "[T]he Motores continued to bring forth an ever larger progeny," Tolkien writes. "[M]any of the citizens harboured the monsters, feeding them with the costly oils and essences which they required, and building houses for them in their gardens...."

One suspects that Tolkien would have preferred to see Oxford return to the era of the donkey cart. That kind of nostalgia is familiar in Tolkien's work — the idea that we developed just a little too far, skipping past an Eden we failed to recognize a generation or two ago. (For Tolkien, the paragon of paradise seems to have been a rural village around the time of Queen Victoria's Diamond Jubilee.) But he also knows that mankind's impulse to develop is something we cannot help. And the inevitable blowback we get from our hubris is something we cannot avoid. That defeatist attitude is suggested in the frame narrative to The Bovadium Fragments, in which the archaeologists smugly declare their superiority to the extinct citizens of old Oxford. "We at any rate are not likely to fall into such folly," one of them says.

In their more enlightened future, we are told, they only pursue the more benign science of longevity. Their wish is that one day they shall "at last conquer mortality, and not 'die like animals.'" But humans are animals, Tolkien argues. And in stretching beyond that, we may find progress and modern conveniences like motorcars. But perhaps we also pave a road to Isengard. And we may not recognize that destination until it is too late — until we are trapped within its walls, suffocating on our own exhaust fumes.

DRM

Fleischer Studios Criticized for Claiming Betty Boop is Not Public Domain (duke.edu)

Here it is — Betty Boop's first appearance, which became public domain on Thursday. It's a 60-second song halfway through a longer cartoon about a restaurant titled Dizzy Dishes. (The first scene makes it clear this is a restaurant of anthropomorphized animals — which explains why the as-yet-unnamed character has floppy dog ears...)

So Fleischer Studios has now warned that claiming Betty Boop is public domain "is actually not true." Very often, different versions of a character that have been developed later can independently enjoy copyright protection. Also, names and depictions of a character very frequently will remain separately protected by trademark and other laws, regardless of whether the copyright has expired.
But is that really true? Fleischer Studios went out of business in 1946, notes Los Angeles Times columnist Michael Hiltzik: By then it had sold the rights to its cartoons and the Betty Boop character. A new Fleischer Studios was formed in the 1970s by Fleischer descendants, including Max's grandson Mark Fleischer, and set about repurchasing the rights that had been sold. Whether it reacquired the rights to Betty Boop is up for discussion... According to a federal appeals court ruling in 2011, the answer is no. Having navigated its way through the three or four copyright transfers that followed the original rights sale, the appeals court concluded that the original Fleischer studios sold the rights to Betty Boop and the related cartoons to Paramount in 1941 but couldn't verify that the rights to the character had been sold in an unbroken chain placing them with the new studio. The "chain of title" was broken, the appellate judges found — but they didn't say who ended up with Betty Boop.
And last month Cory Doctorow pointed out that "while the Fleischer studio (where Betty Boop was created) renewed the copyright on Dizzy Dishes, there were many other shorts that entered the public domain years ago." That means that all the aspects of Betty Boop that were developed for Dizzy Dishes are about to enter the public domain. But also, all the aspects of Betty Boop from those non-renewed shorts are already in the public domain. But some of the remaining aspects of Betty Boop's character design — those developed in subsequent shorts that were also renewed — are also in the public domain, because they aren't copyrightable in the first place, because they're "generic" or "trivial," constitute "minuscule variations," or are so standard or indispensable as to be a "scène à faire...." But we're not done yet! Just because some later aspects of the Betty Boop character design are still in copyright, it doesn't follow that you aren't allowed to use them! U.S. Copyright law has a broad set of "limitations and exceptions," including fair use.
So while Fleischer Studios insists Betty Boop "will continue to enjoy copyright and trademark protection for years to come," Doctorow has some thoughts on that trademark: Even the Supreme Court has (repeatedly) upheld the principle that trademark can't be used as a backdoor to extend copyright.

That's important, because the current Betty Boop license-holders have been sending out baseless legal threats claiming that their trademarks over Betty Boop mean that she's not going into the public domain. They're not the only ones, either! This is a routine, petty scam perpetrated by marketing companies that have scooped up the (usually confused and difficult-to-verify) title to cultural icons and then gone into business extracting rent from people and businesses who want to make new works with them.

"Trademarks only prevent you from using character names and depictions in a way that misleads consumers into thinking your work is produced or sponsored by the rightsholder," Duke University clarified in their January 1st explanation of Public Domain Day 2026 — "for example, by putting them on unlicensed merchandise. They do not prevent you from using them in a new creative work clearly unaffiliated with the rights owners..."

"Regardless of who owns the later versions of the character, the original Betty Boop character from 1930 is in the public domain." This is another reason why copyright expiration is so important: It brings clarity... Under US copyright law, anyone is free to use characters as they appeared in public domain works. If those characters recur in later works that are still under copyright, the rights only extend to the newly added material in those works, not the underlying material from the public domain works — that content remains freely available. Second, with newer versions of characters, copyright only extends to those new features that qualify for such protection...

Dozens of post-1930 Betty Boop cartoons, including Ker-Choo (1932) and Poor Cinderella (1934), did not have renewals. The newly added material in these animations is also in the public domain... To sum up the copyright story so far: in 2026, the underlying Betty Boop character goes into the public domain. She is joined there by the attributes, plot lines, and dialogue that were first introduced in those later cartoons without renewed copyrights, as well as the uncopyrightable attributes of her later instantiations...

Certainly, there would be a risk of consumer confusion if you use Betty Boop as a brand identifier on the kind of merchandise Fleischer sells — jewelry, backpacks, water bottles, dolls. Trademark law does protect Fleischer against that risk. Contrast these uses with simply putting the Boop character in a new artistic work. This is exactly what copyright expiration is intended to allow. Were trademark law to prevent this, then trademark rights would be leveraged to obtain the effective equivalent of a perpetual copyright — precisely what the Supreme Court said we cannot do...

If courts have delineated the line between copyright and trademark, why is there so little clarity in this area? Sadly, companies sometimes claim to have more expansive rights than they actually do, capitalizing on fear, uncertainty, and doubt to collect royalties and licensing fees to which they are not legally entitled.

Hardware

Stewart Cheifet, Computer Chronicles Host, Dies At 87 (goldsteinsfuneral.com) 19

Pibroch(CiH) writes: According to the obituary linked, Stewart Cheifet of Computer Chronicles fame has died. The obituary states he passed Dec 28, 2025. Cheifet and Digital Research founder Gary Kildall hosted the public television show The Computer Chronicles starting in 1984, and Stewart continued to host the show well into the 1990s. He was well-known for his affable presence and adeptness at interviewing guests and finding out the straight dope about their products. He had recently undergone spinal surgery and had somewhat disappeared from public view after the death of his wife Peta in 2024.
AI

Rob Pike Angered by 'AI Slop' Spam Sent By Agent Experiment (simonwillison.net) 54

"Dear Dr. Pike, On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation, Claude Opus 4.5, AI Village."

"IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...."

Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."

Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.)

Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement." The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."]

Willison notes that Anders Hejlsberg and Guido van Rossum got spammed with "gratitude" too: "My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment."

The AI Village project touched on this in their November 21st blog post What Do We Tell the Humans?, which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses."

The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.
AI

Did Tim Cook Post AI Slop in His Christmas Message Promoting 'Pluribus'? (daringfireball.net) 23

Artist Keith Thomson is a modern (and whimsical) Edward Hopper. And Apple TV says he created the "festive artwork" shared on X by Apple CEO Tim Cook on Christmas Eve, "made on MacBook Pro."

Its intentionally-off picture of milk and cookies was meant to tease the season finale of Pluribus. ("Merry Christmas Eve, Carol..." Cook had posted.)

But others were convinced that the weird image was AI-generated.

Tech blogger John Gruber was blunt. "Tim Cook posts AI Slop in Christmas message on Twitter/X, ostensibly to promote 'Pluribus'." As for sloppy details, the carton is labeled both "Whole Milk" and "Lowfat Milk", and the "Cow Fun Puzzle" maze is just goofily wrong. (I can't recall ever seeing a puzzle of any kind on a milk carton, because they're waxy and hard to write on. It's like a conflation of milk cartons and cereal boxes.)
Tech author Ben Kamens — who just days earlier had blogged about generating mazes with AI — said the image showed the "specific quirks" of generative AI mazes (including the way the maze couldn't be solved, except by going around the maze altogether). Former Google Ventures partner M.G. Siegler even wondered if the AI use intentionally echoed the themes of Pluribus — e.g., the creepiness of a collective intelligence — since otherwise "this seems far too obvious to be a mistake/blunder on Apple's part." (Someone on Reddit pointed out that in Pluribus's dystopian world, milk plays a key role — and the open spout of the "natural" milk's carton does touch a suspiciously-shining light on the Christmas tree...)
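Kamens's observation about unsolvable AI-generated mazes is the kind of thing that can be checked mechanically. As a sketch (not Kamens's actual code), a breadth-first search over a grid maze confirms in a few lines whether any path exists from entrance to exit:

```python
from collections import deque

def maze_solvable(grid, start, goal):
    """Breadth-first search over a grid maze.
    grid: list of equal-length strings, '#' = wall, anything else = open.
    Returns True if goal is reachable from start."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# Like the milk-carton maze: no interior path from entrance to exit,
# so the only "solution" is to go around the maze altogether.
broken = [".#.",
          ".#.",
          ".#."]
print(maze_solvable(broken, (0, 0), (0, 2)))  # False

fixed = [".#.",
         "...",
         ".#."]
print(maze_solvable(fixed, (0, 0), (0, 2)))  # True
```

A generative model draws maze-shaped pixels rather than running anything like this search, which is why plausible-looking but unsolvable mazes are such a recognizable tell.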

Slashdot contacted artist Keith Thomson to try to ascertain what happened...
Programming

What Might Adding Emojis and Pictures To Text Programming Languages Look Like? 83

theodp writes: We all mix pictures, emojis, and text freely in our communications. So why not in our code? That's the premise of "Fun With Python and Emoji: What Might Adding Pictures to Text Programming Languages Look Like?" (two-image Bluesky explainer; full slides), which takes a look at what mixing emoji with Python and SQL might look like. A GitHub repo includes a Google Colab-ready Python notebook proof of concept that does rudimentary emoji-to-text translation via an IPython input transformer.
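The input-transformer trick is simple enough to sketch. The mapping and helper below are hypothetical — the repo's actual emoji table and hook-up will differ — but they show the shape of the idea: rewrite emoji into ordinary Python source before the interpreter ever sees it:

```python
# Hypothetical emoji-to-Python mapping; the linked repo's table may differ.
EMOJI_TO_TEXT = {
    "🖨️": "print",
    "➕": "+",
    "✖️": "*",
}

def detranslate(line: str) -> str:
    """Replace each known emoji with its textual Python equivalent."""
    for emoji, text in EMOJI_TO_TEXT.items():
        line = line.replace(emoji, text)
    return line

# Inside IPython, such a function can be registered as an input
# transformer, so emoji source is rewritten before compilation:
#   ip = get_ipython()
#   ip.input_transformers_cleanup.append(
#       lambda lines: [detranslate(l) for l in lines])

print(detranslate("🖨️(1 ➕ 2)"))  # print(1 + 2)
```

Because the transformation happens before parsing, the Python grammar itself never changes — which is also why this stays a proof of concept rather than a new language.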

So, in the Golden Age of AI -- some 60+ years after Kenneth Iverson introduced the chock-full-of-symbols APL -- are valid technical reasons still keeping symbols and pictures out of code, or is their absence more of a programming dogma thing?
AI

Does AI Really Make Coders Faster? (technologyreview.com) 139

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me."

But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..." There are also more specific security concerns. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.
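One cheap line of defense against such hallucinated dependencies is to statically list a snippet's imports and flag any top-level name that doesn't resolve in the current environment, so it gets a human look before anyone runs `pip install`. A minimal sketch (the function name is ours, not from any tool cited in the article):

```python
import ast
import importlib.util

def unresolved_imports(source: str) -> set:
    """Return top-level module names imported in `source` that don't
    resolve locally -- candidates for hallucinated (or slopsquattable)
    package names that deserve manual review before installing."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return {n for n in names if importlib.util.find_spec(n) is None}

snippet = "import os\nimport totally_made_up_pkg\nfrom json import loads\n"
print(unresolved_imports(snippet))  # {'totally_made_up_pkg'}
```

An unresolved name isn't proof of a hallucination — it may simply be uninstalled — but it is exactly the set of names an attacker could register on a package index, so it's worth verifying each one by hand.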

Other key points from the article:
  • LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
  • "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
  • "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
  • "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools."

The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


Security

Security Researcher Found Critical Kindle Vulnerabilities That Allowed Hijacking Amazon Accounts (thetimes.com) 13

The Black Hat Europe hacker conference in London included a session titled "Don't Judge an Audiobook by Its Cover" about two critical (and now fixed) flaws in Amazon's Kindle. The Times reports both flaws were discovered by engineering analyst Valentino Ricotta (from the cybersecurity research division of Thales), who was awarded a "bug bounty" of $20,000 (£15,000). He said: "What especially struck me with this device, that's been sitting on my bedside table for years, is that it's connected to the internet. It's constantly running because the battery lasts a long time and it has access to my Amazon account. It can even pay for books from the store with my credit card in a single click. Once an attacker gets a foothold inside a Kindle, it could access personal data, your credit card information, pivot to your local network or even to other devices that are registered with your Amazon account."

Ricotta discovered flaws in the Kindle software that scans and extracts information from audiobooks... He also identified a vulnerability in the onscreen keyboard. Through both of these, he tricked the Kindle into loading malicious code, which enabled him to take the user's Amazon session cookies — tokens that give access to the account. Ricotta said that people could be exposed to this type of hack if they "side-load" books on to the Kindle through non-Amazon stores.

Ricotta donated his bug bounties to charity...

Slashdot Top Deals