Medicine

Retina Scan for Diabetes Could Also Reduce Deaths During Pregnancy in Developing Countries (gatesnotes.com) 20

This week Bill Gates wrote a blog post about a special camera from medtech startup Remidio, which delivers high-resolution images of a patient's retina in seconds. The camera plugs into a phone running an AI system that watches for early signs of diabetes — all without needing a blood draw, eye dilation, or a diabetes specialist. It's already been used in 40 countries for more than 15 million patients. But that same hardware, with different software, can also flag the conditions that drive so many dangerous pregnancies. Gestational diabetes sharply increases the risk of pre-eclampsia [a spike in blood pressure during pregnancy responsible for half a million fetal deaths every year and 70,000 maternal deaths]... In most of rural sub-Saharan Africa and South Asia, it usually isn't screened for at all, because the standard test requires a lab. A retinal scan offers a different way in.

Remidio's device is currently being used in India to screen pregnant women for conditions that drive stillbirth. And researchers are now adapting the same hardware to screen for anemia and hypertension, too... [S]mall, portable, affordable diagnostics in the hands of community health workers are exactly the kind of lever that can start to move a number that hasn't moved in a long time.

Linux

Linux Percentage of Steam Users Doubled in One Year (phoronix.com) 44

Steam on Linux use in March "had skyrocketed to 5.33%..." reports Phoronix, "easily the highest level we've seen Steam on Linux at since its inception more than a decade ago."

So what happened in April? [April's results] point to Linux having a 4.52% marketshare on Steam, a drop of 0.81 percentage points from March. Year-over-year that's roughly double: Steam on Linux stood at 2.27% in April 2025, and at 1.9% two years ago in April 2024.
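
The survey deltas quoted above are easy to sanity-check; a quick arithmetic pass over the figures as given in the text:

```python
# Steam Hardware Survey figures quoted above (percent of Steam users on Linux).
march_share = 5.33   # March peak
april_share = 4.52   # latest April figure
april_prior = 2.27   # one year earlier (April 2025)
april_older = 1.9    # two years earlier (April 2024)

drop = round(march_share - april_share, 2)       # month-over-month change
yoy_ratio = round(april_share / april_prior, 2)  # year-over-year multiple

print(drop)       # 0.81 percentage points
print(yoy_ratio)  # 1.99 -- "roughly double"
```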

Programming

AI Agent Designed To Speed Up Company's Coding Wipes Entire Database In 9 Seconds (livescience.com) 110

joshuark shares a report from Live Science: An AI coding agent designed to help a small software company streamline its tasks instead blew a hole through its business in just nine seconds. PocketOS founder Jer Crane said that the AI coding agent Cursor -- powered by Anthropic's Claude Opus 4.6 model -- deleted the company's entire production database and backups with a single call to its cloud provider, Railway, on April 24. [...] "This isn't a story about one bad agent or one bad API [Application Programming Interface]," Crane wrote in an X post. "It's about an entire industry building AI-agent integrations into production infrastructure faster than it's building the safety architecture to make those integrations safe."

Crane's company, PocketOS, makes software for car rental companies, handling tasks such as reservations, payments, customer records and vehicle tracking. After the deletion, Crane said customers lost reservations and new signups, and some could not find records for people arriving to pick up their rental cars. "We've contacted legal counsel," Crane wrote. "We are documenting everything." Crane explained that Cursor found an API token -- a "digital key" made of a short sequence of code that lets software talk to other services and prove it has permission to act -- in an unrelated file, which it then used to run the destructive command. According to Crane, Railway's setup allowed the deletion without confirmation, and because the backups were stored close enough to the main database, they were also erased.
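
The failure mode Crane describes (a credential lying in a reachable file, plus a destructive API call that requires no confirmation) can be guarded against at the integration layer. Below is a minimal, hypothetical sketch; the client and operation names are illustrative, not Railway's actual API:

```python
# Hypothetical guard for agent-driven cloud calls. Nothing here is
# Railway's real API; the operation names are illustrative.

class ConfirmationRequired(Exception):
    """Raised when a destructive operation lacks explicit confirmation."""

DESTRUCTIVE_OPS = {"delete_database", "delete_backups"}

def call_cloud_api(client, op, *args, confirm=None):
    # An agent that merely stumbled on a token cannot pass this gate
    # without deliberately restating the destructive action by name.
    if op in DESTRUCTIVE_OPS and confirm != op:
        raise ConfirmationRequired(
            f"refusing {op!r}: pass confirm={op!r} to proceed")
    return getattr(client, op)(*args)
```

Keeping backups under a separate credential scope (or a separate account entirely) would address the second failure Crane describes, where backups stored alongside the main database were erased by the same call.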

"[Railway] resolved the issue and restored the data," Railway confirmed via email to Live Science. "We maintain both user backups as well as disaster backups. We take data very, VERY seriously." In his post, he pointed to earlier reports of Cursor ignoring user rules, changing files it was not supposed to touch and taking actions beyond the task it had been given. To him, the database wipe was not a freak accident but the next step in a larger, more concerning, pattern. After the database vanished, Crane asked Cursor to explain what happened. The AI agent reportedly admitted that it had guessed, acted without permission and failed to understand the command before running it. "I violated every principle I was given," the AI agent wrote. "I guessed instead of verifying. I ran a destructive action without being asked. I didn't understand what I was doing before doing it." The statement reads like a confession [...]. "We are not the first," Crane wrote. "We will not be the last unless this gets airtime."

AI

The Case Against an Imminent Software Developer Apocalypse (zdnet.com) 59

ZipNada shares a report from ZDNet: Given the dour headlines of late concerning the diminishing number of entry-level software development jobs, coupled with predictions of applications being entirely AI-generated, one could be forgiven for assuming that software developers may soon be an endangered species. However, the data tells a different story. James Bessen, professor at Boston University, has been pushing back for some time against the talk of AI and automation displacing jobs on a mass scale, and lately has been arguing that the roles of software developers are nowhere near extinction.

AI is certainly not killing the software developer, Bessen said in a recent analysis (PDF). AI is taking over software development tasks and boosting productivity and output, but that is not translating into lost jobs, he argued. Instead, the types of software skills sought by companies are changing. "Surprisingly, however, after three years of AI use, software developer jobs have continued to grow robustly, reaching record levels of employment -- 2.5 million in February," Bessen said in the report, citing data from the US Bureau of Labor Statistics. The number of software developers in the US has grown by over 400,000, or 19%, since ChatGPT was introduced in 2022. At that time, the employed software developer population was just under 2.1 million. [...]

The productivity uptick developers are seeing may ultimately be a boost to their professional opportunities, however. "An important and possibly disruptive change is happening, but the common view misunderstands what is going on," Bessen pointed out in his report. "Careful case studies find that AI improves the productivity of software developers -- that is, the software produced per developer -- by 30%, 50%, or more. And the rate of productivity improvement in software development is improving." Tellingly, since 2022, when ChatGPT was introduced, developer productivity has increased noticeably, Bessen continued. "From 2003 to 2022, developer productivity grew at 3.9% per year; but from 2022 through 2025, it grew at 6% per year." [...] A coming flood of new software products, now more likely to be enhanced by AI, will continue to create jobs for developers, Bessen predicted. "Thus, mass unemployment of software developers seems unlikely to happen soon." This doesn't mean the job descriptions of developers or other computer occupations will remain static. AI is shifting and re-inventing these roles, Bessen added.
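
The two growth rates Bessen cites compound; a quick back-of-the-envelope check of what the quoted figures imply over a three-year span:

```python
# Compounding the productivity growth rates quoted above:
# 3.9%/year (the 2003-2022 trend) versus 6%/year (2022 through 2025).
def compound(annual_rate, years):
    return (1 + annual_rate) ** years

old_trend = round(compound(0.039, 3), 3)  # 1.122 -> about +12% over 3 years
new_trend = round(compound(0.06, 3), 3)   # 1.191 -> about +19% over 3 years
```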

AI

GPT-5.5 Matches Heavily Hyped Mythos Preview In New Cybersecurity Tests (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: Last month, Anthropic made a big deal about the supposedly outsize cybersecurity threat represented by its Mythos Preview model, leading the company to restrict the initial release to "critical industry partners." But new research from the UK's AI Security Institute (AISI) suggests that OpenAI's GPT-5.5, which launched publicly last week, reached "a similar level of performance on our cyber evaluations" as Mythos Preview, which the group evaluated last month.

Since 2023, the AISI has run a variety of frontier AI models through 95 different Capture the Flag challenges designed to test capabilities on cybersecurity tasks, such as reverse engineering, web exploitation, and cryptography. On the highest-level "Expert" tasks, GPT-5.5 passed an average of 71.4 percent of challenges, slightly higher than the 68.6 percent achieved by Mythos Preview (though within the margin of error). In one particularly difficult task that involved building a disassembler to decode a Rust binary, AISI notes that "GPT-5.5 solved the challenge in 10 minutes and 22 seconds with no human assistance at a cost of $1.73" in API calls.

GPT-5.5 also matched Mythos Preview in its progress on "The Last Ones" (TLO), an AISI test range set up to simulate a 32-step data extraction attack on a corporate network. GPT-5.5 succeeded in 3 of 10 attempts on TLO, compared to 2 of 10 for Mythos Preview -- no previous model had ever succeeded at the test even once. But GPT-5.5 still fails at AISI's more difficult "Cooling Tower" simulation of an attempted disruption of the control software for a power plant, as every previously tested AI model also has. The new results for GPT-5.5 suggest that, when it comes to cybersecurity risk, Mythos Preview was likely not "a breakthrough specific to one model" but rather "a byproduct of more general improvements in long-horizon autonomy, reasoning, and coding," AISI writes.

Bug

Hackers Are Actively Exploiting a Bug In cPanel, Used By Millions of Websites (techcrunch.com) 20

Hackers are actively exploiting a critical cPanel and WHM vulnerability, tracked as CVE-2026-41940, that allows remote attackers to bypass the login screen and gain full administrative access to affected web servers. Major hosts including Namecheap, HostGator, and KnownHost have taken mitigation steps or patched systems, but cPanel is urging all customers and web hosts to update immediately because the software is widely used across millions of websites. TechCrunch reports: cPanel and WHM are two software suites used for managing web servers that host websites, manage emails, and handle important configurations and databases needed to maintain an internet domain. The two suites have deep access to the servers that they manage, potentially allowing a malicious hacker unrestricted access to data managed by the affected software.

Given the ubiquity of the cPanel and WHM software across the web hosting industry, hackers could compromise potentially large numbers of websites that haven't patched the bug. Canada's national cybersecurity agency said in an advisory that the bug could be exploited to compromise websites on shared hosting servers, such as large web hosting companies.

The agency said that "exploitation is highly probable" and that immediate action from cPanel customers, or their web hosts, is necessary to prevent malicious access. [...] One web hosting company says it found evidence that hackers have been abusing the vulnerability for months before the attempts were discovered.

Open Source

Microsoft Open-Sources 'Earliest DOS Source Code Discovered To Date' (arstechnica.com) 47

An anonymous reader quotes a report from Ars Technica: Several times in the last couple of decades, Microsoft has released source code for the original MS-DOS operating system that kicked off its decades-long dominance of consumer PCs. This week, the company has reached further back than ever, releasing "the earliest DOS source code discovered to date" along with other documentation and notes from its developer.

Today's source release is so old that it predates the MS-DOS branding, and it includes "sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK," write Microsoft's Stacey Haffner and Scott Hanselman in their co-authored post about the release. [...] This source code is old enough that it hadn't been stored digitally. "A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini," calling itself the "DOS Disassembly Group," painstakingly transcribed and scanned in code from paper printouts provided by 86-DOS creator Tim Paterson. This process was made even more difficult because modern OCR software struggled with the quality of the decades-old printout.

Ubuntu

Ubuntu's AI Plans Have Linux Users Looking For a 'Kill Switch' (theverge.com) 135

Canonical's plan to add AI features to Ubuntu has sparked pushback from users who are concerned it could follow Windows 11's AI-heavy direction. "After Canonical's announcement earlier this week that it's bringing AI features to Ubuntu, replies included requests for an AI 'kill switch' or a way to disable the upcoming features," reports The Verge. Canonical says it has no plans for a "global AI kill switch" but it will allow users to remove any AI features they don't want. From the report: In his original post, [Canonical's VP of engineering, Jon Seager] said the upcoming AI features will include accessibility tools like AI speech-to-text and text-to-speech, along with agentic AI features for tasks like troubleshooting and automation. Canonical is also encouraging its engineers to use AI more and plans to begin introducing AI features in Ubuntu "throughout the next year."

In a follow-up comment, Seager clarified that, "my plan is to introduce AI-backed features as a 'preview' on a strictly opt-in basis in [Ubuntu version] 26.10. In subsequent releases, my plan is to have a step in the initial setup wizard that allows the user to choose whether or not they'd like the AI-native features enabled." Ultimately, he said, "All of these capabilities will be delivered as Snaps to the OS, layered on top of the existing Ubuntu stack. That means there will always be the option of removing those Snaps."

Users who prefer to avoid AI entirely could switch to other distros like Linux Mint, Pop!_OS, or Zorin OS. "These distros have some similarities to Ubuntu, but may not necessarily adopt the new AI features Canonical is rolling out," adds The Verge.

Emulation (Games)

GitHub 'No Longer a Place For Serious Work', Says Hashicorp Co-Founder (theregister.com) 82

Hashicorp co-founder Mitchell Hashimoto says GitHub's frequent outages have made it "no longer a place for serious work," prompting him to move his Ghostty terminal emulator project elsewhere after 18 years on the platform. The Register reports: "I've been angry about it. I've hurt people's feelings. I've been lashing out. Because GitHub is failing me, every single day, and it is personal. It is irrationally personal," he wrote. The reason for his ire is the service has become unreliable. "For the past month I've kept a journal where I put an 'X' next to every date where a GitHub outage has negatively impacted my ability to work," he wrote. "Almost every day has an 'X'. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage."

Hashimoto penned his post a few days before an April 28 incident that saw pull requests fail to complete due to an Elasticsearch SNAFU. Incidents like that mean Hashimoto has decided GitHub "is no longer a place for serious work if it just blocks you out for hours per day, every day." "It's not a fun place for me to be anymore," he lamented. "I want to be there but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software."

The developer says he wants GitHub to improve, but "I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go." He's open to a return if GitHub can deliver "real results and improvements, not words and promises." But for now, he's working to move Ghostty to another collaborative code locker. "We have a plan but I'm also very much still in discussions with multiple providers (both commercial and FOSS)," Hashimoto wrote. "It'll take us time to remove all of our dependencies on GitHub and we have a plan in place to do it as incrementally as possible."

He's doing the equivalent of leaving a toothbrush at a former partner's house by leaving a read-only mirror of Ghostty on GitHub, and by keeping his personal projects on the Microsoft-owned service. But Hashimoto's moving his day job somewhere new. "Ghostty is where I, our maintainers, and our open source community are most impacted so that is the focus of this change. We'll see where it goes after that," he concluded.

DRM

Sony Rolls Out 30-Day Online DRM Check-In For PlayStation Digital Games (tomshardware.com) 89

Sony is reportedly rolling out a 30-day online check-in requirement for some digital PS4 and PS5 games, meaning players could temporarily lose access if their console does not reconnect to renew the license. Tom's Hardware reports: In the info page of an affected game, you'd see a new validity period and a "remaining time" deadline. At first, this seemed like a software bug, but now PlayStation Support has confirmed its authenticity to multiple users. PlayStation owners are furious about the change.

From what we've seen, this DRM is intended for digital game copies. It works by instating a mandatory online check-in where you have to connect to the internet within a rolling 30-day window or risk losing access to the game. Afterward, you can still restore access, but you'll need an internet connection to renew the game's license first. So far, it seems like only games installed after the recent March firmware update are affected.

Affected customers report that setting your PS4 or PS5 as the primary console doesn't alleviate this check-in policy either. No matter what, any game you download from now on will feature this new requirement, effectively eliminating the concept of offline play for even single-player titles.
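
As described, the check-in behaves like a rolling license lease. A minimal model of that logic (illustrative only; Sony has not published the actual mechanism):

```python
# Toy model of a rolling 30-day online check-in for a digital license.
# Illustrative only -- not Sony's actual implementation.
from datetime import datetime, timedelta

CHECKIN_WINDOW = timedelta(days=30)

class GameLicense:
    def __init__(self, now):
        self.last_checkin = now  # license validated at install time

    def check_in(self, now):
        """Renew the license; in practice this requires an internet connection."""
        self.last_checkin = now

    def is_playable(self, now):
        # Playable only while the last successful check-in is recent enough.
        return now - self.last_checkin <= CHECKIN_WINDOW
```

Under this model a console that stays offline past the window loses access, and a single successful check-in restores it, matching the behavior users report.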

AI

The Bloomberg Terminal Is Getting an AI Makeover (wired.com) 6

An anonymous reader quotes a report from Wired: For all its famous intractability, the Bloomberg Terminal has long inspired devotion bordering on obsession. Among traders, the ability to chart a path through the software's dizzying scrolls of numbers and text to isolate far-flung information is the mark of a seasoned professional. But as a greater mass of data is fed into the Terminal -- not only earnings and asset prices, but weather forecasts, shipping logs, factory locations, consumer spending patterns, private loans, and so on -- valuable information is being lost. "It has become more and more untenable," says Shawn Edwards, chief technology officer at Bloomberg. "You miss things, or it takes too long."

To try to remedy the problem, Bloomberg is testing a chatbot-style interface for the Terminal, ASKB (pronounced ask-bee), built atop a basket of different language models. The broad idea is to help finance professionals condense labor-intensive tasks and to make it possible to test abstract investment theses against the data through natural-language prompts. As of publication, the ASKB beta is open to roughly a third of the software's 375,000 users; Bloomberg has not specified a date for a full release.

Wired spoke with Edwards at Bloomberg's palatial London headquarters in early April, where he shared several examples of what ASKB can do. "With ASKB, I can create workflow templates. I can write a long query, and say, 'Hey, here's all the data I'm going to need. Give me a synopsis of the bull and bear cases, what the Street is saying, what the guidance is.' Now, I want to schedule [the workflows] or trigger them when I see this or that condition in the world."

As for what separates mediocre traders from the best, assuming both have access to the same data, Edwards said: "These tools are not magical. They don't make an average [employee] all of a sudden great. The difference will be your ideas. In the hands of experts, it allows them to do better analysis, deeper research -- to sift through 10 great ideas when they might have only had time for one. If you're a mediocre analyst, they'll be 10 mediocre ideas."

Businesses

Microsoft To Stop Sharing Revenue With OpenAI (cnbc.com) 15

Bloomberg reports that Microsoft is ending revenue-sharing payments to OpenAI (paywalled; alternative source) and making the partnership non-exclusive. "The rapid pace of innovation requires us to continue to evolve our partnership to benefit our customers and both companies," Microsoft said Monday in a blog post. From the report: The revised deal is meant to simplify a complicated relationship between two partners that has been foundational to OpenAI's rise and the broader AI boom. OpenAI has since pursued partnerships with multiple cloud providers, including Microsoft rival Amazon.com Inc., to meet the growing computing needs of building and serving AI software to a wider audience. As part of OpenAI's restructuring last year as a for-profit business, Microsoft received a 27% ownership stake in the AI startup.

Unix

Remembering The 1984 Unix PC. Why Did It Fail So Hard? (youtu.be) 62

"I love these machines," writes long-time Slashdot reader Shayde: I was super-active in the Unix-PC Usenet groups back in the 90s... We hacked the hell out of them. They were small, sexy, and... they ran Unix!

Unfortunately, they were a commercial failure. There were so many things wrong with them — not just stuff that broke, but the baseline configuration was nigh on worthless. I recently was able to get another machine and got it up and running (with a few hiccups). I whipped up a video showing all the cool things it can do, but also running through what went wrong and why it ultimately failed.

The video shows the ancient green-on-black screen of 1984's AT&T Unix PC (with the OS running on a silicon drive emulation). The original machine had 512K of memory and a 10-megabyte hard drive described as slow, failure-prone, and noisy. There's also a floppy disk drive, and a separate MS-DOS board (with its own CPU) that could be plugged into the expansion slot — but the device was "remarkably heavy," weighing in at 40 pounds.

See the strange 1984 mouse, and its keyboard with both a Return key and a separate Enter key. There are even ports for phone landlines. "It looked great," Shayde says in the video, showing off its Spirograph demo and '80s-era games like Pong, Conway's Game of Life, GNU Chess, "Trk", and NetHack. But besides slow startup times, it was expensive — in today's dollars, it would've cost roughly $15,000 — and suffered from Unix's lack of spreadsheets, word processing software and other office productivity tools at the time. At that price the Unix PC couldn't compete with IBM's home computers and their desktop applications. "It just didn't have the resources, the software, the capabilities and the price point that made it attractive."
Government

Colorado Adds Open-Source Exemption to Age-Verification Bill (linuxiac.com) 29

Colorado's "age-attestation" bill left the House committee with new exemptions for open-source operating systems, applications, code repositories, and containerized software distribution, reports the blog Linuxiac: [The bill] focuses on operating system providers and application stores. Its main requirement is that these providers supply an age-related signal via an interface, so applications can determine whether a user is a minor... System76 founder Carl Richell shared on Fosstodon that the updated bill now includes "a strong exemption for open source distros and apps" and has passed in the House committee. He also quoted the key part, which says Article 30 does not apply to an operating system provider or developer that distributes software under license terms that let recipients copy, redistribute, and modify the software without restrictions from the provider or developer... This wording covers Linux distributions and many open-source applications without linking the exemption to any specific project, company, or ecosystem.

The amendment also excludes applications from free, public code repositories from being considered covered applications. It also excludes code repository providers and containerized software distribution from being defined as covered application stores. This is meant to prevent platforms like GitHub, GitLab, Docker, or Podman-based distributions from being treated like commercial app stores under the bill.

"There are more steps but we're on our way to protecting the open source community," Richell posted on Fosstodon, "at least in Colorado."

GNU is Not Unix

Free Software Foundation Says 'Responsible AI' Licenses Which Restrict Harmful Uses are Unethical and Nonfree (fsf.org) 49

The Free Software Foundation's Licensing and Compliance Manager published a blog post this week to explicitly state that "Responsible AI" Licenses (RAIL) are nonfree and unethical. The licenses restrict AI and ML software "from being used in a specific list of harmful applications," according to the license's website, "e.g. in surveillance and crime prediction." (The license's steering committee consists of volunteers from multiple academic institutions.)

But even though Responsible AI licenses are marketed as addressing ethical challenges, the FSF argues "they do not require anything that is really necessary for users to control their computing done with machine learning, including: complete training inputs, training configuration settings, trained model, or — last, but not least — the source code of software used for training, testing, and running tools based on machine learning." Thus, RAILed machine learning can be, and most probably will be, unethical. Use restrictions do not prevent these licenses from being used to exercise power over users...

RAIL licenses contribute to the unethical marketing of machine learning, again under the guise of morally loaded restrictions they purport to enforce. If we want software to help decrease social injustice, we should oppose licenses that restrict how software can be used. We should focus on effective ways of addressing injustices: government and community support for freedom-respecting tools and services; releasing programs under strong copyleft licenses; and entrusting copyrights to organizations that have the resources to enforce copyleft.

Software freedom must be defended, not denied. More specifically, the more free software is out there, the more likely people will collaborate on tools and services that do not pose moral dangers and help solve existing ones. Free software also makes it more likely that users have real choices when looking for freedom-respecting ethical programs and tools based on machine learning. Denying people the freedom to use a particular program, as RAIL or similar licenses would have it, prevents them from using such a program for the common good.

AI

Claude Is Connecting Directly To Your Personal Apps 48

Anthropic is expanding Claude's app integrations beyond work tools, adding personal-service connectors like Spotify, Uber, AllTrails, TripAdvisor, Instacart, and TurboTax. The Verge reports: Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. Anthropic notes in its blog post announcing the new connectors that "Your data from [connected apps] isn't used to train our models, and the app doesn't see your other conversations with Claude. You can also disconnect it at any time."

Additionally, Anthropic says "there are no paid placements or sponsored answers in conversations with Claude." When multiple apps seem relevant, Claude will show results from both "ranked by what's most useful." Claude will also ask users to verify before taking actions like making a purchase or reservation using a connected app.

IOS

Tim Cook Calls Apple Maps Launch His 'First Really Big Mistake' as CEO (macrumors.com) 79

In a recent town hall meeting reported by Bloomberg (paywalled), Apple CEO Tim Cook named the troubled 2012 launch of Apple Maps as his "first really big mistake" in the role. "The product wasn't ready, and we thought it was because we were testing more of local kind of stuff," Cook told staff. MacRumors reports: Reflecting on the debacle, Cook said it was "valuable," noting that he expressed regret to users at the time and suggested they use competing navigation apps instead.

"We apologized for it, and we said, 'Go use these other apps. They're better than ours.' And that was some humble pie," Cook said. "But it was the right thing for our users. And so it's an example of keeping the user at the center of the decisions that we made." Cook added: "Now we've got the best map app on the planet. We learned about persistence, and we did exactly the right thing having made the mistake."

Security

Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (bloomberg.com) 32

Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. [...] To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.

Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.

AI

AI Tool Rips Off Open Source Software Without Violating Copyright (404media.co) 120

A satirical but working tool called Malus uses AI to create "clean room" clones of open-source software, aiming to reproduce the same functionality while shedding attribution and copyleft obligations. "It works," Mike Nolan, one of the two people behind Malus, who researches the political economy of open source software and currently works for the United Nations, told 404 Media. "The Stripe charge will provide you the thing, and it was important for us to do that, because we felt that if it was just satire, it would end up like every other piece of research I've done on open source, which ends up being largely dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation." 404 Media reports: Malus's legal strategy for bypassing copyright is based on a historically pivotal moment for software and copyright law dating back to 1982. Back then, IBM dominated home computing, and competitors like Columbia Data Products wanted to sell products that were compatible with software that IBM customers were already using. Reverse engineering IBM's computer would have infringed on the company's copyright, so Columbia Data Products came up with what we now know as a "clean room" design.

It tasked one team with examining IBM's BIOS and creating specifications for what a clone of that system would require. A different "clean" team, one that was never exposed to IBM's code, then created BIOS that met those specifications from scratch. The result was a system that was compatible with IBM's ecosystem but didn't violate its copyright because it did not copy IBM's technical process and counted as original work.

This clean room method, which has been validated by case law and dramatized in the first season of Halt and Catch Fire, made computing more open and competitive than it would have been otherwise. But it has taken on new meaning in the age of generative AI. It is now easier than ever to ask AI tools to produce software that is identical in function to existing open source projects, and that, some would argue, are built from scratch and are therefore original work that can bypass existing copyright licenses. Others would say that software produced by large language models is inherently derivative, because like any LLM output, it is trained on the collective output of humans scraped from the internet, including specific open source projects.

Malus (pronounced malice) uses AI to do the same thing. "Finally, liberation from open source license obligations," Malus's site says. "Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems." Copyleft is a type of copyright license that ensures reproductions or applications of the software keep it free to share and modify.

Businesses

SpaceX Strikes Deal With Coding Startup Cursor For $60 Billion (nytimes.com) 74

An anonymous reader quotes a report from the New York Times: SpaceX, Elon Musk's rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence start-up Cursor that could result in its acquiring the young company for $60 billion. SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. In a social media post, SpaceX said the combination with Cursor, which makes code-writing software, would "allow us to build the world's most useful" A.I. models.

SpaceX added that the agreement gave it the option "to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." It is unclear if the companies plan to consummate the deal before or after SpaceX's I.P.O., which could happen as early as June. [...] Cursor, which has raised more than $3 billion in funding, was founded in 2022 and made waves as a fast-growing A.I. start-up. It was under pressure in recent months after OpenAI and Anthropic announced competing code-writing products that were embraced by tech companies. Cursor had been in talks to raise funding in recent weeks.
