United Kingdom

Were Still More UK Postmasters Also Wrongly Prosecuted Over Accounting Bug? (computerweekly.com) 48

U.K. postmasters were mistakenly sent to prison due to a bug in their "Horizon" accounting software — as first reported by Computer Weekly back in 2009. Nearly 16 years later, the same site reports that now the Scottish Criminal Cases Review Commission "is attempting to contact any former subpostmasters that could have been prosecuted for unexplained losses on the Post Office's pre-Horizon Capture software.

"There are former subpostmasters that, like Horizon users, could have been convicted of crimes based on data from these systems..." Since the Post Office Horizon scandal hit the mainstream in January 2024 — revealing to a wide audience the suffering experienced by subpostmasters who were blamed for errors in the Horizon accounting system — users of Post Office software that predated Horizon have come forward... to tell their stories, which echoed those of victims of the Horizon scandal. The Criminal Cases Review Commission for England and Wales is now reviewing 21 cases of potential wrongful conviction... where the Capture IT system could be a factor...

The SCCRC is now calling on people who might have been convicted based on Capture accounts to come forward. "The commission encourages anyone who believes that their criminal conviction, or that of a relative, might have been affected by the Capture system to make contact with it," it said. The statutory body is also investigating a third Post Office system, known as Ecco+, which was also error-prone...

A total of 64 former subpostmasters in Scotland have now had their convictions overturned through legislation passed by the Scottish Parliament. So far, 97 convicted subpostmasters have come forward and 86 have been assessed: 64 convictions have been overturned and 22 rejected, with another 11 still to be assessed. An independent group, fronted by a former Scottish subpostmaster, is also calling on users of any of the Post Office systems to come forward to tell their stories and to get support in seeking justice and redress.

Businesses

Makers of Rent-Setting Software Sue California City Over Ban (apnews.com) 95

Berkeley, California is "the latest city to try to block landlords from using algorithms when deciding rents," reports the Associated Press (noting that officials in many cities claim the practice is driving up the price of housing).

But then real estate software company RealPage filed a federal lawsuit against Berkeley on Wednesday: Texas-based RealPage said Berkeley's ordinance, which goes into effect this month, violates the company's free speech rights and is the result of an "intentional campaign of misinformation and often-repeated false claims" about its products.

The U.S. Department of Justice sued RealPage in August under former President Joe Biden, saying its algorithm combines confidential information from each real estate management company in ways that enable landlords to align prices and avoid competition that would otherwise push down rents. That amounts to cartel-like illegal price collusion, prosecutors said. RealPage's clients include huge landlords who collectively oversee millions of units across the U.S. In the lawsuit, the Department of Justice pointed to RealPage executives' own words about how their product maximizes prices for landlords. One executive said, "There is greater good in everybody succeeding versus essentially trying to compete against one another in a way that actually keeps the entire industry down."

San Francisco, Philadelphia and Minneapolis have since passed ordinances restricting landlords from using rental algorithms. The Department of Justice case remains ongoing, as do lawsuits against RealPage brought by tenants and the attorneys general of Arizona and Washington, D.C...

[On a conference call, RealPage attorney Stephen Weissman told reporters] RealPage officials were never given an opportunity to present their arguments to the Berkeley City Council before the ordinance was passed and said the company is considering legal action against other cities that have passed similar policies, including San Francisco.

RealPage blames high rents not on the software it makes, but on a lack of housing supply...
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
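The package's exact interface aside, the core step it performs is easy to picture. Below is a minimal, illustrative Python sketch, which assumes nothing about the model_signing package's real API (the function here is hypothetical), of reducing a model directory tree to a single manifest digest; in the workflow described above, that digest is what gets signed with a Sigstore-backed identity and re-checked at upload, deployment, or training time:

    import hashlib
    from pathlib import Path

    def manifest_digest(model_dir: str) -> str:
        """Hash every file under the model directory into one stable digest.

        Files are read in chunks because model weights are often many gigabytes.
        """
        entries = []
        for path in sorted(Path(model_dir).rglob("*")):
            if not path.is_file():
                continue
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            entries.append(f"{path.relative_to(model_dir)}:{h.hexdigest()}")
        # A signature is then produced over this single manifest digest;
        # verification recomputes it later and checks it against the signature.
        return hashlib.sha256("\n".join(entries).encode()).hexdigest()

    if __name__ == "__main__":
        print(manifest_digest("./my-model"))  # hypothetical model directory

In the real library, the signing step is delegated to Sigstore, so the digest is bound to an OpenID Connect identity and recorded in a public transparency log rather than signed with a locally managed key.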

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp) 1

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout.) [Since launching 20 years ago, PyPI's terms of service have only been updated twice.]

In other news, the security developer-in-residence at the Python Software Foundation has been continuing work on Software Bill-of-Materials (SBOM) support as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools such as vulnerability scanners, license compliance checkers, and static analysis tools. Bundled dependencies are common in scientific computing and AI packages, but also more generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing it ahead of its final acceptance into Python packaging standards. Seth has begun implementing code that tools can use when adopting the PEP, such as a project that abstracts different Linux system package managers' functionality to map a file path back to the metadata of the package that provides it.
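To illustrate where such SBOM documents would live, the sketch below assumes only the ".dist-info/sboms" layout described above (the function and the example package name are hypothetical, not code from the PEP or its reference implementation) and shows how a scanner could enumerate them for an installed package using the standard library:

    from importlib.metadata import files

    def find_sboms(dist_name: str):
        """Return metadata paths that look like PEP 770 SBOM documents.

        PEP 770 (provisional) reserves a ".dist-info/sboms/" directory for
        SBOMs describing bundled components, so a scanner can filter the
        package's recorded files for entries under that directory.
        """
        sboms = []
        for path in files(dist_name) or []:
            parts = path.parts
            if len(parts) >= 3 and parts[0].endswith(".dist-info") and parts[1] == "sboms":
                sboms.append(path)
        return sboms

    # e.g. find_sboms("numpy") once tools begin shipping PEP 770 metadata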

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
Linux

An Interactive-Speed Linux Computer Made of Only 3 8-Pin Chips (dmitry.gr) 35

Software engineer and longtime Slashdot reader Dmitry Grinberg (dmitrygr) shares a recent project they've been working on: "an interactive-speed Linux on a tiny board you can easily build with only 3 8-pin chips": There was a time when one could order a kit and assemble a computer at home. It would do just about what a contemporary store-bought computer could do. That time is long gone. Modern computers are made of hundreds of huge complex chips with no public datasheets and many hundreds of watts of power supplied to them over complex power delivery topologies. It does not help that modern operating systems require gigabytes of RAM, terabytes of storage, and always-on internet connectivity to properly spy on you. But what if one tried to fit a modern computer into a kit that could be easily assembled at home? What if the kit only had three chips, each with only 8 pins? Can it be done? Yes. The system runs a custom MIPS emulator written in ARMv6 assembly and includes a custom bootloader that supports firmware updates via FAT16-formatted SD cards. Clever pin-sharing hacks allow all components (RAM, SD, serial I/O) to work despite having only 6 usable I/O pins. Overclocked to up to 150MHz, the board boots into a full Linux shell in about a minute and performs at ~1.65MHz MIPS-equivalent speed.

It's not fast, writes Dmitry, but it's fully functional -- you can edit files, compile code, and even install Debian packages. A kit may be made available if a partner is found.
IT

Camera Makers Defend Proprietary RAW Formats Despite Open Standard Alternative (theverge.com) 65

Camera manufacturers continue to use different proprietary RAW file formats despite the 20-year existence of Adobe's open-source DNG (Digital Negative) format, creating ongoing compatibility challenges for photographers and software developers.

Major manufacturers including Sony, Canon, and Panasonic defended their proprietary formats as necessary for maintaining control over image processing. Sony's product team told The Verge their ARW format allows them "to maximize performance based on device characteristics such as the image sensor and image processing engine." Canon similarly claims proprietary formats enable "optimum processing during image development."

The Verge argues that this fragmentation forces editing software to specifically support each manufacturer's format and every new camera model -- creating delays for early adopters when new cameras launch. Each new device requires "measuring sensor characteristics such as color and noise," said Adobe's Eric Chan.

For what it's worth, smaller manufacturers like Ricoh, Leica, and Sigma have adopted DNG, which streamlines workflow by containing metadata directly within a single file rather than requiring separate XMP sidecar files.
AI

Google's NotebookLM AI Can Now 'Discover Sources' For You 6

Google's NotebookLM has added a new "Discover sources" feature that allows users to describe a topic and have the AI find and curate relevant sources from the web -- eliminating the need to upload documents manually. "When you tap the Discover button in NotebookLM, you can describe the topic you're interested in, and NotebookLM will bring back a curated collection of relevant sources from the web," says Google software engineer Adam Bignell. Click to add those sources to your notebook; "it's a fast and easy way to quickly grasp a new concept or gather essential reading on a topic." PCMag reports: You can still add your files. NotebookLM can ingest PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides and summarize, transcribe, narrate, or convert into FAQs and study guides. "Discover sources" helps incorporate information you may not have saved. [...] The imported sources stay within the notebook you created. You can read the entire original document, ask questions about it via chat, or apply other NotebookLM features to it.

Google started rolling out both features on Wednesday. It should be available for all users in about "a week or so." For those concerned about privacy, Google says, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
There's also an "I'm Feeling Curious" button (a reference to its iconic "I'm feeling lucky" search button) that generates sources on a random topic you might find interesting.
Oracle

Oracle Tells Clients of Second Recent Hack, Log-In Data Stolen 16

An anonymous reader shares a report: Oracle has told customers that a hacker broke into a computer system and stole old client log-in credentials, according to two people familiar with the matter. It's the second cybersecurity breach that the software company has acknowledged to clients in the last month.

Oracle staff informed some clients this week that the attacker gained access to usernames, passkeys and encrypted passwords, according to the people, who spoke on condition that they not be identified because they're not authorized to discuss the matter. Oracle also told them that the FBI and cybersecurity firm CrowdStrike are investigating the incident, according to the people, who added that the attacker sought an extortion payment from the company. Oracle told customers that the intrusion is separate from another hack that the company flagged to some health-care customers last month, the people said.
Microsoft

Microsoft Pulls Back on Data Centers From Chicago To Jakarta 21

Microsoft has pulled back on data center projects around the world, suggesting the company is taking a harder look at its plans to build the server farms powering artificial intelligence and the cloud. From a report: The software company has recently halted talks for, or delayed development of, sites in Indonesia, the UK, Australia, Illinois, North Dakota and Wisconsin, according to people familiar with the situation. Microsoft is widely seen as a leader in commercializing AI services, largely thanks to its close partnership with OpenAI. Investors closely track Microsoft's spending plans to get a sense of long-term customer demand for cloud and AI services.

It's hard to know how much of the company's data center pullback reflects expectations of diminished demand versus temporary construction challenges, such as shortages of power and building materials. Some investors have interpreted signs of retrenchment as an indication that projected purchases of AI services don't justify Microsoft's massive outlays on server farms. Those concerns have weighed on global tech stocks in recent weeks, particularly chipmakers like Nvidia, which suck up a significant share of data center budgets.
AI

Vibe Coded AI App Generates Recipes With Very Few Guardrails 76

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice where people, some with limited coding experience, rapidly develop software with AI-assisted coding tools without overthinking how efficient the code is as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as are recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous. In a March 20 blog on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he has not written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna," and generate an image of the finished dish along with a step-by-step recipe, which can use ElevenLabs' AI-generated voice to narrate the instructions so the user doesn't have to interact with a device with their tomato-sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and Elevenlabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the open AI Ruby library, it quickly fell back to a raw ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kind of recipes are and are not allowed, and that the site kept those results and showed them to other users.
Power

Open-Source Tool Designed To Throttle PC and Server Performance Based On Electricity Pricing (tomshardware.com) 56

Robotics and machine learning engineer Naveen Kul developed WattWise, a lightweight open-source CLI tool that monitors power usage via smart plugs and throttles system performance based on electricity pricing and peak hours. Tom's Hardware reports: The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later. Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he desired. He already had a terminal window running monitoring tools like htop, nvtop, and nload, and decided to take matters into his own hands rather than dealing with yet another app.

Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance. Naveen's power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)

Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop -- even one designed and used for gaming. WattWise optimizes the system's clock speed based on the current system load, power consumption as reported by the smart plug, and the time -- with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
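The clock-management half of WattWise hasn't been published yet, so the following is only a hypothetical Python sketch of the loop described above: a proportional-integral controller that nudges a CPU frequency cap so that measured wall power tracks a target that drops during peak-price hours. The gains, peak window, wattage targets, and frequency range are invented for illustration:

    class PowerAwareGovernor:
        """Toy PI controller: pick a CPU frequency cap from smart-plug wattage."""

        def __init__(self, kp=1_000.0, ki=50.0, f_min_khz=1_500_000, f_max_khz=3_700_000):
            self.kp, self.ki = kp, ki              # illustrative gains (kHz per watt)
            self.f_min, self.f_max = f_min_khz, f_max_khz
            self.integral = 0.0

        @staticmethod
        def target_watts(hour: int) -> float:
            # Assumed Time-of-Use window: cap draw harder from 16:00 to 21:00.
            return 600.0 if 16 <= hour < 21 else 1200.0

        def step(self, measured_watts: float, hour: int, current_khz: int) -> int:
            error = self.target_watts(hour) - measured_watts   # + = headroom, - = over budget
            self.integral = max(-20_000.0, min(20_000.0, self.integral + error))  # anti-windup
            delta_khz = self.kp * error + self.ki * self.integral
            new_khz = int(current_khz + delta_khz)
            return max(self.f_min, min(self.f_max, new_khz))

    # On Linux the returned cap could be applied by writing to
    # /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq (values in kHz),
    # with the wattage reading coming from the smart plug via Home Assistant.
    gov = PowerAwareGovernor()
    print(gov.step(measured_watts=1400.0, hour=18, current_khz=3_700_000))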
A blog post with more information is available here.

WattWise is also available on GitHub.
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses 95

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
AI

95% of Code Will Be AI-Generated Within Five Years, Microsoft CTO Says 130

Microsoft Chief Technology Officer Kevin Scott has predicted that AI will generate 95% of code within five years. Speaking on the 20VC podcast, Scott said AI would not replace software engineers but transform their role. "It doesn't mean that the AI is doing the software engineering job.... authorship is still going to be human," Scott said.

According to Scott, developers will shift from writing code directly to guiding AI through prompts and instructions. "We go from being an input master (programming languages) to a prompt master (AI orchestrator)," he said. Scott said the current AI systems have significant memory limitations, making them "awfully transactional," but predicted improvements within the next year.
The Almighty Buck

Zelle Is Shutting Down Its App (techcrunch.com) 18

An anonymous reader quotes a report from TechCrunch: Zelle is shutting down its stand-alone app on Tuesday, according to a company blog post. This news might be alarming if you're one of the over 150 million customers in the U.S. who use Zelle for person-to-person payments. But only about 2% of transactions take place via Zelle's app, which is why the company is discontinuing its stand-alone app.

Most consumers access Zelle via their bank, which then allows them to send money to their phone contacts. Zelle users who relied on the stand-alone app will have to re-enroll in the service through another financial institution. Given the small user base of the Zelle app, it makes sense why the company would decide to get rid of it -- maintaining an app takes time and money, especially one where people's financial information is involved.

Mozilla

Mozilla To Launch 'Thunderbird Pro' Paid Services (techspot.com) 36

Mozilla plans to introduce a suite of paid professional services for its open-source Thunderbird email client, transforming the application into a comprehensive communication platform. Dubbed "Thunderbird Pro," the package aims to compete with established ecosystems like Gmail and Office 365 while maintaining Mozilla's commitment to open-source software.

The Pro tier will include four core services: Thunderbird Appointment for streamlined scheduling, Thunderbird Send for file sharing (reviving the discontinued Firefox Send), Thunderbird Assist offering AI capabilities powered by Flower AI, and Thundermail, a new email service built on Stalwart's open-source stack. Initially, Thunderbird Pro will be available free to "consistent community contributors," with paid access for other users.

Mozilla Managing Director Ryan Sipes indicated the company may consider limited free tiers once the service establishes a sustainable user base. This initiative follows Mozilla's 2023 announcement about "remaking" Thunderbird's architecture to modernize its aging codebase, addressing user losses to more feature-rich competitors.
The Courts

Donkey Kong Champion Wins Defamation Case Against Australian YouTuber Karl Jobst (theguardian.com) 58

An anonymous reader quotes a report from The Guardian: A professional YouTuber in Queensland has been ordered to pay $350,000 plus interest and costs to the former world record score holder for Donkey Kong, after the Brisbane district court found the YouTuber had defamed him "recklessly" with false claims of a link between a lawsuit and another YouTuber's suicide. William "Billy" Mitchell, an American gamer who had held world records in Donkey Kong and Pac-Man going back to 1982, as recognized by the Guinness World Records and the video game database Twin Galaxies, brought the case against Karl Jobst, seeking $400,000 in general damages and $50,000 in aggravated damages.

Jobst, who makes videos about "speed running" (finishing games as fast as possible), as well as gaming records and cheating in games, made a number of allegations against Mitchell in a 2021 YouTube video. He accused Mitchell of cheating, and "pursuing unmeritorious litigation" against others who had also accused him of cheating, the court judgment stated. The court heard Mitchell was accused in 2017 of cheating in his Donkey Kong world records by using emulation software instead of original arcade hardware. Twin Galaxies investigated the allegation, and subsequently removed Mitchell's scores and banned him from participating in its competitions. The Guinness World Records disqualified Mitchell as a holder of all his records -- in both Donkey Kong and Pac-Man -- after the Twin Galaxies decision. The judgment stated that Jobst's 2021 video also linked the December 2020 suicide of another YouTuber, Apollo Legend, to "stress arising from [his] settlement" with Mitchell, and wrongly asserted that Apollo Legend had to pay Mitchell "a large sum of money."

AI

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.

"Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday.

MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.
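To make the "one protocol instead of N custom integrations" idea concrete, here is a rough, non-conforming Python sketch of the shape of an MCP-style exchange: a client asks a server which tools it exposes and then invokes one, as JSON-RPC 2.0 messages. A real server also implements the MCP initialization handshake, capability negotiation, and a stdio or HTTP transport; the tool and its output below are invented for illustration:

    import json

    # One invented tool; real servers describe theirs with JSON Schema so any
    # MCP-aware model can discover and call them without bespoke integration.
    TOOLS = {
        "get_weather": {
            "description": "Return current weather for a city (stub data).",
            "inputSchema": {"type": "object", "properties": {"city": {"type": "string"}}},
        }
    }

    def handle(request: dict) -> dict:
        method = request.get("method")
        params = request.get("params", {})
        req_id = request.get("id")
        if method == "tools/list":
            result = {"tools": [{"name": name, **meta} for name, meta in TOOLS.items()]}
        elif method == "tools/call" and params.get("name") == "get_weather":
            city = params.get("arguments", {}).get("city", "unknown")
            result = {"content": [{"type": "text", "text": f"Weather in {city}: 18°C, clear (stub)"}]}
        else:
            return {"jsonrpc": "2.0", "id": req_id,
                    "error": {"code": -32601, "message": "Method not found"}}
        return {"jsonrpc": "2.0", "id": req_id, "result": result}

    if __name__ == "__main__":
        print(json.dumps(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}), indent=2))
        print(json.dumps(handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                                 "params": {"name": "get_weather",
                                            "arguments": {"city": "Berlin"}}}), indent=2))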

Encryption

Gmail is Making It Easier For Businesses To Send Encrypted Emails To Anyone (theverge.com) 39

Google is rolling out a new encryption model for Gmail that allows enterprise users to send encrypted messages without requiring recipients to use custom software or exchange encryption certificates. The feature, launching in beta today, initially supports encrypted emails within the same organization, with plans to expand to all Gmail inboxes "in the coming weeks" and third-party email providers "later this year."

Unlike Gmail's current S/MIME-based encryption, the new system lets users simply toggle "additional encryption" in the email draft window. Non-Gmail recipients will receive a link to access messages through a guest Google Workspace account, while Gmail users will see automatically decrypted emails in their inbox.
Programming

'There is No Vibe Engineering' 121

Software engineer Sergey Tselovalnikov weighs in on the new hype: The term caught on and Twitter quickly flooded with posts about how AI has radically transformed coding and will soon replace all software engineers. While AI undeniably impacts the way we write code, it hasn't fundamentally changed our role as engineers. Allow me to explain.

[...] Vibe coding is interacting with the codebase via prompts. As the implementation is hidden from the "vibe coder", all the engineering concerns will inevitably get ignored. Many of the concerns are hard to express in a prompt, and many of them are hard to verify by only inspecting the final artifact. Historically, all engineering practices have tried to shift all those concerns left -- to the earlier stages of development when they're cheap to address. Yet with vibe coding, they're shifted very far to the right -- when addressing them is expensive.

The question of whether an AI system can perform the complete engineering cycle and build and evolve software the same way a human can remains open. However, there are no signs of it being able to do so at this point, and if it one day happens, it won't have anything to do with vibe coding -- at least the way it's defined today.

[...] It is possible that there'll be a future where software is built from vibe-coded blocks, but the work of designing software able to evolve and scale doesn't go away. That's not vibe engineering -- that's just engineering, even if the coding part of it will look a bit different.
