Businesses

Amazon Pays $650 Million For Nuclear-Powered Data Center 68

Michelle Lewis reports via Electrek: One of the US's largest nuclear power plants will directly power cloud service provider Amazon Web Services' new data center. Power provider Talen Energy sold its data center campus, Cumulus Data Assets, to Amazon Web Services for $650 million. Amazon will develop an up to 960-megawatt (MW) data center at the Salem Township site in Luzerne County, Pennsylvania. The 1,200-acre campus is directly powered by an adjacent 2.5 gigawatt (GW) nuclear power station also owned by Talen Energy.

The 1,075-acre Susquehanna Steam Electric Station is the sixth-largest nuclear power plant in the US. It's been online since 1983 and produces 63 million kilowatt hours per day. The plant has two General Electric boiling water reactors within a Mark II containment building that are licensed through 2042 and 2044. According to Talen Energy's investor presentation, Talen will supply fixed-price nuclear power to Amazon's new data center as it's built. Amazon has minimum contractual power commitments that ramp up in 120 MW increments over several years. The cloud service giant has a one-time option to cap commitments at 480 MW and two 10-year extension options tied to nuclear license renewals.
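
For scale, here's a quick back-of-envelope check of those figures using only the numbers quoted above (a sketch; the assumption that the plant runs continuously at roughly nameplate capacity is ours):

```python
# Back-of-envelope check of the figures quoted above (a sketch, not Talen/Amazon data).
PLANT_CAPACITY_MW = 2_500          # "2.5 gigawatt" nuclear station (rounded)
QUOTED_OUTPUT_KWH_PER_DAY = 63e6   # "63 million kilowatt hours per day"
DATACENTER_MAX_MW = 960            # planned data center load
RAMP_STEP_MW = 120                 # contractual ramp increment
CAP_OPTION_MW = 480                # one-time cap option

# A plant running continuously at nameplate capacity:
implied_kwh_per_day = PLANT_CAPACITY_MW * 1_000 * 24      # kW * hours
print(f"2.5 GW * 24 h = {implied_kwh_per_day / 1e6:.0f} million kWh/day "
      f"(quoted: {QUOTED_OUTPUT_KWH_PER_DAY / 1e6:.0f} million; the gap is rounding/capacity factor)")

# Share of the plant's output the full data center would draw:
print(f"Data center share of plant capacity: {DATACENTER_MAX_MW / PLANT_CAPACITY_MW:.0%}")

# Ramp schedule implied by 120 MW steps:
print(f"Steps to full 960 MW: {DATACENTER_MAX_MW // RAMP_STEP_MW}, "
      f"steps to the 480 MW cap: {CAP_OPTION_MW // RAMP_STEP_MW}")
```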
Open Source

Why Desktop Linux Is Finally Growing In Popularity (zdnet.com) 188

According to the latest data from StatCounter, Linux's market share has reached 4.03% -- surging by an additional 1% in the last eight months. What's the reason behind this recent growth? "That's a good question," writes ZDNet's Steven Vaughan-Nichols. "While Windows is the king of the hill with 72.13% and MacOS comes in a distant second at 15.46%, it's clear that Linux is making progress." An anonymous Slashdot reader shares the five reasons why Vaughan-Nichols thinks it's growing:

1. Microsoft isn't that interested in Windows
If you think Microsoft is all about the desktop and Windows, think again. Microsoft's profits these days come from its Azure cloud and Software-as-a-Service (SaaS), Microsoft 365 in particular. Microsoft doesn't want you to buy Windows; the Redmond powerhouse wants you to subscribe to Windows 365 Cloud PC. And, by the way, you can run Windows 365 Cloud PC on Macs, Chromebooks, Android tablets, iPads, and, oh yes, Linux desktops.

2. Linux gaming, thanks to Steam, is also growing
Gaming has never been a strong suit for Linux, but Linux gamers are a slowly growing group. I suspect that's because Steam, the most popular Linux gaming platform, also has the lion's share of the gaming distribution market.

3. Users are finally figuring out that some Linux distros are easy to use
Even now, you'll find people who insist that Linux is hard to master. True, if you want to be a Linux power user, Linux will challenge you. But, if all you want to do is work and play, many Linux distributions are suitable for beginners. For example, Linux Mint is simple to use, and it's a great end-user operating system for everyone and anyone.

4. Finding and installing Linux desktop software is easier than ever
While some Linux purists dislike containerized application installation programs such as Flatpak, Snap, and AppImage, developers love them. Why? They make it simple to write applications for Linux that don't need to be tuned just right for all the numerous Linux distributions. For users, that means they get more programs to choose from, and they don't need to worry about finicky installation details.

5. The Linux desktop is growing in popularity in India
India is now the world's fifth-largest economy, and it's still growing. Do you know what else is growing in India? Desktop Linux. In India, Windows is still the number one operating system with 70.37%, but number two is Linux, with 15.23%. MacOS is way back in fourth place with 3.11%. I suspect this is the case because India's economy is largely based on technology. Where you find serious programmers, you find Linux users.

Cloud

Amazon Cancels Fees for Customers Moving To Rival Cloud Services (bloomberg.com) 9

Amazon's cloud services division is halting fees it has long charged customers that switch to a rival provider -- following in the steps of Google, which recently announced it was ending the practice. From a report: Amazon Web Services will no longer charge customers who want to extract all of their data from the company's servers and move them to another service, AWS Vice President Robert Kennedy said in a blog post on Tuesday. "Beginning today, customers globally are now entitled to free data transfers out to the internet if they want to move to another IT provider," Kennedy said.
Cloud

Proposed Class Action Alleges Apple's Cloud Storage is an 'Illegal Monopoly' (thehill.com) 169

"Apple faces a proposed class action lawsuit alleging the company holds an illegal monopoly over digital storage for its customers," reports the Hill: The suit, filed Friday, claims "surgical" restraints prevent customers from effectively using any service except its iCloud storage system. iCloud is the only service that can host certain data from the company's phones, tablets and computers, including application data and device settings. Plaintiffs allege the practice has "unlawfully 'tied'" the devices and iCloud together... "As a result of this restraint, would-be cloud competitors are unable to offer Apple's device holders a full-service cloud-storage solution, or even a pale comparison."
The suit argues that there are "no technological or security justifications for this limitation on consumer choice," according to PC Magazine.

The class action's website argues that "Consumers may have paid higher prices than they allegedly would have in a competitive market."
AI

How AI is Taking Water From the Desert (msn.com) 108

Microsoft built two datacenters west of Phoenix, with plans for seven more (serving, among other companies, OpenAI). "Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late," writes The Atlantic. "One semiconductor analyst called this 'the largest infrastructure buildout that humanity has ever seen.'"

But is this part of a concerning trend? Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. That might be a burden in the best of times. As of 2023, it seemed absurd. Phoenix had just endured its hottest summer ever, with 55 days of temperatures above 110 degrees. The weather strained electrical grids and compounded the effects of the worst drought the region has faced in more than a millennium. The Colorado River, which provides drinking water and hydropower throughout the region, has been dwindling. Farmers have already had to fallow fields, and a community on the eastern outskirts of Phoenix went without tap water for most of the year... [T]here were dozens of other facilities I could visit in the area, including those run by Apple, Amazon, Meta, and, soon, Google. Not too far from California, and with plenty of cheap land, Greater Phoenix is among the fastest-growing hubs in the U.S. for data centers....

Microsoft, the biggest tech firm on the planet, has made ambitious plans to tackle climate change. In 2020, it pledged to be carbon-negative (removing more carbon than it emits each year) and water-positive (replenishing more clean water than it consumes) by the end of the decade. But the company also made an all-encompassing commitment to OpenAI, the most important maker of large-scale AI models. In so doing, it helped kick off a global race to build and deploy one of the world's most resource-intensive digital technologies. Microsoft operates more than 300 data centers around the world, and in 2021 declared itself "on pace to build between 50 and 100 new datacenters each year for the foreseeable future...."

Researchers at UC Riverside estimated last year... that global AI demand could cause data centers to suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027. A separate study from a university in the Netherlands, this one peer-reviewed, found that AI servers' electricity demand could grow, over the same period, to be on the order of 100 terawatt hours per year, about as much as the entire annual consumption of Argentina or Sweden... [T]ensions over data centers' water use are cropping up not just in Arizona but also in Oregon, Uruguay, and England, among other places in the world.

The article points out that Microsoft "is transitioning some data centers, including those in Arizona, to designs that use less or no water, cooling themselves instead with giant fans." And an analysis (commissioned by Microsoft) on the impact of one building said it would use about 56 million gallons of drinking water each year, equivalent to the amount used by 670 families, according to the article. "In other words, a campus of servers pumping out ChatGPT replies from the Arizona desert is not about to make anyone go thirsty."
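
For a sense of what that comparison implies per household, here's the arithmetic on the figures quoted above (a sketch assuming US gallons and a 365-day year):

```python
# Quick arithmetic on the figures quoted above (assumptions: US gallons, 365-day year).
ANNUAL_GALLONS = 56_000_000   # one building's projected drinking-water use per year
FAMILIES = 670                # the analysis' stated equivalent number of families

per_family_year = ANNUAL_GALLONS / FAMILIES
per_family_day = per_family_year / 365
print(f"{per_family_year:,.0f} gallons per family per year "
      f"is about {per_family_day:,.0f} gallons per family per day")
# -> roughly 83,600 gallons/year, or about 230 gallons/day per family
```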
AMD

Huawei's New CPU Matches Zen 3 In Single-Core Performance (tomshardware.com) 77

Long-time Slashdot reader AmiMoJo quotes Tom's Hardware: A Geekbench 6 result features what is likely the first-ever look at the single-core performance of the Taishan V120, developed by Huawei's HiSilicon subsidiary (via @Olrak29_ on X). The single-core score indicates that Taishan V120 cores are roughly on par with AMD's Zen 3 cores from late 2020, which could mean Huawei's technology isn't that far behind cutting-edge Western chip designers.

The Taishan V120 core was first spotted in Huawei's Kirin 9000s smartphone chip, which uses four of the cores alongside two efficiency-focused Arm Cortex A510 cores. Since Kirin 9000s chips are produced using SMIC's second-generation 7nm node (which may make it illegal to sell internationally according to U.S. lawmakers), it would also seem likely that the Taishan V120 core tested in Geekbench 6 is also made on the second-generation 7nm node.

The benchmark result doesn't really say much about what the actual CPU is, with the only hint being 'Huawei Cloud OpenStack Nova.' This implies it's a Kunpeng server CPU, which may be the Kunpeng 916, 920, or 930. While we can only guess which one it is, it's almost certain to be the 930 given the high single-core performance shown in the result. By contrast, the few Geekbench 5 results for the Kunpeng 920 show it performing well behind AMD's first-generation Epyc Naples from 2017.

Programming

Rust Survey Finds Linux and VS Code Users, More WebAssembly Targeting (rust-lang.org) 40

Rust's official survey team released results from their 8th annual survey "focused on gathering insights and feedback from Rust users". In terms of operating systems used by Rustaceans, the situation is very similar to the results from 2022, with Linux being the most popular choice of Rust users [69.7%], followed by macOS [33.5%] and Windows [31.9%], which have a very similar share of usage. Rust programmers target a diverse set of platforms with their Rust programs, even though the most popular target by far is still a Linux machine [85.4%]. We can see a slight uptick in users targeting WebAssembly [27.1%], embedded and mobile platforms, which speaks to the versatility of Rust.

We cannot of course forget the favourite topic of many programmers: which IDE (developer environment) do they use. Visual Studio Code still seems to be the most popular option [61.7%], with RustRover (which was released last year) also gaining some traction [16.4%].

The site ITPro spoke to James Governor, co-founder of the developer-focused analyst firm RedMonk, who said Rust's usage is "steadily increasing", pointing to its adoption among hyperscalers and cloud companies and in new infrastructure projects. "Rust is not crossing over yet as a general-purpose programming language, as Python did when it overtook Java, but it's seeing steady growth in adoption, which we expect to continue. It seems like a sustainable success story at this point."

But InfoWorld writes that "while the use of Rust language by professional programmers continues to grow, Rust users expressed concerns about the language becoming too complex and the low level of Rust usage in the tech industry." Among the 9,374 respondents who shared their main worries for the future of Rust, 43% were most concerned about Rust becoming too complex, a five percentage point increase from 2022; 42% were most concerned about low usage of Rust in the tech industry; and 32% were most concerned about Rust developers and maintainers not being properly supported, a six percentage point increase from 2022. Further, the percentage of respondents who were not at all concerned about the future of Rust fell, from 30% in 2022 to 18% in 2023.
Programming

Stack Overflow To Charge LLM Developers For Access To Its Coding Content (theregister.com) 32

Stack Overflow has launched an API that will require all AI models trained on its coding question-and-answer content to attribute sources linking back to its posts. And it will cost money to use the site's content. From a report: "All products based on models that consume public Stack Overflow data are required to provide attribution back to the highest relevance posts that influenced the summary given by the model," it confirmed in a statement. The Overflow API is designed to act as a knowledge database to help developers build more accurate and helpful code-generation models. Google announced it was using the service to access relevant information from Stack Overflow via the API and integrate the data with its latest Gemini models, and for its cloud storage console.
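
What the attribution requirement could look like in practice is sketched below; the data shapes and helper function are hypothetical illustrations, not part of the OverflowAPI itself:

```python
# Hypothetical sketch of the attribution requirement described above: attach links to the
# highest-relevance Stack Overflow posts that influenced a model-generated answer.
# The data shapes and function here are illustrative only, not part of the OverflowAPI.
from dataclasses import dataclass

@dataclass
class SourcePost:
    title: str
    url: str          # canonical link back to the Stack Overflow post
    relevance: float  # however the retrieval layer scores relevance

def with_attribution(answer: str, sources: list[SourcePost], top_n: int = 3) -> str:
    """Append attribution links for the highest-relevance source posts."""
    top = sorted(sources, key=lambda s: s.relevance, reverse=True)[:top_n]
    lines = [answer, "", "Sources (Stack Overflow):"]
    lines += [f"- {s.title}: {s.url}" for s in top]
    return "\n".join(lines)
```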
Government

Government Watchdog Hacked US Federal Agency To Stress-Test Its Cloud Security (techcrunch.com) 21

In a series of tests using fake data, a U.S. government watchdog was able to steal more than 1GB of seemingly sensitive personal data from the cloud systems of the U.S. Department of the Interior. The experiment is detailed in a new report by the Department of the Interior's Office of the Inspector General (OIG), published last week. TechCrunch reports: The goal of the report was to test the security of the Department of the Interior's cloud infrastructure, as well as its "data loss prevention solution," software that is supposed to protect the department's most sensitive data from malicious hackers. The tests were conducted between March 2022 and June 2023, the OIG wrote in the report. The Department of the Interior manages the country's federal land, national parks and a budget of billions of dollars, and hosts a significant amount of data in the cloud. According to the report, in order to test whether the Department of the Interior's cloud infrastructure was secure, the OIG used an online tool called Mockaroo to create fake personal data that "would appear valid to the Department's security tools."
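
For illustration, here's a minimal sketch of generating that kind of synthetic personal data with the Python standard library; the field choices are our assumptions, and this is not Mockaroo's output format:

```python
# Illustrative only: generating synthetic "personal" records like those the OIG created
# with Mockaroo for its exfiltration tests. Field choices are assumptions; no real data.
import csv, random, string

FIRST = ["Alex", "Jordan", "Morgan", "Taylor", "Casey"]
LAST = ["Smith", "Garcia", "Lee", "Johnson", "Nguyen"]

def fake_ssn() -> str:
    # Pattern-only: looks like an SSN to a scanning tool, but is randomly generated.
    return f"{random.randint(100, 899):03d}-{random.randint(10, 99):02d}-{random.randint(1000, 9999):04d}"

def fake_record() -> dict:
    first, last = random.choice(FIRST), random.choice(LAST)
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.gov",
        "ssn": fake_ssn(),
        "employee_id": "".join(random.choices(string.digits, k=8)),
    }

with open("synthetic_pii.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fake_record().keys())
    writer.writeheader()
    writer.writerows(fake_record() for _ in range(1000))
```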

The OIG team then used a virtual machine inside the Department's cloud environment to imitate "a sophisticated threat actor" inside of its network, and subsequently used "well-known and widely documented techniques to exfiltrate data." "We used the virtual machine as-is and did not install any tools, software, or malware that would make it easier to exfiltrate data from the subject system," the report read. The OIG said it conducted more than 100 tests in a week, monitoring the government department's "computer logs and incident tracking systems in real time," and none of its tests were detected or prevented by the department's cybersecurity defenses.

"Our tests succeeded because the Department failed to implement security measures capable of either preventing or detecting well-known and widely used techniques employed by malicious actors to steal sensitive data," said the OIG's report. "In the years that the system has been hosted in a cloud, the Department has never conducted regular required tests of the system's controls for protecting sensitive data from unauthorized access." That's the bad news: The weaknesses in the Department's systems and practices "put sensitive [personal information] for tens of thousands of Federal employees at risk of unauthorized access," read the report. The OIG also admitted that it may be impossible to stop "a well-resourced adversary" from breaking in, but with some improvements, it may be possible to stop that adversary from exfiltrating the sensitive data.

AI

StarCoder 2 Is a Code-Generating AI That Runs On Most GPUs (techcrunch.com) 44

An anonymous reader quotes a report from TechCrunch: Perceiving the demand for alternatives, AI startup Hugging Face several years ago teamed up with ServiceNow, the workflow automation platform, to create StarCoder, an open source code generator with a less restrictive license than some of the others out there. The original came online early last year, and work has been underway on a follow-up, StarCoder 2, ever since. StarCoder 2 isn't a single code-generating model, but rather a family. Released today, it comes in three variants, the first two of which can run on most modern consumer GPUs: a 3-billion-parameter (3B) model trained by ServiceNow; a 7-billion-parameter (7B) model trained by Hugging Face; and a 15-billion-parameter (15B) model trained by Nvidia, the newest supporter of the StarCoder project. (Note that "parameters" are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating code.)

Like most other code generators, StarCoder 2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language. Trained with 4x more data than the original StarCoder (67.5 terabytes versus 6.4 terabytes), StarCoder 2 delivers what Hugging Face, ServiceNow and Nvidia characterize as "significantly" improved performance at lower costs to operate. StarCoder 2 can be fine-tuned "in a few hours" using a GPU like the Nvidia A100 on first- or third-party data to create apps such as chatbots and personal coding assistants. And, because it was trained on a larger and more diverse data set than the original StarCoder (~619 programming languages), StarCoder 2 can make more accurate, context-aware predictions -- at least hypothetically.
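
For readers who want to try the smallest variant, the models are published on Hugging Face and can be loaded with the transformers library. A minimal sketch follows (the checkpoint name is assumed from Hugging Face's published naming and should be verified; even the 3B weights need several gigabytes of memory):

```python
# Minimal sketch of loading the 3B StarCoder 2 variant with Hugging Face transformers.
# The checkpoint name is an assumption based on Hugging Face's naming; verify before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Ask the model to continue an unfinished function definition.
prompt = "def read_json(path):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```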

[I]s StarCoder 2 really superior to the other code generators out there -- free or paid? Depending on the benchmark, it appears to be more efficient than one of the versions of Code Llama, Code Llama 33B. Hugging Face says that StarCoder 2 15B matches Code Llama 33B on a subset of code completion tasks at twice the speed. It's not clear which tasks; Hugging Face didn't specify. StarCoder 2, as an open source collection of models, also has the advantage of being able to deploy locally and "learn" a developer's source code or codebase -- an attractive prospect to devs and companies wary of exposing code to a cloud-hosted AI. Hugging Face, ServiceNow and Nvidia also make the case that StarCoder 2 is more ethical -- and less legally fraught -- than its rivals. [...] As opposed to code generators trained using copyrighted code (GitHub Copilot, among others), StarCoder 2 was trained only on data under license from the Software Heritage, the nonprofit organization providing archival services for code. Ahead of StarCoder 2's training, BigCode, the cross-organizational team behind much of StarCoder 2's roadmap, gave code owners a chance to opt out of the training set if they wanted. As with the original StarCoder, StarCoder 2's training data is available for developers to fork, reproduce or audit as they please.
StarCoder 2's license may still be a roadblock for some. "StarCoder 2 is licensed under the BigCode Open RAIL-M 1.0, which aims to promote responsible use by imposing 'light touch' restrictions on both model licensees and downstream users," writes TechCrunch's Kyle Wiggers. "While less constraining than many other licenses, RAIL-M isn't truly 'open' in the sense that it doesn't permit developers to use StarCoder 2 for every conceivable application (medical advice-giving apps are strictly off limits, for example). Some commentators say RAIL-M's requirements may be too vague to comply with in any case -- and that RAIL-M could conflict with AI-related regulations like the EU AI Act."
Businesses

Nvidia's Free-tier GeForce Now Will Soon Show Ads While You're Waiting To Play (theverge.com) 34

Nvidia's completely free, no-strings-attached trial of its cloud gaming service GeForce Now is about to be very slightly less of a deal. Nvidia says users will now start seeing ads. From a report: They're only for the free tier -- not Priority or Ultimate -- and even then, it sounds like they won't interrupt your gameplay. "Free users will start to see up to two minutes of ads while waiting in queue to start a gaming session," writes Nvidia spokesperson Stephanie Ngo. Currently, the free tier does often involve waiting in line for a remote computer to free up before every hour of free gameplay -- now, I guess there'll be a few ads too. Nvidia says the ads should help pay for the free tier of service, and that it expects the change "will reduce average wait times for free users over time."
Cloud

Google Steps Up Microsoft Criticism, Warns of Rival's Monopoly in Cloud (reuters.com) 110

Alphabet's Google Cloud on Monday ramped up its criticism of Microsoft's cloud computing practices, saying its rival is seeking a monopoly that would harm the development of emerging technologies such as generative AI. From a report: "We worry about Microsoft wanting to flex their decade-long practices where they had a lot of monopoly on the on-premise software before and now they are trying to push that into cloud now," Google Cloud Vice President Amit Zavery said in an interview. "So they are creating this whole walled garden, which is completely controlled and owned by Microsoft, and customers who want to do any of this stuff, you have to go to Microsoft only," he said.

"If Microsoft cloud doesn't remain open, we will have issues and long-term problems, even in next generation technologies like AI as well, because Microsoft is forcing customers to go to Azure in many ways," Zavery said, referring to Microsoft's cloud computing platform. He urged antitrust regulators to act. "I think regulators need to provide some kind of guidance as well as maybe regulations which prevent the way Microsoft is building the Azure cloud business, not allow your on-premise monopoly to bring it into the cloud monopoly," Zavery said.

Microsoft

Microsoft Strikes Deal With Mistral in Push Beyond OpenAI (ft.com) 13

Microsoft has struck a deal with French AI startup Mistral as it seeks to broaden its involvement in the fast-growing industry beyond OpenAI. From a report: The US tech giant will provide the 10-month-old Paris-based company with help in bringing its AI models to market. Microsoft will also take a minor stake in Mistral, although the financial details have not been disclosed. The partnership makes Mistral the second company to provide commercial language models available on Microsoft's Azure cloud computing platform. Microsoft has already invested about $13 billion in San Francisco-based OpenAI, an alliance that is being reviewed by competition watchdogs in the US, EU and UK. Other Big Tech rivals, such as Google and Amazon, are also investing heavily in building generative AI -- software that can produce text, images and code in seconds -- which analysts believe has the capacity to shake up industries across the world. WSJ adds: On Monday, Mistral plans to announce a new AI model, called Mistral Large, that Mistral CEO Arthur Mensch said can perform some reasoning tasks comparably with GPT-4, OpenAI's most advanced language model to date, and Gemini Ultra, Google's new model. Mensch said his new model cost less than 20 million euros, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company's biggest models cost "much more than" $50 million to $100 million.
Moon

Moon Landing's Payloads Include Archive of Human Knowledge, Lunar Data Center Test, NFTs (medium.com) 75

In 2019 a SpaceX Falcon 9 rocket launched an Israeli spacecraft carrying a 30-million-page archive of human civilization to the moon. Unfortunately, that spacecraft crashed. But thanks to this week's moon landing by the Odysseus lander, there's now a 30-million-page "Lunar Library" on the moon — according to a Medium post by the Arch Mission Foundation.

"This historic moment secures humanity's cultural heritage and knowledge in an indestructible archive built to last for up to billions of years." Etched onto thin sheets of nickel, called NanoFiche, the Lunar Library is practically indestructible and can withstand the harsh conditions of space... Some of the notable content includes:


The Wikipedia. The entire English Wikipedia containing over 6 million articles on every branch of knowledge.
Project Gutenberg. Portions of Project Gutenberg's library of over 70,000 free eBooks containing some of our most treasured literature.
The Long Now Foundation's Rosetta Project archive of over 7,000 human languages and The Panlex datasets.
Selections from the Internet Archive's collections of books and important documents and data sets.
The SETI Institute's Earthling Project, featuring a musical compilation of 10,000 vocal submissions representing humanity united
The Arch Lunar Art Archive containing a collection of works from global contemporary and digital artists in 2022, recorded as NFTs.
David Copperfield's Magic Secrets — the secrets to all his greatest illusions — including how he will make the Moon disappear in the near future.
The Arch Mission Primer — which teaches a million concepts with images and words in 5 languages.
The Arch Mission Private Library — containing millions of pages as well as books, documents and articles on every subject, including a broad range of fiction and non-fiction, textbooks, periodicals, audio recordings, videos, historical documents, software sourcecode, data sets, and more.
The Arch Mission Vaults — private collections, including collections from our advisors and partners, and a collection of important texts and images from all the world's religions including the great religions and indigenous religions from around the world, collections of books, photos, and a collection of music by leading recording artists, and much more content that may be revealed in the future...


We also want to recognize our esteemed advisors, and our many content partners and collections including the Wikimedia Foundation, the Long Now Foundation, The SETI Institute Earthling Project, the Arch Lunar Art Archive project, Project Gutenberg, the Internet Archive, and the many donors who helped make the Lunar Library possible through their generous contributions. This accomplishment would not have happened without the collaborative support of so many...

We will continue to send backups of our important knowledge and cultural heritage — placing them on the surface of the Earth, in caves and deep underground bunkers and mines, and around the solar system as well. This is a mission that continues as long as humanity endures, and perhaps even long after we are gone, as a gift for whoever comes next.

Space.com has a nice rundown of the other new payloads that just landed on the moon. Some highlights:
  • "Cloud computing startup Lonestar's Independence payload is a lunar data center test mission for data storage and transmission from the lunar surface."
  • LRA is a small hemisphere of light-reflectors built to serve as a precision landmark to "allow spacecraft to ping it with lasers to help them determine their precise distance..." (see the time-of-flight sketch after this list).
  • ROLSES is a radio spectrometer for measuring the electron density near the lunar surface, "and how it may affect radio observatories, as well as observing solar and planetary radio waves and other phenomena."
  • "Artist Jeff Koons is sending 125 miniature stainless steel Moon Phase sculptures, each honoring significant human achievements across cultures and history, to be displayed on the moon in a cube. "

Cloud

Service Mesh Linkerd Moves Its Stable Releases Behind a Paywall (techtarget.com) 13

TechTarget notes it was Linkerd's original developers who coined the term "service mesh" — describing their infrastructure layer for communication between microservices.

But "There has to be some way of connecting the businesses that are being built on top of Linkerd back to funding the project," argues Buoyant CEO William Morgan. "If we don't do that, then there's no way for us to evolve this project and to grow it in the way that I think we all want."

And so, TechTarget reports... Beginning May 21, 2024, any company with more than 50 employees running Linkerd in production must pay Buoyant $2,000 per Kubernetes cluster per month to access stable releases of the project...
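
In concrete terms, that pricing works out as below (a sketch assuming flat per-cluster pricing with none of the discounts Buoyant mentions):

```python
# Simple cost model for the pricing described above (assumes flat per-cluster pricing
# and ignores the non-profit and high-volume discounts Buoyant says exist).
PRICE_PER_CLUSTER_PER_MONTH = 2_000  # USD

def annual_linkerd_cost(clusters: int, employees: int) -> int:
    """Stable-release cost per year for a company under the May 21, 2024 policy."""
    if employees <= 50:
        return 0  # companies at or under 50 employees are not charged
    return clusters * PRICE_PER_CLUSTER_PER_MONTH * 12

print(annual_linkerd_cost(clusters=5, employees=200))   # -> 120000
```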

The project's overall source code will remain available on GitHub, and edge releases (experimental early releases of code) will continue to be committed to open source. But the additional work Buoyant developers do to create stable releases, backporting minimal changes so they remain compatible with existing versions of Linkerd and fixing bugs with reliability guarantees, will only be available behind a paywall, Morgan said... Morgan said he is prepared for backlash from the community about this change. In the last section of a company blog post FAQ about the update, Morgan included a question that reads, in part, "Who can I yell at...?"

But industry watchers flatly pronounced the change a departure from open source. "By saying, 'Sorry but we can no longer afford to hand out a production-ready product as free open source code,' Buoyant has removed the open source character of this project," said Torsten Volk, an analyst at Enterprise Management Associates. "This goes far beyond the popular approach of offering a managed version of a product that may include some additional premium features for a fee while still providing customers with the option to use the more basic open source version in production." Open source developers outside Buoyant won't want to contribute to the project — and Buoyant's bottom line — without receiving production-ready code in return, Volk predicted.

Morgan conceded that these are potentially valid concerns and said he's open to finding a way to resolve them with contributors... "I don't think there's a legal argument there, but there's an unresolved tension there, similar to testing edge releases — that's labor just as much as contributing is. I don't have a great answer to that, but it's not unique to Buoyant or Linkerd."

And so, "Starting in May, if you want the latest stable version of the open source Linkerd to download and run, you will have to go with Buoyant's commercial distribution," according to another report (though "there are discounts for non-profits, high-volume use cases, and other unique needs.") The Cloud Native Computing Foundation manages the open source project. The copyright is held by the Linkerd authors themselves. Linkerd is licensed under the Apache 2.0 license.

Buoyant CEO William Morgan explained in an interview with TNS (The New Stack) that the changes in licensing are necessary to continue to ensure that Linkerd runs smoothly for enterprise users. Packaging the releases has also been demanding a lot of resources, perhaps even more than maintaining and advancing the core software itself, Morgan explained. He likened the approach to how Red Hat operates with Linux, which offers Fedora as an early release and maintains its core Linux offering, Red Hat Enterprise Linux (RHEL), for commercial clients.

"If you want the work that we put into the stable releases, which is predominantly around, not just testing, but also minimizing the changes in subsequent releases, that's hard hard work" requiring input from "world-leading experts in distributed systems," Morgan said.

"Well, that's kind of the dark, proprietary side of things."

Businesses

Nvidia Posts Record Revenue Up 265% On Booming AI Business (cnbc.com) 27

In its fourth quarter earnings report today, Nvidia beat Wall Street's forecast for earnings and sales, causing shares to rise about 10% in extended trading. CNBC reports: Here's what the company reported compared with what Wall Street was expecting for the quarter ending in January, based on a survey of analysts by LSEG, formerly known as Refinitiv:

Earnings per share: $5.16 adjusted vs. $4.64 expected
Revenue: $22.10 billion vs. $20.62 billion expected

Nvidia said it expected $24.0 billion in sales in the current quarter. Analysts polled by LSEG were looking for $5.00 per share on $22.17 billion in sales. Nvidia CEO Jensen Huang addressed investor fears that the company may not be able to keep up this growth or level of sales for the whole year on a call with analysts. "Fundamentally, the conditions are excellent for continued growth" in 2025 and beyond, Huang told analysts. He says demand for the company's GPUs will remain high due to generative AI and an industry-wide shift away from central processors to the accelerators that Nvidia makes.

Nvidia reported $12.29 billion in net income during the quarter, or $4.93 per share, up 769% versus last year's $1.41 billion or 57 cents per share. Nvidia's total revenue rose 265% from a year ago, based on strong sales for AI chips for servers, particularly the company's "Hopper" chips such as the H100, it said. "Strong demand was driven by enterprise software and consumer internet applications, and multiple industry verticals including automotive, financial services and health care," the company said in commentary provided to investors. Those sales are reported in the company's Data Center business, which now comprises the majority of Nvidia's revenue. Data center sales were up 409% to $18.40 billion. Over half the company's data center sales went to large cloud providers. [...]
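
The headline growth rates can be sanity-checked from the rounded figures in the summary (small differences are rounding; the filings use unrounded values):

```python
# Sanity-checking the growth figures above from the rounded numbers in the summary.
def pct_change(new: float, old: float) -> float:
    return (new - old) / old * 100

revenue_now = 22.10          # $B, quarter ending in January
revenue_growth_pct = 265     # reported year-over-year growth
implied_year_ago = revenue_now / (1 + revenue_growth_pct / 100)
print(f"Implied year-ago revenue: ${implied_year_ago:.2f}B")   # ~ $6.05B

net_income_now, net_income_prior = 12.29, 1.41   # $B
print(f"Net income growth: {pct_change(net_income_now, net_income_prior):.0f}%")
# ~772% from the rounded figures, vs. the reported 769%
```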

The company's gaming business, which includes graphics cards for laptops and PCs, was merely up 56% year over year to $2.87 billion. Graphics cards for gaming used to be Nvidia's primary business before its AI chips started taking off, and some of Nvidia's graphics cards can be used for AI. Nvidia's smaller businesses did not show the same meteoric growth. Its automotive business declined 4% to $281 million in sales, and its OEM and other business, which includes crypto chips, rose 7% to $90 million. Nvidia's business making graphics hardware for professional applications rose 105% to $463 million.

Businesses

International Nest Aware Subscriptions Jump in Price, as Much As 100% (arstechnica.com) 43

Google's "Nest Aware" camera subscription is going through another round of price increases. From a report: This time it's for international users. There's no big announcement or anything, just a smattering of email screenshots from various countries on the Nest subreddit. 9to5Google was nice enough to hunt down a pile of the announcements. Nest Aware is a monthly subscription fee for Google's Nest cameras. Nest cameras exclusively store all their video in the cloud, and without the subscription, you aren't allowed to record video 24/7.

There are two sets of subscriptions to keep track of: the current generation subscription for modern cameras and the "first generation Nest Aware" subscription for older cameras. To give you an idea of what we're dealing with, in the US, the current free tier only gets you three hours of "event" video -- meaning video triggered by motion detection. Even the basic $8-a-month subscription doesn't get you 24/7 recording -- that's still only 30 days of event video. The "Nest Aware Plus" subscription, at $15 a month in the US, gets you 10 days of 24/7 video recording. The "first-generation" Nest Aware subscription, which is tied to earlier cameras and isn't available for new customers anymore, is roughly doubling in price in Canada. The basic tier of five days of 24/7 video is going from a yearly fee of CA$50 to CA$110 (the first-generation sub has 24/7 video on every tier). Ten days of video is jumping from CA$80 to CA$160, and 30 days is going from CA$110 to CA$220. These are the prices for a single camera; the first-generation subscription will have additional charges for additional cameras. The current Nest Aware subscription for modern cameras is getting jumps that look similar to those in the US, with Nest Aware Plus, the mid-tier, going from CA$16 to CA$20 per month, and presumably similar raises across the board.
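
The percentage jumps implied by those Canadian prices work out as follows (a quick sketch using only the CA$ figures quoted above):

```python
# Percentage increases implied by the Canadian first-generation yearly prices quoted above.
old_new = {
    "5 days 24/7":  (50, 110),
    "10 days 24/7": (80, 160),
    "30 days 24/7": (110, 220),
}
for tier, (old, new) in old_new.items():
    print(f"{tier}: CA${old} -> CA${new} (+{(new - old) / old:.0%})")
# -> +120%, +100%, +100%
```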

Sony

Sony's PlayStation Portal Hacked To Run Emulated PSP Games (theverge.com) 12

An anonymous reader shares a report: Sony's new PlayStation Portal has been hacked by Google engineers to run emulated games locally. The $199.99 handheld debuted in November but was limited to just streaming games from a PS5 console and not even titles from Sony's cloud gaming service. Now, two Google engineers have managed to get the PPSSPP emulator running natively on the PlayStation Portal, allowing a Grand Theft Auto PSP version to run on the Portal without Wi-Fi streaming required. "After more than a month of hard work, PPSSPP is running natively on PlayStation Portal. Yes, we hacked it," says Andy Nguyen in a post on X. Nguyen also confirms that the exploit is "all software based," so it doesn't require any hardware modifications like additional chips or soldering. Only a photo of Grand Theft Auto: Liberty City Stories running on the PlayStation Portal has been released so far, but Nguyen may release some videos to demonstrate the exploit at the weekend.
Security

MIT Researchers Build Tiny Tamper-Proof ID Tag Utilizing Terahertz Waves (mit.edu) 42

A few years ago, MIT researchers invented a cryptographic ID tag — but like traditional RFID tags, "a counterfeiter could peel the tag off a genuine item and reattach it to a fake," writes MIT News.

"The researchers have now surmounted this security vulnerability by leveraging terahertz waves to develop an antitampering ID tag that still offers the benefits of being tiny, cheap, and secure." They mix microscopic metal particles into the glue that sticks the tag to an object, and then use terahertz waves to detect the unique pattern those particles form on the item's surface. Akin to a fingerprint, this random glue pattern is used to authenticate the item, explains Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on the antitampering tag. "These metal particles are essentially like mirrors for terahertz waves. If I spread a bunch of mirror pieces onto a surface and then shine light on that, depending on the orientation, size, and location of those mirrors, I would get a different reflected pattern. But if you peel the chip off and reattach it, you destroy that pattern," adds Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group in the Research Laboratory of Electronics.

The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy. Because the terahertz tag is so cheap to produce, it could be implemented throughout a massive supply chain. And its tiny size enables the tag to attach to items too small for traditional RFIDs, such as certain medical devices...
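
The authentication step described above is essentially fingerprint matching on the reflected terahertz pattern. Here's a toy sketch of that idea using cosine similarity; the vector representation, noise levels, and threshold are illustrative assumptions, not the MIT team's trained model:

```python
# Toy illustration of glue-pattern matching: compare a stored "fingerprint" of the
# terahertz reflection against a fresh reading. Representation and threshold are
# assumptions for illustration only, not the MIT team's machine-learning model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                           # reading stored in the cloud at enrollment
genuine = enrolled + rng.normal(scale=0.05, size=256)     # later reading of the untouched tag
tampered = rng.normal(size=256)                           # reattached tag: glue pattern destroyed

THRESHOLD = 0.9
for name, reading in [("genuine", genuine), ("tampered", tampered)]:
    score = cosine_similarity(enrolled, reading)
    print(f"{name}: similarity={score:.3f}, authentic={score > THRESHOLD}")
```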

"These responses are impossible to duplicate, as long as the glue interface is destroyed by a counterfeiter," Han says. A vendor would take an initial reading of the antitampering tag once it was stuck onto an item, and then store those data in the cloud, using them later for verification."

Seems like the only way to thwart that would be carving out the part of the surface where the tag was affixed — and then pasting the tag, glue, and what it adheres to all together onto some other surface. But more importantly, Han says they'd wanted to demonstrate "that the application of the terahertz spectrum can go well beyond broadband wireless."

"In this case, you can use terahertz for ID, security, and authentication. There are a lot of possibilities out there."
Earth

Ocean Temperatures Are Skyrocketing (arstechnica.com) 110

"For nearly a year now, a bizarre heating event has been unfolding across the world's oceans," reports Wired.

"In March 2023, global sea surface temperatures started shattering record daily highs and have stayed that way since..." Brian McNoldy, a hurricane researcher at the University of Miami. "It's really getting to be strange that we're just seeing the records break by this much, and for this long...." Unlike land, which rapidly heats and cools as day turns to night and back again, it takes a lot to warm up an ocean that may be thousands of feet deep. So even an anomaly of mere fractions of a degree is significant. "To get into the two or three or four degrees, like it is in a few places, it's pretty exceptional," says McNoldy.

So what's going on here? For one, the oceans have been steadily warming over the decades, absorbing something like 90 percent of the extra heat that humans have added to the atmosphere...

A major concern with such warm surface temperatures is the health of the ecosystems floating there: phytoplankton that bloom by soaking up the sun's energy and the tiny zooplankton that feed on them. If temperatures get too high, certain species might suffer, shaking the foundations of the ocean food web. But more subtly, when the surface warms, it creates a cap of hot water, blocking the nutrients in colder waters below from mixing upwards. Phytoplankton need those nutrients to properly grow and sequester carbon, thus mitigating climate change...

Making matters worse, the warmer water gets, the less oxygen it can hold. "We have seen the growth of these oxygen minimum zones," says Dennis Hansell, an oceanographer and biogeochemist at the University of Miami. "Organisms that need a lot of oxygen, they're not too happy when the concentrations go down in any way — think of a tuna that is expending a lot of energy to race through the water."

But why is this happening? The article suggests there's less dust blowing off the Sahara to shade the oceans, and points to 2020 regulations that reduced sulfur aerosols in shipping fuels. (This reduced toxic air pollution — but also some cloud cover.)

There was also an El Niño in the Pacific Ocean last summer — now waning — which complicates things, according to biological oceanographer Francisco Chavez of the Monterey Bay Aquarium Research Institute in California. "One of our challenges is trying to tease out what these natural variations are doing in relation to the steady warming due to increasing CO2 in the atmosphere."

But the article points out that even the Atlantic Ocean is heating up — and "sea surface temperatures started soaring last year well before El Niño formed." And last week the U.S. Climate Prediction Center predicted there's now a 55% chance of a La Niña developing between June and August, according to the article — which could increase the likelihood of Atlantic hurricanes.

Thanks to long-time Slashdot reader mrflash818 for sharing the article.
