Communications

Starlink's Laser System Is Beaming 42 Million GB of Data Per Day (pcmag.com) 97

SpaceX revealed that it's delivering over 42 petabytes of data for customers per day, according to engineer Travis Brashears. "We're passing over terabits per second [of data] every day across 9,000 lasers," Brashears said today at SPIE Photonics West, an event in San Francisco focused on the latest advancements in optics and light. "We actually serve over lasers all of our users on Starlink at a given time in like a two-hour window." PCMag reports: Although Starlink uses radio waves to beam high-speed internet to customers, SpaceX has also been outfitting the company's satellites with a "laser link" system to help drive down latency and improve the system's global coverage. The lasers, which can sustain a 100Gbps connection per link, are especially crucial to helping the satellites fetch data when no SpaceX ground station is near, like over the ocean or Antarctic. Instead, the satellite can transmit the data to and from another Starlink satellite in Earth's orbit, forming a mesh network in space.
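The headline figures are easy to sanity-check: 42 million GB per day works out to just under 4 Tbps sustained, consistent with Brashears' "terabits per second" remark. A quick back-of-envelope (the per-laser average assumes traffic spread evenly across all 9,000 links, which it certainly is not):

```python
# Back-of-envelope check of the Starlink laser-link figures.
BYTES_PER_GB = 10**9                        # storage-marketing gigabytes
DATA_PER_DAY = 42_000_000 * BYTES_PER_GB    # 42 million GB per day
SECONDS_PER_DAY = 86_400

bytes_per_second = DATA_PER_DAY / SECONDS_PER_DAY
tbps = bytes_per_second * 8 / 10**12        # terabits per second

# Average load per laser, assuming (unrealistically) an even spread
# across all 9,000 links, each rated for 100 Gbps.
avg_gbps_per_laser = tbps * 1000 / 9_000

print(f"{tbps:.2f} Tbps aggregate")                        # ~3.89 Tbps
print(f"{avg_gbps_per_laser:.2f} Gbps average per laser")  # ~0.43 Gbps
```

So on average each laser carries well under 1% of its rated 100 Gbps capacity, which leaves ample headroom for the bursty, unevenly routed traffic a mesh actually sees.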

Tuesday's talk from Brashears revealed the laser system is quite robust, even as the equipment flies onboard thousands of Starlink satellites constantly circling the Earth. Despite the technical challenges, the company has achieved a laser "link uptime" of over 99%. The satellites are constantly forming laser links, resulting in about 266,141 "laser acquisitions" per day, according to Brashears' presentation. In some cases, the links can be maintained for weeks at a time, and can even reach transmission rates of up to 200Gbps.

Brashears also said Starlink's laser system was able to connect two satellites over 5,400 kilometers (3,355 miles) apart. The link was so long "it cut down through the atmosphere, all the way down to 30 kilometers above the surface of the Earth," he said, before the connection broke. "Another really fun fact is that we held a link all the way down to 122 kilometers while we were de-orbiting a satellite," he said. "And we were able to downstream the video." During his presentation, Brashears also showed a slide depicting how the laser system can deliver data to a Starlink dish in Antarctica through about seven different paths. "We can dynamically change those routes within milliseconds. So as long as we have some path to the ground [station], you're going to have 99.99% uptime. That's why it's important to get as many nodes up there as possible," he added.
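The millisecond rerouting Brashears describes boils down to ordinary shortest-path routing over a graph whose edges (laser links) come and go. A toy sketch of that idea (the topology, node names, and costs below are invented for illustration; Starlink's actual routing algorithms are not public):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over an undirected weighted graph given as
    {node: {neighbor: cost}}; returns the node list, or None."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))

# Hypothetical constellation: a ground station, satellites A-D, a user dish.
mesh = {
    "gateway": {"A": 1, "B": 1},
    "A": {"gateway": 1, "C": 1},
    "B": {"gateway": 1, "C": 2, "D": 1},
    "C": {"A": 1, "B": 2, "dish": 1},
    "D": {"B": 1, "dish": 3},
    "dish": {"C": 1, "D": 3},
}
print(shortest_path(mesh, "gateway", "dish"))  # gateway -> A -> C -> dish

# A laser link breaks: drop the A<->C edge and reroute instantly.
del mesh["A"]["C"], mesh["C"]["A"]
print(shortest_path(mesh, "gateway", "dish"))  # gateway -> B -> C -> dish
```

This is why "as many nodes as possible" matters: every extra satellite adds alternate edges, so losing any one link rarely disconnects a dish from a ground station.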

United States

US Disabled Chinese Hacking Network Targeting Critical Infrastructure (reuters.com) 24

The U.S. government in recent months launched an operation to fight a pervasive Chinese hacking operation that successfully compromised thousands of internet-connected devices, Reuters reported Tuesday, citing two Western security officials and another person familiar with the matter. From the report: The Justice Department and Federal Bureau of Investigation sought and received legal authorization to remotely disable aspects of the Chinese hacking campaign, the sources told Reuters. The Biden administration has increasingly focused on hacking, not only for fear nation states may try to disrupt the U.S. election in November, but because ransomware wreaked havoc on Corporate America in 2023.

The hacking group at the center of recent activity, Volt Typhoon, has especially alarmed intelligence officials who say it is part of a larger effort to compromise Western critical infrastructure, including naval ports, internet service providers and utilities. While the Volt Typhoon campaign initially came to light in May 2023, the hackers expanded the scope of their operations late last year and changed some of their techniques, according to three people familiar with the matter. The widespread nature of the hacks led to a series of meetings between the White House and private technology industry, including several telecommunications and cloud computing companies, where the U.S. government asked for assistance in tracking the activity.

Communications

T-Mobile Says It May Slow Home Internet Speeds of Some Users in Times of 'Congestion' (cnet.com) 72

T-Mobile has tweaked its terms of service for its home broadband users to add a new clause: If you are a heavy internet user who passes 1.2TB of data in a monthly billing cycle, you may have your speeds slowed in "times of congestion," or when there is a lot of pressure on the network. CNET: As spotted by The Mobile Report, the change went into effect on Jan. 18. In its updated terms, the carrier says that these users "will be prioritized last on the network" in congestion situations, which could mean painfully slow speeds for however long the congestion persists. T-Mobile does note that since its Home Internet service is available only in "limited areas" and intended to be used in a "stationary" setting, as opposed to a phone that could be in a busy place like a packed stadium, "these customers should be less likely to notice congestion in general."
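Deprioritization of this kind is simple to picture: during congestion, the scheduler sorts over-cap users to the back of the service queue. A toy sketch (the 1.2 TB threshold comes from T-Mobile's terms; the user names, data, and scheduling logic are invented for illustration):

```python
TB = 10**12
CAP = 1.2 * TB   # monthly usage above this gets deprioritized

def service_order(users, congested):
    """users: {name: bytes_used_this_cycle}. Under congestion,
    customers past the cap are served last; otherwise the order
    is unchanged. sorted() is stable, so ties keep their order."""
    if not congested:
        return list(users)
    return sorted(users, key=lambda u: users[u] > CAP)

usage = {"alice": 0.3 * TB, "bob": 2.1 * TB, "carol": 0.9 * TB}
print(service_order(usage, congested=True))   # bob drops to the back
print(service_order(usage, congested=False))  # normal order
```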
Transportation

America's Car Industry Seeks to Crush AM Radio. Will Congress Rescue It? (msn.com) 262

The Wall Street Journal reports that "a motley crew of AM radio advocates," including conservative talk show hosts and federal emergency officials, are lobbying Congress to stop carmakers from dropping AM radio from new vehicles: Lawmakers say most car companies are noncommittal about the future of AM tuners in vehicles, so they want to require them by law to keep making cars with free AM radio. Supporters argue it is a critical piece of the emergency communication network, while the automakers say Americans have plenty of other ways, including their phones, to receive alerts and information. The legislation has united lawmakers who ordinarily want nothing to do with one another. Sens. Ted Cruz (R., Texas) and Ed Markey (D., Mass.) are leading the Senate effort, and on the House side, Speaker Mike Johnson — himself a former conservative talk radio host in Louisiana — and progressive "squad" member Rep. Rashida Tlaib of Michigan are among about 200 co-sponsors...

A spring 2023 Nielsen survey, the most recent one available, showed that AM radio reaches about 78 million Americans every month. That is down from nearly 107 million in the spring of 2016, one of the earliest periods for which Nielsen has data... Automakers say the rise of electric vehicles is driving the shift away from AM, because onboard electronics create interference with AM radio signals — a phenomenon that "makes the already fuzzy analog AM radio frequency basically unlistenable," according to the Alliance for Automotive Innovation, a car-industry trade group. Shielding cables and components to reduce interference would cost carmakers $3.8 billion over seven years, the group estimates.

Markey and other lawmakers say they want to preserve AM radio because of its role in emergency communications. The Federal Emergency Management Agency says that more than 75 radio stations, most of which operate on the AM band and cover at least 90% of the U.S. population, are equipped with backup communications equipment and generators that allow them to continue broadcasting information to the public during and after an emergency. Seven former FEMA administrators urged Congress in a letter last year to seek assurances from automakers that they would keep broadcast radio available. The companies' noncommittal response spurred legislation, lawmakers said.

Automakers increasingly want to put radio and other car features "behind a paywall," Markey said in an interview. "They see this as another profit center for them when the American driving public has seen it as a safety resource for them and their families...." He compared the auto industry's resistance to the bill to previous opposition to government mandates like seat belts and air bags. "Leaving safety decisions to the auto industry is very dangerous," Markey said.

Lawmakers have heard from over 400,000 AM radio supporters, according to the president of the National Association of Broadcasters.

But the article also cites an executive at the Consumer Technology Association, who says automakers and tech advocacy groups have told lawmakers that requiring AM radio would be "inconsistent with the principles of a free market.... It's strange that Congress is focused on a 100-year-old technology."
Transportation

18-Year-Old Cleared After Encrypted Snapchat Joke Led To F-18s and Arrest (bbc.co.uk) 133

Slashdot reader Bruce66423 shared this report from the BBC: A Spanish court has cleared a British man of public disorder, after he joked to friends about blowing up a flight from London Gatwick to Menorca.

Aditya Verma admitted he told friends in July 2022: "On my way to blow up the plane. I'm a member of the Taliban." But he said he had made the joke in a private Snapchat group and never intended to "cause public distress"... The message he sent to friends, before boarding the plane, went on to be picked up by UK security services. They then flagged it to Spanish authorities while the easyJet plane was still in the air.

Two Spanish F-18 fighter jets were sent to flank the aircraft. One followed the plane until it landed at Menorca, where the plane was searched. Mr Verma, who was 18 at the time, was arrested and held in a Spanish police cell for two days. He was later released on bail... If he had been found guilty, the university student faced a fine of up to €22,500 (£19,300 or $20,967) and a further €95,000 (£81,204 or $103,200) in expenses to cover the cost of the jets being scrambled.

But how did his message first get from the encrypted app to the UK security services? One theory, raised in the trial, was that it could have been intercepted via Gatwick's Wi-Fi network. But a spokesperson for the airport told BBC News that its network "does not have that capability"... A spokesperson for Snapchat said the social media platform would not "comment on what's happened in this individual case".
richi (Slashdot reader #74,551) thinks it's obvious what happened: Snapchat's own website says it scans messages for threats and passes them on to the authorities. ("We also work to proactively escalate to law enforcement any content appearing to involve imminent threats to life, such as...bomb threats...")

"In the case of emergency disclosure requests from law enforcement, our 24/7 team usually responds within 30 minutes."
Communications

Google and AT&T Invest In AST SpaceMobile For Satellite-To-Smartphone Service (fiercewireless.com) 18

AT&T, Google and Vodafone are investing a total of $206.5 million in AST SpaceMobile, a satellite manufacturer that plans to be the first space-based network to connect standard mobile phones at broadband speeds. Fierce Wireless reports: AST SpaceMobile claims it invented the space-based direct-to-device market, with a patented design facilitating broadband connectivity directly to standard, unmodified cellular devices. In a press release, AST SpaceMobile said the investment from the likes of AT&T, Google and Vodafone underscores confidence in the company's technology and leadership position in the emerging space-based cellular D2D market. There's the potential to offer connectivity to 5.5 billion cellular devices when they're out of coverage.

Bolstering the case for AST SpaceMobile, Vodafone and AT&T placed purchase orders -- for an undisclosed amount -- for network equipment to support their planned commercial services. In addition, Google and AST SpaceMobile agreed to collaborate on product development, testing and implementation plans for SpaceMobile network connectivity on Android and related devices. AST SpaceMobile boasts agreements and understandings with more than 40 mobile network operators globally. However, it's far from alone in the D2D space. Apple/Globalstar, T-Mobile/SpaceX, Bullitt and Lynk Global are among the others.

HP

HP CEO Evokes James Bond-Style Hack Via Ink Cartridges (arstechnica.com) 166

An anonymous reader quotes a report from Ars Technica: Last Thursday, HP CEO Enrique Lores addressed the company's controversial practice of bricking printers when users load them with third-party ink. Speaking to CNBC Television, he said, "We have seen that you can embed viruses in the cartridges. Through the cartridge, [the virus can] go to the printer, [and then] from the printer, go to the network." That frightening scenario could help explain why HP, which was hit this month with another lawsuit over its Dynamic Security system, insists on deploying it to printers.

Dynamic Security stops HP printers from functioning if an ink cartridge without an HP chip or HP electronic circuitry is installed. HP has issued firmware updates that block printers with such ink cartridges from printing, leading to the above lawsuit (PDF), which is seeking class-action certification. The suit alleges that HP printer customers were not made aware that printer firmware updates issued in late 2022 and early 2023 could result in printer features not working. The lawsuit seeks monetary damages and an injunction preventing HP from issuing printer updates that block ink cartridges without an HP chip. [...]

Unsurprisingly, Lores' claim comes from HP-backed research. The company's bug bounty program tasked researchers from Bugcrowd with determining if it's possible to use an ink cartridge as a cyberthreat. HP argued that ink cartridge microcontroller chips, which are used to communicate with the printer, could be an entryway for attacks. [...] It's clear that HP's tactics are meant to coax HP printer owners into committing to HP ink, which helps the company drive recurring revenue and makes up for money lost when the printers are sold. Lores confirmed in his interview that HP loses money when it sells a printer and makes money through supplies. But HP's ambitions don't end there. It envisions a world where all of its printer customers also subscribe to an HP program offering ink and other printer-related services. "Our long-term objective is to make printing a subscription. This is really what we have been driving," Lores said.

Security

How a Data Breach of 1M Cancer Center Patients Led to Extorting Emails (seattletimes.com) 37

The Seattle Times reports: Concerns have grown in recent weeks about data privacy and the ongoing impacts of a recent Fred Hutchinson Cancer Center cyberattack that leaked personal information of about 1 million patients last November. Since the breach, which hit the South Lake Union cancer research center's clinical network and has led to a host of email threats from hackers and lawsuits against Fred Hutch, menacing messages from perpetrators have escalated.

Some patients have started to receive "swatting" threats, in addition to spam emails warning people that unless they pay a fee, their names, Social Security and phone numbers, medical history, lab results and insurance history will be sold to data brokers and on black markets. Steve Bernd, a spokesperson for FBI Seattle, said last week there's been no indication of any criminal swatting events... Other patients have been inundated with spam emails since the breach...

According to The New York Times, large data breaches like this are becoming more common. In the first 10 months of 2023, more than 88 million individuals had their medical data exposed, according to the Department of Health and Human Services. Meanwhile, the number of reported ransomware incidents, in which malware blocks access to a victim's personal data until a ransom is paid, has decreased in recent years — from 516 in 2021 to 423 in 2023, according to Bernd of FBI Seattle. In Washington, the number dropped from 84 to 54 in the past three years, according to FBI data.

Fred Hutchinson Cancer Center believes the breach was perpetrated from outside the U.S. by exploiting the "Citrix Bleed" vulnerability (which federal cybersecurity officials warn can allow the bypassing of passwords and multifactor authentication measures).

The article adds that in late November, the Department of Health and Human Services' Health Sector Cybersecurity Coordination Center "urged hospitals and other organizations that used Citrix to take immediate action to patch network systems in order to protect against potentially significant ransomware threats."
Space

Nearby Galaxy's Giant Black Hole Is Real, 'Shadow' Image Confirms (science.org) 30

"A familiar shadow looms in a fresh image of the heart of the nearby galaxy M87," reports Science magazine.

"It confirms that the galaxy harbors a gravitational sinkhole so powerful that light cannot escape, one generated by a black hole 6.5 billion times the mass of the Sun." But compared with a previous image from the network of radio dishes called the Event Horizon Telescope (EHT), the new one reveals a subtle shift in the bright ring surrounding the shadow, which could provide clues to how gases churn around the black hole. "We can see that shift now," says team member Sera Markoff of the University of Amsterdam. "We can start to use that." The new detail has also whetted astronomers' desire for a proposed expansion of the EHT, which would deliver even sharper images of distant black holes.

The new picture, published this week in Astronomy & Astrophysics, comes from data collected 1 year after the observing campaign that led to the first-ever picture of a black hole, revealed in 2019 and named as Science's Breakthrough of the Year. The dark center of the image is the same size as in the original image, confirming that the image depicts physical reality and is not an artifact. "It tells us it wasn't a fluke," says Martin Hardcastle, an astrophysicist at the University of Hertfordshire who was not involved in the study. The black hole's mass would not have grown appreciably in 1 year, so the comparison also supports the idea that a black hole's size is determined by its mass alone. In the new image, however, the brightest part of a ring surrounding the black hole has shifted counterclockwise by about 30 degrees.

That could be because of random churning in the disk of material that swirls around the black hole's equator. It could also be associated with fluctuations in one of the jets launched from the black hole's poles — a sign that the jet isn't aligned with the black hole's spin axis, but precesses around it like a wobbling top. That would be "kind of exciting," Markoff says. "The only way to know is to keep taking pictures...."

[T]he team wants to add more telescopes to the network, which would further sharpen its images and enable it to see black holes in more distant galaxies.
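The claim that a black hole's size is determined by its mass alone follows from the Schwarzschild radius, r_s = 2GM/c². Plugging in M87*'s 6.5 billion solar masses gives a rough sense of scale (constants are standard textbook values; the 2.6 factor for the apparent shadow, due to light bending near the horizon, is an approximation):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

M = 6.5e9 * M_SUN                 # M87* mass
r_s = 2 * G * M / C**2            # Schwarzschild radius
shadow_r = 2.6 * r_s              # approximate apparent shadow radius

print(f"r_s ≈ {r_s / AU:.0f} AU")          # ~128 AU
print(f"shadow ≈ {shadow_r / AU:.0f} AU")
```

Since r_s scales linearly with M, and M87*'s mass cannot change appreciably in a year, an unchanged shadow diameter between the two images is exactly what the mass-sets-size picture predicts.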

Thanks to Slashdot reader sciencehabit for sharing the news.
Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale-from-Production starts while assisting Clyso with "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...

For over a week, we looked at everything from bios settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...
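The scale of that final target is worth spelling out: at 63 nodes, 1 TiB/s means each node must sustain on the order of 16 GiB/s, which in turn has to fit within per-node NVMe and network bandwidth. A rough budget (the node count and the 950 GiB/s figure are from the post; everything else is back-of-envelope arithmetic, not Clyso's actual sizing):

```python
GiB = 2**30
target = 1024 * GiB          # 1 TiB/s, cluster-wide
achieved = 950 * GiB         # what the 63-node run actually hit
nodes = 63

per_node_target = target / nodes / GiB     # GiB/s each node must sustain
per_node_actual = achieved / nodes / GiB

# That per-node rate has to cross the wire too: ~140 Gbps of network
# throughput per node, before any replication traffic is counted.
net_gbps = per_node_target * GiB * 8 / 1e9

print(f"{per_node_target:.1f} GiB/s target per node (~{net_gbps:.0f} Gbps)")
print(f"{per_node_actual:.1f} GiB/s actually achieved per node")
```

Numbers like these make the earlier design choices (single-socket nodes, more aggregate network throughput, smaller failure domains) read less like preferences and more like prerequisites.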

AI

OpenAI CEO Sam Altman Is Still Chasing Billions To Build AI Chips 11

According to Bloomberg (paywalled), OpenAI CEO Sam Altman is raising billions to develop a global network of chip fabrication factories, collaborating with leading chip manufacturers to address the high demand for chips required for advanced AI models. The Verge reports: A major cost and limitation for running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E that answer prompts and generate images. Nvidia's value rose above $1 trillion for the first time last year, partly due to a virtual monopoly it has as GPT-4, Gemini, Llama 2, and other models depend heavily on its popular H100 GPUs.

Accordingly, the race to manufacture more high-powered chips to run complex AI systems has only intensified. The limited number of fabs capable of making high-end chips drives Altman, like anyone else, to bid for capacity years before it is needed in order to produce new chips. And going against the likes of Apple requires deep-pocketed investors who will front costs that the nonprofit OpenAI still can't afford. SoftBank Group and Abu Dhabi-based AI holding company G42 have reportedly been in talks about raising money for Altman's project.
Science

Why Every Coffee Shop Looks the Same (theguardian.com) 67

An anonymous reader shares a report: These cafes had all adopted similar aesthetics and offered similar menus, but they hadn't been forced to do so by a corporate parent, the way a chain like Starbucks replicated itself. Instead, despite their vast geographical separation and total independence from each other, the cafes had all drifted toward the same end point. The sheer expanse of sameness was too shocking and new to be boring. Of course, there have been examples of such cultural globalisation going back as far as recorded civilisation. But the 21st-century generic cafes were remarkable in the specificity of their matching details, as well as the sense that each had emerged organically from its location. They were proud local efforts that were often described as "authentic," an adjective that I was also guilty of overusing. When travelling, I always wanted to find somewhere "authentic" to have a drink or eat a meal.

If these places were all so similar, though, what were they authentic to, exactly? What I concluded was that they were all authentically connected to the new network of digital geography, wired together in real time by social networks. They were authentic to the internet, particularly the 2010s internet of algorithmic feeds. In 2016, I wrote an essay titled Welcome to AirSpace, describing my first impressions of this phenomenon of sameness. "AirSpace" was my coinage for the strangely frictionless geography created by digital platforms, in which you could move between places without straying beyond the boundaries of an app, or leaving the bubble of the generic aesthetic. The word was partly a riff on Airbnb, but it was also inspired by the sense of vaporousness and unreality that these places gave me. They seemed so disconnected from geography that they could float away and land anywhere else. When you were in one, you could be anywhere.

My theory was that all the physical places interconnected by apps had a way of resembling one another. In the case of the cafes, the growth of Instagram gave international cafe owners and baristas a way to follow one another in real time and gradually, via algorithmic recommendations, begin consuming the same kinds of content. One cafe owner's personal taste would drift toward what the rest of them liked, too, eventually coalescing. On the customer side, Yelp, Foursquare and Google Maps drove people like me -- who could also follow the popular coffee aesthetics on Instagram -- toward cafes that conformed with what they wanted to see by putting them at the top of searches or highlighting them on a map. To court the large demographic of customers moulded by the internet, more cafes adopted the aesthetics that already dominated on the platforms. Adapting to the norm wasn't just following trends but making a business decision, one that the consumers rewarded. When a cafe was visually pleasing enough, customers felt encouraged to post it on their own Instagram in turn as a lifestyle brag, which provided free social media advertising and attracted new customers. Thus the cycle of aesthetic optimisation and homogenisation continued.

News

David Mills, an Internet Pioneer, Has Died 19

David Mills, the man who invented NTP and wrote the implementation, has passed away. He also created the Fuzzballs and EGP, and helped make global-scale internetworking possible. Vint Cerf, sharing the news on the Internet Society mail group: His daughter, Leigh, just sent me the news that Dave passed away peacefully on January 17, 2024. He was such an iconic element of the early Internet.

Network Time Protocol, the Fuzzball routers of the early NSFNET, INARG taskforce lead, COMSAT Labs and University of Delaware and so much more.

R.I.P.
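Mills's NTP represents time as a 64-bit fixed-point value: 32 bits of seconds since 1 January 1900 and 32 bits of binary fraction, giving sub-nanosecond resolution. Converting that to a Unix timestamp is a one-liner, since the two epochs are exactly 2,208,988,800 seconds apart (a minimal sketch of the timestamp arithmetic only, not a full NTP client):

```python
NTP_UNIX_OFFSET = 2_208_988_800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(seconds, fraction):
    """Convert an NTP timestamp (32-bit seconds field plus 32-bit
    binary fraction field) to Unix time as a float."""
    return seconds - NTP_UNIX_OFFSET + fraction / 2**32

# The Unix epoch itself, expressed in NTP time:
print(ntp_to_unix(2_208_988_800, 0))        # 0.0
# Half a second past it (fraction field at mid-range):
print(ntp_to_unix(2_208_988_800, 2**31))    # 0.5
```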
Privacy

Have I Been Pwned Adds 71 Million Emails From Naz.API Stolen Account List (bleepingcomputer.com) 17

An anonymous reader quotes a report from BleepingComputer: Have I Been Pwned has added almost 71 million email addresses associated with stolen accounts in the Naz.API dataset to its data breach notification service. The Naz.API dataset is a massive collection of 1 billion credentials compiled using credential stuffing lists and data stolen by information-stealing malware. Credential stuffing lists are collections of login name and password pairs stolen from previous data breaches that are used to breach accounts on other sites.

Information-stealing malware attempts to steal a wide variety of data from an infected computer, including credentials saved in browsers, VPN clients, and FTP clients. This type of malware also attempts to steal SSH keys, credit cards, cookies, browsing history, and cryptocurrency wallets. The stolen data is collected in text files and images, which are stored in archives called "logs." These logs are then uploaded to a remote server to be collected later by the attacker. Regardless of how the credentials are stolen, they are then used to breach accounts owned by the victim, sold to other threat actors on cybercrime marketplaces, or released for free on hacker forums to gain reputation amongst the hacking community.

Naz.API is a dataset allegedly containing over 1 billion lines of stolen credentials compiled from credential stuffing lists and from information-stealing malware logs. It should be noted that while the Naz.API dataset name includes the word "Naz," it is not related to network attached storage (NAS) devices. This dataset had been floating around the data breach community for quite a while but rose to notoriety after it was used to fuel an open-source intelligence (OSINT) platform called illicit.services. This service allows visitors to search a database of stolen information, including names, phone numbers, email addresses, and other personal data. The service shut down in July 2023 out of concerns it was being used for doxxing and SIM-swapping attacks. However, the operator re-enabled the service in September. Illicit.services drew on data from various sources, but one of its largest was the Naz.API dataset, which was shared privately among a small number of people. Each line in the Naz.API data consists of a login URL, a login name, and an associated password stolen from a person's device, as shown [here].
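The line format described above (URL, login name, password) is part of what makes such logs easy to weaponize at scale: every line is trivially machine-parseable. A sketch of that parsing (the colon separator and the sample line are illustrative assumptions; real stealer logs vary in format):

```python
def parse_log_line(line):
    """Split a 'url:username:password' stealer-log line.
    The URL itself contains ':', so we split from the right;
    a password that itself contains ':' would still be mishandled,
    which is exactly why real-world parsing is messier than this."""
    url_and_user, _, password = line.rpartition(":")
    url, _, username = url_and_user.rpartition(":")
    return {"url": url, "user": username, "password": password}

sample = "https://example.com/login:alice@example.com:hunter2"
print(parse_log_line(sample))
```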
"Here's the back story: this week I was contacted by a well-known tech company that had received a bug bounty submission based on a credential stuffing list posted to a popular hacking forum," explained Troy Hunt, the creator of Have I Been Pwned, in blog post. "Whilst this post dates back almost 4 months, it hadn't come across my radar until now and inevitably, also hadn't been sent to the aforementioned tech company."

"They took it seriously enough to take appropriate action against their (very sizeable) user base which gave me enough cause to investigate it further than your average cred stuffing list."

To check if your credentials are in the Naz.API dataset, you can visit Have I Been Pwned.
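Have I Been Pwned's companion password check is worth knowing about because of its k-anonymity design: you SHA-1 hash your password locally and send only the first five hex characters; the service returns every known suffix in that range, and the match happens on your machine. The local half of that protocol is a few lines (no network call is made here; the range endpoint named in the docstring is HIBP's documented API):

```python
import hashlib

def hibp_range_query(password):
    """Return the 5-char prefix you would send to
    https://api.pwnedpasswords.com/range/<prefix> and the suffix
    you would compare against the response locally, per HIBP's
    k-anonymity password API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query("password")
print(prefix)  # 5BAA6 -- only these five characters leave your machine
```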
Desktops (Apple)

Beeper Users Say Apple Is Now Blocking Their Macs From Using iMessage Entirely (techcrunch.com) 175

An anonymous reader quotes a report from TechCrunch: The Apple-versus-Beeper saga is not over yet it seems, even though the iMessage-on-Android Beeper Mini was removed from the Play Store last week. Now, Apple customers who used Beeper's apps are reporting that they've been banned from using iMessage on their Macs -- a move Apple may have taken to disable Beeper's apps from working properly, but ultimately penalizes its own customers for daring to try a non-Apple solution for accessing iMessage. The latest follows a contentious game of cat-and-mouse between Apple and Beeper, which Apple ultimately won. [...]

According to users' recounting of their tech support experiences with Apple, the support reps are telling them their computer has been flagged for spam, or for sending too many messages — even though that's not the case, some argued. This has led many Beeper users to believe this is how Apple is flagging them for removal from the iMessage network. One Beeper customer advised others facing this problem to ask Apple if their Mac was in a "throttled status" or if their Apple ID was blocked for spam to get to the root of the issue. Admitting up front that third-party software was to blame would sometimes result in the support rep being able to lift the ban, some noted.

The news of the Mac bans was earlier reported by Apple news site AppleInsider and Times of India, and is being debated on Y Combinator forum site Hacker News. On the latter, some express their belief that the retaliation against Apple's own users is justified as they had violated Apple's terms, while others said that iMessage interoperability should be managed through regulation, not rogue apps. Far fewer argued that Apple is exerting its power in an anticompetitive fashion here.

Wine

Wine 9.0 Released (9to5linux.com) 15

Version 9.0 of Wine, the free and open-source compatibility layer that lets you run Windows apps on Unix-like operating systems, has been released. "Highlights of Wine 9.0 include an experimental Wayland graphics driver with features like basic window management, support for multiple monitors, high-DPI scaling, relative motion events, as well as Vulkan support," reports 9to5Linux. From the report: The Vulkan driver has been updated to support Vulkan 1.3.272 and later, the PostScript driver has been reimplemented to work from Windows-format spool files and avoid any direct calls from the Unix side, and there's now a dark theme option on WinRT theming that can be enabled in WineCfg. Wine 9.0 also adds support for many more instructions to Direct3D 10 effects, implements the Windows Media Video (WMV) decoder DirectX Media Object (DMO), implements the DirectShow Audio Capture and DirectShow MPEG-1 Video Decoder filters, and adds support for video and system streams, as well as audio streams to the DirectShow MPEG-1 Stream Splitter filter.

Desktop integration has been improved in this release to allow users to close the desktop window in full-screen desktop mode by using the "Exit desktop" entry in the Start menu, as well as support for exporting URL/URI protocol associations as URL handlers to the Linux desktop. Audio support has been enhanced in Wine 9.0 with the implementation of several DirectMusic modules, DLS1 and DLS2 sound font loading, support for the SF2 format for compatibility with Linux standard MIDI sound fonts, Doppler shift support in DirectSound, an Indeo IV50 Video for Windows decoder, and MIDI playback in dmsynth.

Among other noteworthy changes, Wine 9.0 brings loader support for ARM64X and ARM64EC modules, along with the ability to run existing Windows binaries on ARM64 systems and initial support for building Wine for the ARM64EC architecture. There's also a new 32-bit x86 emulation interface, a new WoW64 mode that supports running 32-bit apps on recent macOS versions that don't support 32-bit Unix processes, support for DirectInput action maps to improve compatibility with many old video games that map controller inputs to in-game actions, as well as Windows 10 as the default Windows version for new prefixes. Last but not least, the kernel has been updated to support address space layout randomization (ASLR) for modern PE binaries, better memory allocation performance through the Low Fragmentation Heap (LFH) implementation, and support for memory placeholders in the virtual memory allocator to allow apps to reserve virtual space. Wine 9.0 also adds support for smart cards, adds support for Diffie-Hellman keys in BCrypt, implements the Negotiate security package, adds support for network interface change notifications, and fixes many bugs.
For a full list of changes, check out the release notes. You can download Wine 9.0 from WineHQ.
AI

AI Can Convincingly Mimic A Person's Handwriting Style, Researchers Say (bloomberg.com) 26

AI tools already allow people to generate eerily convincing voice clones and deepfake videos. Soon, AI could also be used to mimic a person's handwriting style. Bloomberg: Researchers at Abu Dhabi's Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) say they have developed technology that can imitate someone's handwriting based on just a few paragraphs of written material. To accomplish that, the researchers used a transformer model, a type of neural network designed to learn context and meaning in sequential data. The team at MBZUAI, which calls itself the world's first AI university, has been granted a patent by the US Patent and Trademark Office for the artificial intelligence system.
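The transformer architecture mentioned above is built around self-attention, which lets every element of a sequence (for handwriting, this could be a stroke or character feature vector) weigh information from every other element. A generic numpy sketch of scaled dot-product self-attention follows; it illustrates the operation in the abstract and is not MBZUAI's patented system.

```python
import numpy as np

def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence (rows of x),
    the core operation of a transformer."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v   # each position mixes context from all others

rng = np.random.default_rng(0)
seq = rng.standard_normal((16, 8))  # e.g., 16 stroke feature vectors of dim 8
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(seq, w_q, w_k, w_v)  # same shape as the input sequence
```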

The researchers have not yet released the feature, but it represents a step forward in an area that has drawn interest from academics for years. There have been apps and even robots that can generate handwriting, but recent advances in AI have accelerated character recognition techniques dramatically. As with other AI tools, however, it's unclear if the benefits will outweigh the harms. The technology could help the injured to write without picking up a pen, but it also risks opening the door to mass forgeries and misuse. The tool will need to be deployed thoughtfully, two of the researchers said in an interview.

Earth

Can Pumping CO2 Into California's Oil Fields Help Stop Global Warming? (yahoo.com) 83

America's Environmental Protection Agency "has signed off on a California oil company's plans to permanently store carbon emissions deep underground to combat global warming," reports the Los Angeles Times: California Resources Corp., the state's largest oil and gas company, applied for permission to send 1.46 million metric tons of carbon dioxide each year into the Elk Hills oil field, a depleted oil reservoir about 25 miles outside of downtown Bakersfield. The emissions would be collected from several industrial sources nearby, compressed into a liquid-like state and injected into porous rock more than one mile underground.

Although this technique has never been performed on a large scale in California, the state's climate plan calls for these operations to be widely deployed across the Central Valley to reduce carbon emissions from industrial facilities. The EPA issued a draft permit for the California Resources Corp. project, which is poised to be finalized in March following public comments. As California transitions away from oil production, a new business model for fossil fuel companies has emerged: carbon management. Oil companies have heavily invested in transforming their vast network of exhausted oil reservoirs into long-term storage sites for planet-warming gases, including California Resources Corp., the largest nongovernmental owner of mineral rights in California...

[Environmentalists] say that the transportation and injection of CO2 — an asphyxiating gas that displaces oxygen — could lead to dangerous leaks. Nationwide, there have been at least 25 carbon dioxide pipeline leaks between 2002 and 2021, according to the U.S. Department of Transportation. Perhaps the most notable incident occurred in Satartia, Miss., in 2020 when a CO2 pipeline ruptured following heavy rains. The leak led to the hospitalization of 45 people and the evacuation of 200 residents... Under the EPA draft permit, California Resources Corp. must take a number of steps to mitigate these risks. The company must plug 157 wells to ensure the CO2 remains underground, monitor the injection site for leaks and obtain a $33-million insurance policy.

Canada-based Brookfield Corporation also invested $500 million, according to the article, with California Resources Corp. seeking permits for five projects — more than any company in the nation. "It's kind of reversing the role, if you will," says their chief sustainability officer. "Instead of taking oil and gas out, we're putting carbon in."

Meanwhile, there are applications for "about a dozen" more projects in California's Central Valley that could store millions of tons of carbon emissions in old oil and gas fields — and California Resources Corp. says greater Los Angeles is also "being evaluated" as a potential storage site.
Robotics

The Global Project To Make a General Robotic Brain (ieee.org) 23

Generative AI "doesn't easily carry over into robotics," write two researchers in IEEE Spectrum, "because the Internet is not full of robotic-interaction data in the same way that it's full of text and images."

That's why they're working on a single deep neural network capable of piloting many different types of robots... Robots need robot data to learn from, and this data is typically created slowly and tediously by researchers in laboratory environments for very specific tasks... The most impressive results typically only work in a single laboratory, on a single robot, and often involve only a handful of behaviors... [W]hat if we were to pool together the experiences of many robots, so a new robot could learn from all of them at once? We decided to give it a try. In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality...

The question is whether a deep neural network trained on data from a sufficiently large number of different robots can learn to "drive" all of them — even robots with very different appearances, physical properties, and capabilities. If so, this approach could potentially unlock the power of large datasets for robotic learning. The scale of this project is very large because it has to be. The RT-X dataset currently contains nearly a million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market...

Surprisingly, we found that our multirobot data could be used with relatively simple machine-learning methods, provided that we follow the recipe of using large neural-network models with large datasets. Leveraging the same kinds of models used in current LLMs like ChatGPT, we were able to train robot-control algorithms that do not require any special features for cross-embodiment. Much like a person can drive a car or ride a bicycle using the same brain, a model trained on the RT-X dataset can simply recognize what kind of robot it's controlling from what it sees in the robot's own camera observations. If the robot's camera sees a UR10 industrial arm, the model sends commands appropriate to a UR10. If the model instead sees a low-cost WidowX hobbyist arm, the model moves it accordingly.
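The cross-embodiment idea described above can be caricatured in a few lines. This is a hypothetical sketch, not RT-X code: a single set of "policy" weights maps a camera image to a fixed-size action vector, and each robot reads only the slice its embodiment uses. In the real system a trained vision model infers the robot type from the image; here the robot name simply selects the slice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative constants (not from the RT-X paper): a maximum action
# dimensionality shared by all robots, and per-robot action sizes.
MAX_ACTION_DIM = 8
ACTION_DIMS = {"ur10": 6, "widowx": 5}

# One shared weight matrix serves every embodiment.
w = rng.standard_normal((64 * 64, MAX_ACTION_DIM)) * 0.01

def policy(image: np.ndarray, robot: str) -> np.ndarray:
    """Same weights for every robot; in a trained model, the image
    itself is what tells the network which arm it is controlling."""
    features = image.reshape(-1)         # stand-in for a vision encoder
    action = features @ w                # shared cross-embodiment head
    return action[: ACTION_DIMS[robot]]  # robot-specific action slice

frame = rng.random((64, 64))
ur10_action = policy(frame, "ur10")      # 6-dimensional command
widowx_action = policy(frame, "widowx")  # 5-dimensional command
```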

"To test the capabilities of our model, five of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot... Remarkably, the single unified model provided improved performance over each laboratory's own best method, succeeding at the tasks about 50 percent more often on average." And they then used a pre-existing vision-language model to successfully add the ability to output robot actions in response to image-based prompts.

"The RT-X project shows what is possible when the robot-learning community acts together... and we hope that RT-X will grow into a collaborative effort to develop data standards, reusable models, and new techniques and algorithms."

Thanks to long-time Slashdot reader Futurepower(R) for sharing the article.
Power

White House Unveils $623 Million In Funding To Boost EV Charging Points (theguardian.com) 101

An anonymous reader quotes a report from The Guardian: Joe Biden's administration has unveiled $623 million in funding to boost the number of electric vehicle charging points in the U.S., amid concerns that the transition to zero-carbon transportation isn't keeping pace with goals to tackle the climate crisis. The funding will be distributed in grants for dozens of programs across 22 states, such as EV chargers for apartment blocks in New Jersey, rapid chargers in Oregon and hydrogen fuel chargers for freight trucks in Texas. In all, it's expected the money, drawn from the bipartisan infrastructure law, will add 7,500 chargers to the US total.
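The article's totals allow a quick back-of-envelope figure for the average federal outlay per charger; this assumes an even split, and individual grants (from apartment-block Level 2 units to hydrogen freight stations) will vary widely.

```python
# Back-of-envelope average from the article's own totals.
total_funding = 623_000_000   # dollars, from the bipartisan infrastructure law
chargers_added = 7_500        # expected chargers from this round of grants
per_charger = total_funding / chargers_added
print(f"${per_charger:,.0f} per charger")  # ≈ $83,067 on average
```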

There are about 170,000 electric vehicle chargers in the U.S., a huge leap from a network that was barely visible prior to Biden taking office, and the White House has set a goal for 500,000 chargers to help support the shift away from gasoline and diesel cars. "The U.S. is taking the lead globally on electric vehicles," said Ali Zaidi, a climate adviser to Biden who said the US is on a trajectory to "meet and exceed" the administration's charger goal. "We will continue to see this buildout over the coming years and decades until we've achieved a fully net zero transportation sector," he added.
On Thursday, the House approved legislation to undo a Biden administration rule meant to facilitate the proliferation of EV charging stations. "S. J. Res. 38 from Sen. Marco Rubio (R-Fla.), would scrap a Federal Highway Administration waiver from domestic sourcing requirements for EV chargers funded by the 2021 bipartisan infrastructure law. It already passed the Senate 50-48," reports Politico.

"A waiver undercuts domestic investments and risks empowering foreign nations," said Rep. Sam Graves (R-Mo.), chair of the Transportation and Infrastructure Committee, during House debate Thursday. "If the administration is going to continue to push for a massive transition to EVs, it should ensure and comply with Buy America requirements." The White House promised to veto it and said it would backfire, saying it was so poorly worded it would actually result in fewer new American-made charging stations.

Slashdot Top Deals