Privacy

Prison Phone Company Leaked 600,000 Users' Data and Didn't Notify Them (arstechnica.com) 45

An anonymous reader quotes a report from Ars Technica: Prison phone company Global Tel*Link leaked the personal information of nearly 650,000 users and failed to notify most of the users that their personal data was exposed, the Federal Trade Commission said today. The company agreed to a settlement that requires it to change its security practices and offer free credit monitoring and identity protection to affected users, but the settlement doesn't include a fine. "Global Tel*Link and two of its subsidiaries failed to implement adequate security safeguards to protect personal information they collect from users of its services, which enabled bad actors to gain access to unencrypted personal information stored in the cloud and used for testing," the FTC said.

A security researcher notified Global Tel*Link of the breach on August 13, 2020, according to the FTC's complaint (PDF). This happened just after "the company and a third-party vendor copied a large volume of sensitive, unencrypted personal information about nearly 650,000 real users of its products and services into the cloud but failed to take adequate steps to protect the data," the FTC said. The data was copied to an Amazon Web Services test environment to test a new version of a search software product. For about two days, the data was in the test environment and "accessible via the Internet without password protection or other access controls," the FTC said. After hearing from the security researcher, Global Tel*Link reconfigured the test environment to cut off public access. But a few weeks later, the firm was notified by an identity monitoring vendor that the data was available on the dark web. Global Tel*Link didn't notify any users until May 2021, and even then, it only notified a subset of them, according to the FTC. [...]
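As a purely illustrative aside, the failure described above (sensitive data left in a cloud test environment with no access controls or encryption) is the kind of misconfiguration a routine automated check can surface. The minimal Python sketch below assumes the data sat in an Amazon S3 bucket, which the FTC complaint does not specify; the bucket name is a placeholder, and the boto3 calls shown (get_public_access_block, get_bucket_encryption) are standard S3 client operations.

```python
import boto3
from botocore.exceptions import ClientError

def audit_bucket(bucket: str) -> list[str]:
    """Return findings for a bucket that looks publicly reachable or
    unencrypted. Illustrative only; a real audit covers far more
    (bucket policies, ACLs, object-level settings, logging)."""
    s3 = boto3.client("s3")
    findings = []

    # Is "block public access" fully enabled? A missing or partial
    # configuration counts as a finding.
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            findings.append("public access is not fully blocked")
    except ClientError:
        findings.append("no public-access-block configuration at all")

    # Is default encryption at rest configured?
    try:
        s3.get_bucket_encryption(Bucket=bucket)
    except ClientError:
        findings.append("no default encryption configured")

    return findings

if __name__ == "__main__":
    # Placeholder name; not a real bucket from the case.
    for issue in audit_bucket("example-test-environment-bucket"):
        print("finding:", issue)
```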

The complaint said that Global Tel*Link violated the Federal Trade Commission Act's section on unfair or deceptive acts or practices and charged the firm with unfair data security practices, unfair failure to notify affected consumers of the incident, misrepresentations regarding data security, misrepresentations to individual users regarding the incident, misrepresentations to individual users regarding notice, and deceptive representations to prison facilities regarding the incident. To settle the charges, the company agreed to new security protocols, including "'change management' measures to all of its systems to help reduce the risk of human error, use of multifactor authentication, and procedures to minimize the amount of data it collects and stores," the FTC said. Global Tel*Link also has to notify the affected users who were not previously notified of the breach and provide them with credit monitoring and identity protection products. The product must include $1,000,000 worth of identity theft insurance to cover costs related to identity theft or fraud. The company must also notify consumers and prison facilities within 30 days of future data breaches and notify the FTC of the incidents, the agency said. Violations of the settlement could result in fines of $50,120 for each violation, the FTC said.

Programming

Developers Can't Seem To Stop Exposing Credentials in Publicly Accessible Code (arstechnica.com) 59

Despite more than a decade of reminding, prodding, and downright nagging, a surprising number of developers still can't bring themselves to keep their code free of credentials that provide the keys to their kingdoms to anyone who takes the time to look for them. From a report: The lapse stems from immature coding practices in which developers embed cryptographic keys, security tokens, passwords, and other forms of credentials directly into the source code they write. The credentials make it easy for the underlying program to access databases or cloud services necessary for it to work as intended. [...]

The number of studies published since the revelations underscored just how common the practice had been and remained in the years immediately following Uber's cautionary tale. Sadly, the negligence continues even now. Researchers from security firm GitGuardian this week reported finding almost 4,000 unique secrets stashed inside a total of 450,000 projects submitted to PyPI, the official code repository for the Python programming language. Nearly 3,000 projects contained at least one unique secret. Many secrets were leaked more than once, bringing the total number of exposed secrets to almost 57,000.
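To make the failure mode concrete, here is a minimal Python sketch of the kind of hardcoded credential that GitGuardian-style scanners flag, plus a naive pattern check. The key values are placeholders (one is the example access key from AWS's own documentation), and the two regexes are simplified stand-ins for the hundreds of detectors and entropy checks real tools use.

```python
import re

# Hardcoded credentials of the kind described above. The values are
# made-up placeholders, but the pattern (a secret embedded in source)
# is exactly what leaks into public repositories and package indexes.
AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"  # example key from AWS docs
DATABASE_URL = "postgres://admin:hunter2@db.internal:5432/prod"

# Naive patterns loosely modeled on what secret scanners look for;
# real tools use far more rules plus entropy checks and validation.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "url_with_password": re.compile(r"[a-z]+://[^/\s:]+:[^@\s]+@"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, match) pairs for any secret-like strings found."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits

if __name__ == "__main__":
    source = open(__file__, encoding="utf-8").read()
    for rule, match in scan(source):
        print(f"possible secret ({rule}): {match}")
```

Running the script against itself prints the two planted "secrets," which is essentially what happens, at much larger scale, when a scanner walks 450,000 PyPI packages.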

Cloud

How Amazon Is Going After Microsoft's Cloud Computing Ambitions (bloomberg.com) 11

Amazon is the driving force behind a trio of advocacy groups working to thwart Microsoft's growing ambition to become a major cloud computing contractor for governments, a Bloomberg analysis shows. From the report: The groups -- the Cloud Infrastructure Services Providers in Europe (CISPE), the Coalition for Fair Software Licensing and the Alliance for Digital Innovation -- want to convince policymakers that Microsoft has improperly locked customers into Azure, its cloud computing service, choking off its rivals and hindering the advancement of technology within the government and beyond. These groups have dozens of members. But Amazon is the biggest funder for two of them and the largest company, measured by revenue, that funds another.

Spokespeople for the groups say no single company determines their agendas. But according to a Bloomberg News review of tax filings, documents and interviews with people familiar with the three groups' operations, Amazon Web Services plays a direct role in shaping their efforts in ways that would boost the cloud giant. Through aggressive lobbying of policymakers, these groups want to ensure that customers can use popular Microsoft products like Office Suite or Windows on any cloud computing system -- and, in particular, on Amazon Web Services, the world's number one cloud infrastructure provider and the retail giant's top profit driver.

To hammer that message, they've filed complaints, lobbied regulators and sought to shape the views of policymakers probing the cloud market. In one case, an Amazon executive is listed as the author of a public comment to the Federal Trade Commission, as well as testimony and letters to Congress on behalf of the group, according to an analysis of the documents' metadata, revealing the tech giant's role in the lobbying campaign. (The group says the documents reflect the consensus position of its members.) Amazon denied it authored statements for the group.

Microsoft

Microsoft Unveils Its First Custom-Designed AI, Cloud Chips (bloomberg.com) 21

Microsoft unveiled its first homegrown AI chip and cloud-computing processor in an attempt to take more control of its technology and ramp up its offerings in the increasingly competitive market for AI computing. The company also announced new software that lets clients design their own AI assistants. From a report: The Maia 100 chip, announced at the company's annual Ignite conference in Seattle on Wednesday, will provide Microsoft Azure cloud customers with a new way to run AI programs that generate content. Microsoft is already testing the chip with its Bing and Office AI products, said Rani Borkar, a vice president who oversees Azure's chip unit. Microsoft's main AI partner, ChatGPT maker OpenAI, is also testing the processor. Both Maia and the server chip, Cobalt, will debut in some Microsoft data centers early next year.

Microsoft's multi-year investment shows how critical chips have become to gaining an edge in both AI and the cloud. Making them in-house lets companies wring performance and price benefits from the hardware. The initiative also could insulate Microsoft from becoming overly dependent on any one supplier, a vulnerability currently underscored by the industrywide scramble for Nvidia's AI chips. Microsoft's push into processors follows similar moves by cloud rivals. Amazon.com Inc. acquired a chip maker in 2015 and sells services built on several kinds of cloud and AI chips. Google began letting customers use its AI accelerator processors in 2018.

Bug

Intel Fixes High-Severity CPU Bug That Causes 'Very Strange Behavior' (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: Intel on Tuesday pushed microcode updates to fix a high-severity CPU bug that has the potential to be maliciously exploited against cloud-based hosts. The flaw, affecting virtually all modern Intel CPUs, causes them to "enter a glitch state where the normal rules don't apply," Tavis Ormandy, one of several security researchers inside Google who discovered the bug, reported. Once triggered, the glitch state results in unexpected and potentially serious behavior, most notably system crashes that occur even when untrusted code is executed within a guest account of a virtual machine, which, under most cloud security models, is assumed to be safe from such faults. Escalation of privileges is also a possibility.

The bug, tracked under the common name Reptar and the designation CVE-2023-23583, is related to how affected CPUs manage prefixes, which change the behavior of instructions sent by running software. Intel x64 decoding generally allows redundant prefixes -- meaning those that don't make sense in a given context -- to be ignored without consequence. During testing in August, Ormandy noticed that the REX prefix was generating "unexpected results" when running on Intel CPUs that support a newer feature known as fast short repeat move, which was introduced in the Ice Lake architecture to fix microcoding bottlenecks. The unexpected behavior occurred when adding the redundant rex.r prefixes to the FSRM-optimized rep mov operation. [...]
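For readers who want to see what a "redundant prefix" looks like at the byte level, the short Python sketch below just prints the encodings involved; it is not Ormandy's proof of concept, executes no machine code, and the specific byte values (0x44 for a rex.r prefix, 0xF3 0xA4 for rep movsb) are standard x86 encodings rather than anything taken from the report.

```python
# Illustrative only: show the byte-level shape of the encoding discussed
# above -- a redundant rex.r prefix in front of a rep movsb. This only
# prints bytes; it does NOT execute the instruction.

REX_R = 0x44  # REX prefix with only the R bit set; redundant for movsb
REP   = 0xF3  # rep prefix
MOVSB = 0xA4  # movsb opcode

# An ordinary FSRM-friendly copy ...
normal = bytes([REP, MOVSB])
# ... versus the same instruction with redundant rex.r prefixes prepended,
# the pattern the summary says produced "unexpected results" on FSRM CPUs.
trigger = bytes([REX_R, REX_R, REP, MOVSB])

for label, encoding in (("rep movsb", normal), ("rex.r rex.r rep movsb", trigger)):
    print(f"{label:24s} -> {encoding.hex(' ')}")
```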

Intel's official bulletin lists two classes of affected products: those that were already fixed and those that are fixed using microcode updates released Tuesday. An exhaustive list of affected CPUs is available here. As usual, the microcode updates will be available from device or motherboard manufacturers. While individuals aren't likely to face any immediate threat from this vulnerability, they should check with the manufacturer for a fix. People with expertise in x86 instruction and decoding should read Ormandy's post in its entirety. For everyone else, the most important takeaway is this: "However, we simply don't know if we can control the corruption precisely enough to achieve privilege escalation." That means it's not possible for people outside of Intel to know the true extent of the vulnerability severity. That said, anytime code running inside a virtual machine can crash the hypervisor the VM runs on, cloud providers like Google, Microsoft, Amazon, and others are going to immediately take notice.

Earth

Delhi Plans To Unleash Cloud Seeding in Its Battle Against Deadly Smog (wired.com) 35

India's capital, New Delhi, is preparing a new weapon in the fight against deadly air pollution: cloud seeding. From a report: The experiment, which could take place as early as next week, would introduce chemicals like silver iodide into a cloudy sky to create rain and, it's hoped, wash away the fine particulate matter hovering over one of the world's largest cities. The need is desperate. Delhi has already tried traffic restriction measures, multimillion-dollar air filtration towers, and the use of fleets of water-spraying trucks to dissolve the particulate matter in the air -- but to no avail.

The use of cloud seeding, if it goes ahead, would be controversial. "It's not at all a good use of resources because it's not a solution, it's like a temporary relief," says Avikal Somvanshi, a researcher at the Center for Science and Environment in New Delhi. Environmentalists and scientists worry that most of the government's response is focused on mitigating the pollution rather than trying to cut off its source. "There is just no political intent to solve this, that is one of the biggest problems," says Bhavreen Kandhari, an activist and cofounder of Warrior Moms, a network of mothers demanding clean air.

[...] Now, Delhi officials are seeking permission from federal agencies in India to try cloud seeding. The technique involves flying an aircraft to spray clouds with salts like silver or potassium iodide or solid carbon dioxide, also known as dry ice, to induce precipitation. The chemical molecules attach to moisture already in the clouds to form bigger droplets that then fall as rain. China has used artificial rain to tackle air pollution in the past -- but for cloud seeding to work properly, you need significant cloud cover with reasonable moisture content, which Delhi generally lacks during the winter. If weather conditions are favorable, scientists leading the project at the Indian Institute of Technology in Kanpur plan to carry out cloud seeding around November 20.

AI

Nvidia Upgrades Processor as Rivals Challenge Its AI Dominance (bloomberg.com) 39

Nvidia, the world's most valuable chipmaker, is updating its H100 artificial intelligence processor, adding more capabilities to a product that has fueled its dominance in the AI computing market. From a report: The new model, called the H200, will get the ability to use high-bandwidth memory, or HBM3e, allowing it to better cope with the large data sets needed for developing and implementing AI, Nvidia said Monday. Amazon's AWS, Alphabet's Google Cloud and Oracle's Cloud Infrastructure have all committed to using the new chip starting next year.

The current version of the Nvidia processor -- known as an AI accelerator -- is already in famously high demand. It's a prized commodity among technology heavyweights like Larry Ellison and Elon Musk, who boast about their ability to get their hands on the chip. But the product is facing more competition: AMD is bringing its rival MI300 chip to market in the fourth quarter, and Intel claims that its Gaudi 2 model is faster than the H100. With the new product, Nvidia is trying to keep up with the size of data sets used to create AI models and services, it said. Adding the enhanced memory capability will make the H200 much faster at bombarding software with data -- a process that trains AI to perform tasks such as recognizing images and speech.

Businesses

Is Capitalism Dead? Yanis Varoufakis Argues Capitalists are Now Vassals to 'Techno-Feudalists' (theconversation.com) 148

Greek economist/politician Yanis Varoufakis "was briefly Greek finance minister in 2015," remembers the Conversation. Now his new book asks the question, "What killed capitalism," with the title's first word providing an answer.

"Techno-feudalism." Varoufakis argues that we no longer live in a capitalist society... "Today, capitalist relations remain intact, but techno-feudalist relations have begun to overtake them," writes Varoufakis. Traditional capitalists, he proposes, have become "vassal capitalists". They are subordinate and dependent on a new breed of "lords" — the Big Tech companies — who generate enormous wealth via new digital platforms. A new form of algorithmic capital has evolved — what Varoufakis calls "cloud capital" — and it has displaced "capitalism's two pillars: markets and profits".

Markets have been "replaced by digital trading platforms which look like, but are not, markets". The moment you enter amazon.com "you exit capitalism" and enter something that resembles a "feudal fief": a digital world belonging to one man and his algorithm, which determines what products you will see and what products you won't see. If you are a seller, the platform will determine how you can sell and which customers you can approach. The terms in which you interact, share information and trade are dictated by an "algo" that "works for [Jeff Bezos'] bottom line"...

Access to the "digital fief" comes at the cost of exorbitant rents. Varoufakis notes that many third-party developers on the Apple store, for example, pay 30% "on all their revenues", while Amazon charges its sellers "35% of revenues". This, he argues, is like a medieval feudal lord sending round the sheriff to collect a large chunk of his serfs' produce because he owns the estate and everything within it.

There is "no disinterested invisible hand of the market" here. The Big Tech platforms are exempted from free-market competition.

And in the meantime, users are unknowingly training their algorithms for them — so "In this interaction, we are all high-tech 'cloud serfs'... [T]he 'cloud capital' we are generating for them all the time increases their capacity to generate yet more wealth, and thus increases their power — something we have only begun to realise." Approximately 80% of the income of traditional capitalist conglomerates goes to salaries and wages, according to Varoufakis, while Big Tech's workers, in contrast, collect "less than 1% of their firms' revenues"... For Varoufakis, we are not just living through a tech revolution, but a tech-driven economic revolution. He challenges us to come to terms with just what has happened to our economies — and our societies — in the era of Big Tech and Big Finance.
Thanks to Slashdot reader ZipNada for sharing the article.
Science

Oldest, Massive Black Hole Discovered With JWST Data. Confirms 'Collapsed Gas Cloud' Theory (nasa.gov) 18

"Scientists have discovered the oldest black hole yet," reports the CBC, calling it "a cosmic beast formed a mere 470 million years after the Big Bang."

"The findings, published Monday, confirm what until now were theories that supermassive black holes existed at the dawn of the universe..." Given the universe is 13.7 billion years old, that puts the age of this black hole at 13.2 billion years. Even more astounding to scientists, this black hole is a whopper — 10 times bigger than the black hole in our own Milky Way. It's believed to weigh anywhere from 10 to 100 per cent the mass of all the stars in its galaxy, said lead author Akos Bogdan of the Harvard-Smithsonian Center for Astrophysics. That is nowhere near the miniscule ratio of the black holes in our Milky Way and other nearby galaxies — an estimated 0.1 per cent, he noted. "It's just really early on in the universe to be such a behemoth," said Yale University's Priyamvada Natarajan, who took part in the study published in the journal Nature Astronomy. A companion article appeared in the Astrophysical Journal Letters...

The researchers believe the black hole formed from colossal clouds of gas that collapsed in a galaxy next door to one with stars. The two galaxies merged, and the black hole took over.

The researchers combined data from NASA's Chandra X-ray Observatory and NASA's James Webb Space Telescope, reports NASA: "We needed Webb to find this remarkably distant galaxy and Chandra to find its supermassive black hole," said Akos Bogdan of the Center for Astrophysics/Harvard & Smithsonian who leads a new paper in the journal Nature Astronomy describing these results. "We also took advantage of a cosmic magnifying glass that boosted the amount of light we detected." This magnifying effect is known as gravitational lensing...

This discovery is important for understanding how some supermassive black holes can reach colossal masses soon after the big bang. Do they form directly from the collapse of massive clouds of gas, creating black holes weighing between about 10,000 and 100,000 Suns? Or do they come from explosions of the first stars that create black holes weighing only between about 10 and 100 Suns...? Bogdan's team has found strong evidence that the newly discovered black hole was born massive... The large mass of the black hole at a young age, plus the amount of X-rays it produces and the brightness of the galaxy detected by Webb, all agree with theoretical predictions in 2017 by co-author Priyamvada Natarajan of Yale University for an "Outsize Black Hole" that directly formed from the collapse of a huge cloud of gas.

"We think that this is the first detection of an 'Outsize Black Hole' and the best evidence yet obtained that some black holes form from massive clouds of gas," said Natarajan. "For the first time we are seeing a brief stage where a supermassive black hole weighs about as much as the stars in its galaxy, before it falls behind." The researchers plan to use this and other results pouring in from Webb and those combining data from other telescopes to fill out a larger picture of the early universe.

Cloud

Microsoft Won't Let You Close OneDrive on Windows Until You Explain Yourself (theverge.com) 245

Microsoft now wants you to explain exactly why you're attempting to close its OneDrive for Windows app before it allows you to do so. From a report: Neowin has spotted that the latest update to OneDrive now includes an annoying dialog box that asks you to select the reason why you're closing the app every single time you attempt to close OneDrive from the taskbar. Closing OneDrive is already buried away and not a simple task, with Microsoft hiding it under a "pause syncing" option when you right-click on OneDrive in the taskbar. But now, the quit option is grayed out until you select a reason for quitting OneDrive from a drop-down box. Here are the options:
1. I don't want OneDrive running all the time
2. I don't know what OneDrive is
3. I don't use OneDrive
4. I'm trying to fix a problem with OneDrive
5. I'm trying to speed up my computer
6. I get too many notifications
7. Other

Open Source

Meta Taps Hugging Face For Startup Accelerator To Spur Adoption of Open Source AI Models (techcrunch.com) 8

An anonymous reader quotes a report from TechCrunch: Facebook parent Meta is teaming up with Hugging Face and European cloud infrastructure company Scaleway to launch a new AI-focused startup program at the Station F startup megacampus in Paris. The underlying goal of the program is to promote a more "open and collaborative" approach to AI development across the French technology world. The timing of the announcement is notable, coming amid a growing push for regulation and a marked conflict between the "open" and "closed" AI realms. [...]

While Meta itself has been open sourcing its own generative AI models, Hugging Face -- a billion-dollar VC-backed startup in its own right -- has set out its stall as a sort of open source alternative to OpenAI, replete with open alternatives to the likes of ChatGPT and spearheading community projects such as BigScience. So in many ways, Meta and Hugging Face's tie-up today makes a great deal of sense, given their respective stances on the whole "open" versus "closed" AI discussion. "For me, open source AI is the most important topic of the decade as it is the cornerstone toward democratizing ethical AI," Hugging Face CEO Clement Delangue said in a statement.

From today through December 1 (2023), startups can apply to join the new "AI Startup Program" at Station F, with five winners proceeding to the accelerator program that will run from January to June. The chosen startups, selected by a panel of judges from Meta, Hugging Face and French cloud company Scaleway, will have at least one thing in common -- they will be working on projects substantively built on open foundation models, or at the very least can demonstrate a "willingness to integrate these models into their products and services," according to the announcement issued by Meta today. "With the proliferation of foundation models and generative artificial intelligence models, the aim is to bring the economic and technological benefits of open, state-of-the-art models to the French ecosystem," the announcement noted. Indeed, the winning startups will receive mentoring from researchers and engineers at Meta, gain access to Hugging Face's various platforms and tools, and compute resources from Scaleway.

Android

Google-led App Defense Alliance Joins Linux Foundation (techcrunch.com) 17

The App Defense Alliance (ADA), an initiative set up by Google back in 2019 to combat malicious Android apps infiltrating the Play app store, has joined the Joint Development Foundation (JDF), a Linux Foundation project focused on helping organizations working on technical specifications, standards, and related efforts. From a report: The App Defense Alliance had, in fact, already expanded beyond its original Android malware detection roots, covering areas such as malware mitigation, mobile app security assessments (MASA), and cloud app security assessments (CASA). And while its founding members included mobile security firms such as ESET, Lookout and Zimperium, it has ushered in new members through the years including Trend Micro and McAfee. Today's news, effectively, sees ADA join an independent foundation, a move designed to open up the appeal to other big tech companies, such as Facebook parent Meta and Microsoft, both of which are now joining the ADA's steering committee. The ultimate goal is to "improve app security" through fostering greater "collaborative implementation of industry standards," according to a joint statement today.
Microsoft

Microsoft Partners With VCs To Give Startups Free AI Chip Access (techcrunch.com) 4

In the midst of an AI chip shortage, Microsoft wants to give a privileged few startups free access to "supercomputing" resources from its Azure cloud for developing AI models. From a report: Microsoft today announced it's updating its startup program, Microsoft for Startups Founders Hub, to include a no-cost Azure AI infrastructure option for "high-end," Nvidia-based GPU virtual machine clusters to train and run generative models, including large language models along the lines of ChatGPT. Y Combinator and its community of startup founders will be the first to gain access to the clusters in private preview. Why Y Combinator? Annie Pearl, VP of growth and ecosystems, Microsoft, called YC the "ideal initial partner," given its track record working with startups "at the earliest stages."

"We're working closely with Y Combinator to prioritize the asks from their current cohort, and then alumni, as part of our initial preview," Pearl said. "The focus will be on tasks like training and fine-tuning use cases that unblock innovation." It's not the first time Microsoft's attempted to curry favor with Y Combinator startups. In 2015, the company said it would give $500,000 in Azure credits to YC's Winter 2015 batch, a move that at the time was perceived as an effort to draw these startups away from rival clouds. One might argue the GPU clusters for AI training and inferencing are along the same self-serving vein.

AI

Microsoft is Bringing AI Characters To Xbox (theverge.com) 24

Microsoft is partnering with Inworld AI to develop Xbox tools that will allow developers to create AI-powered characters, stories, and quests. From a report: The multiyear partnership will include an "AI design copilot" system that Xbox developers can use to create detailed scripts, dialogue trees, quest lines, and more. "At Xbox, we believe that with better tools, creators can make even more extraordinary games," explains Haiyan Zhang, general manager of gaming AI at Xbox. "This partnership will bring together: Inworld's expertise in working with generative AI models for character development, Microsoft's cutting-edge cloud-based AI solutions including Azure OpenAI Service, Microsoft Research's technical insights into the future of play, and Team Xbox's strengths in revolutionizing accessible and responsible creator tools for all developers." The multiplatform AI toolset will include the AI design copilot for scripts and dialogue, and an AI character engine that can be integrated into games and used to dynamically generate stories, quests, and dialogue.
Red Hat Software

How Red Hat Divided the Open Source Community (msn.com) 191

In Raleigh, North Carolina — the home of Red Hat — local newspaper the News & Observer takes an in-depth look at the "announcement that split the open source software community." (Alternate URL here.) [M]any saw Red Hat's decision to essentially paywall Red Hat Enterprise Linux, or RHEL, as sacrilegious... Red Hat employees were also conflicted about the new policy, [Red Hat Vice President Mike] McGrath acknowledged. "I think a lot of even internal associates didn't fully understand what we had announced and why," he said...

At issue, he wrote, were emerging competitors who copied Red Hat Enterprise Linux, down to even the code's mistakes, and then offered these Red Hat-replicas to customers for free. These weren't community members adding value, he contended, but undercutting rivals. And in a year when Red Hat laid off 4% of its total workforce, McGrath said, the company could not justify allowing this to continue. "I feel that while this was a difficult decision between community and business, we're still on the right side of it," he told the News & Observer. Not everyone agrees...

McGrath offered little consolation to customers who were relying on one-for-one versions of RHEL. They could stay with the downstream distributions, find another provider, or pay for Red Hat. "I think (people) were just so used to the way things work," he said. "There's a vocal group of people that probably need Red Hat's level of support, but simply don't want to pay for it. And I don't really have... there's not much we can tell them."

Since its RHEL decision, Red Hat has secured several prominent partnerships. In September, the cloud-based software company Salesforce moved 200,000 of its systems from the free CentOS Linux to Red Hat Enterprise Linux. The same month, Red Hat announced RHEL would begin to support Oracle's cloud infrastructure. Oracle was one of the few major companies this summer to publicly criticize Red Hat for essentially paywalling its most popular code. On Oct. 24, Red Hat notched another win when the data security firm Cohesity said it would also ditch CentOS Linux for RHEL.

The article delves into the history of Red Hat — and of Linux — before culminating with this quote from McGrath. "I think long gone are the times of that sort of romantic view of hobbyists working in their spare time to build open source. I think there's still room for that — we still have that — but quite a lot of open source is now built from people that are paid full time."

Red Hat likes to point out that 90% of Fortune 500 companies use its services, according to the article. But it also quotes Jonathan Wright, infrastructure team lead at the nonprofit AlmaLinux, as saying that Red Hat played "fast and loose" with the GPL. The newspaper then adds that "For many open source believers, such a threat to its hallowed text isn't forgivable."
Programming

Do Programming Certifications Still Matter? (infoworld.com) 101

With programmers in high demand, InfoWorld asks whether it's really worthwhile for software developers to pursue certifications. "Based on input from those in the field, company executives, and recruiters, the answer is a resounding yes." "The primary benefit of certifications is to verify your skill sets," says Archie Payne, president of the recruiting firm CalTek Staffing... Certifications can be used to "reinforce the experience on your resume or demonstrate competencies beyond what you've done in the workplace in a prior role." Certifications show that you are committed to your field, invested in career growth, and connected to the broader technology landscape, Payne says. "Obtaining certification indicates that you are interested in learning new skills and continuing your learning throughout your career," he says...

In cases where multiple candidates are equally qualified, having a relevant certification can give one candidate an edge over others, says Aleksa Krstic, CTO at Localizely, a provider of a cloud-based translation platform. "When it comes to certifications in general, when we see a junior to mid-level developer armed with programming certifications, it's a big green light for our hiring team," says Michał Kierul, who is CEO of software company INTechHouse.

"It's not just about the knowledge they have gained," Kierul says. "It speaks volumes about their passion, their drive to excel, and their commitment to continuous learning outside their regular work domain. It underscores a key trait we highly value: the desire to grow, learn, and elevate oneself in the world of technology."

AI

Meta's Free AI Isn't Cheap To Use, Companies Say (theinformation.com) 18

Some companies that pay for OpenAI's artificial intelligence have been looking to cut costs with free, open-source alternatives. But these AI customers are realizing that oftentimes open-source tech can actually be more expensive than buying from OpenAI. The Information: Take Andreas Homer and Ebby Amir, co-founders of Cypher, an app that helps people create virtual versions of themselves in the form of a chatbot. Industry excitement this summer about the release of Llama 2, an open-source large language model from Meta Platforms, prompted the duo to test it for their app, leading to a $1,200 bill in August from Google Cloud, Cypher's cloud provider. Then they tried using GPT-3.5 Turbo, an OpenAI model that underpins services such as ChatGPT, and were surprised to see that it cost around $5 per month to handle the same amount of work.

Baseten, a startup that helps developers use open-source LLMs, says its customers report that using Llama 2 out of the box costs 50% to 100% more than for OpenAI's GPT-3.5 Turbo. The open-source option is cheaper only for companies that want to customize an LLM by training it on their data; in that case, a customized Llama 2 model costs about one-fourth as much as a customized GPT-3.5 Turbo model, Baseten found. Baseten also found that OpenAI's most advanced model, GPT-4, is about 15 times more expensive than Llama 2, but typically it's only needed for the most advanced generative AI tasks like code generation rather than the ones most large enterprises want to incorporate.
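The economics here come down to billing models: hosted APIs charge per token, while a rented GPU instance bills by the hour whether it is busy or idle. The back-of-envelope Python sketch below illustrates that crossover; every price in it is an invented placeholder, not OpenAI's, Google Cloud's, or the figures reported by Cypher or Baseten.

```python
# Back-of-envelope sketch of the comparison described above: pay-per-token
# API pricing versus renting a GPU instance by the hour. All numbers are
# invented placeholders for illustration, not real prices.

api_price_per_1k_tokens = 0.002   # hypothetical $/1K tokens for a hosted model
gpu_price_per_hour      = 1.50    # hypothetical $/hour for a rented GPU instance
hours_per_month         = 730

def monthly_cost(tokens_per_month: int) -> tuple[float, float]:
    """Return (api_cost, self_hosted_cost) in dollars for a month of traffic."""
    api = tokens_per_month / 1000 * api_price_per_1k_tokens
    # A rented instance bills for every hour whether or not it is busy.
    self_hosted = gpu_price_per_hour * hours_per_month
    return api, self_hosted

for tokens in (1_000_000, 100_000_000, 1_000_000_000):
    api, hosted = monthly_cost(tokens)
    cheaper = "API" if api < hosted else "self-hosted"
    print(f"{tokens:>13,} tokens/month: API ${api:,.2f} vs GPU ${hosted:,.2f} -> {cheaper} cheaper")
```

At low traffic the per-token API wins easily, which matches the experience described above; only at sustained high volume or with heavy customization does a dedicated instance start to pay off.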

Cloud

Matic's Robot Vacuum Maps Spaces Without Sending Data To the Cloud (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: A relatively new venture founded by Navneet Dalal, an ex-Google research scientist, Matic, formerly known as Matician, is developing robots that can navigate homes to clean "more like a human," as Dalal puts it. Matic today revealed that it has raised $29.5 million, inclusive of a $24 million Series A led by a who's who of tech luminaries, including GitHub co-founder Nat Friedman, Stripe co-founders John and Patrick Collison, Quora CEO Adam D'Angelo and Twitter co-founder and Block CEO Jack Dorsey.

Dalal co-founded Matic in 2017 with Mehul Nariyawala, previously a lead product manager at Nest, where he oversaw Nest's security camera portfolio. [...] Early on, Matic focused on building robot vacuums -- but not because Dalal, who serves as the company's CEO, saw Matic competing with the iRobots and Ecovacs of the world. Rather, floor-cleaning robots provided a convenient means to thoroughly map indoor spaces, he and Nariyawala believed. "Robot vacuums became our initial focus due to their need to cover every inch of indoor surfaces, making them ideal for mapping," Dalal said. "Moreover, the floor-cleaning robot market was ripe for innovation." [...] "Matic was inspired by busy working parents who want to live in a tidy home, but don't want to spend their limited free time cleaning," Dalal said. "It's the first fully autonomous floor cleaning robot that continuously learns and adapts to users' cleaning preferences without ever compromising their privacy."

There are a lot of bold claims in that statement. But on the subject of privacy, Matic does indeed -- or at least claims to -- ensure data doesn't leave a customer's home. All processing happens on the robot (on hardware "equivalent to an iPhone 6," Dalal says), and mapping and telemetry data is saved locally, not in the cloud, unless users opt in to sharing. Matic doesn't even require an internet connection to get up and running -- only a smartphone paired over a local Wi-Fi network. The Matic vacuum understands an array of voice commands and gestures for fine-grained control. And -- unlike some robot vacuums in the market -- it can pick up cleaning tasks where it left off in the event that it's interrupted (say, by a wayward pet). Dalal says that Matic can also prioritize areas to clean depending on factors like the time of day and nearby rooms and furniture.
Dalal insists that all this navigational lifting can be accomplished with cameras alone. "In order to run all the necessary algorithms, from 3D depth to semantics to ... controls and navigation, on the robot, we had to vertically integrate and hyper-optimize the entire codebase," Dalal said, "from the modifying kernel to building a first-of-its-kind iOS app with live 3D mapping. This enables us to deliver an affordable robot to our customers that solves a real problem with full autonomy."

The robot won't be cheap. It starts at $1,795 but will be available for a limited time at a discounted price of $1,495.
Microsoft

Microsoft Overhauling Its Software Security After Major Azure Cloud Attacks (theverge.com) 40

An anonymous reader shares a report: Microsoft has had a rough few years of cybersecurity incidents. It found itself at the center of the SolarWinds attack nearly three years ago, one of the most sophisticated cybersecurity attacks we've ever seen. Then, 30,000 organizations' email servers were hacked in 2021 thanks to a Microsoft Exchange Server flaw. If that weren't enough already, Chinese hackers breached US government emails via a Microsoft cloud exploit earlier this year. Something had to give.

Microsoft is now announcing a huge cybersecurity effort, dubbed the Secure Future Initiative (SFI). This new approach is designed to change the way Microsoft designs, builds, tests, and operates its software and services today. It's the biggest change to security efforts inside Microsoft since the company announced its Security Development Lifecycle (SDL) in 2004 after Windows XP fell victim to a huge Blaster worm attack that knocked PCs offline in 2003. That push came just two years after co-founder Bill Gates had called for a trustworthy computing initiative in an internal memo.

Microsoft now plans to use automation and AI during software development to improve the security of its cloud services, cut the time it takes to fix cloud vulnerabilities, enable better security settings out of the box, and harden its infrastructure to protect against encryption keys falling into the wrong hands. In an internal memo to Microsoft's engineering teams today, the company's leadership has outlined its new cybersecurity approach. It comes just months after Microsoft was accused of "blatantly negligent" cybersecurity practices related to a major breach that targeted its Azure platform. Microsoft has faced mounting criticism of its handling of a variety of cybersecurity issues in recent years.

AI

New AWS Service Lets Customers Rent Nvidia GPUs For Quick AI Projects 7

An anonymous reader quotes a report from TechCrunch: More and more companies are running large language models, which require access to GPUs. The most popular of those by far are from Nvidia, making them expensive and often in short supply. Renting a long-term instance from a cloud provider doesn't necessarily make sense when you only need access to these costly resources for a single job. To help solve that problem, AWS launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML today, enabling customers to buy access to these GPUs for a defined amount of time, typically to run some sort of AI-related job such as training a machine learning model or running an experiment with an existing model.

The product gives customers access to NVIDIA H100 Tensor Core GPU instances in cluster sizes of one to 64 instances, with 8 GPUs per instance. They can reserve time for up to 14 days in 1-day increments, up to 8 weeks in advance. When the timeframe is over, the instances shut down automatically. The new product lets users sign up for the number of instances they need for a defined block of time, much like reserving a hotel room for a certain number of days (as the company put it). From the customer's perspective, they will know exactly how long the job will run, how many GPUs they'll use, and how much it will cost up front, giving them cost certainty. As users sign up for the service, it displays the total cost for the timeframe and resources. Users can dial that up or down, depending on their resource appetite and budget, before agreeing to buy. The new feature is generally available starting today in the AWS US East (Ohio) region.
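For a rough sense of how this looks from code, here is a sketch using boto3. The operation names (describe_capacity_block_offerings, purchase_capacity_block) follow the EC2 Capacity Blocks API as announced, but the exact parameters, the p5.48xlarge instance type, and all values are assumptions for illustration; check the current boto3 documentation before relying on any of it.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Sketch only: operation and parameter names are based on the EC2 Capacity
# Blocks API as announced; verify against current boto3 docs before use.
ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio), per the article

start = datetime.now(timezone.utc) + timedelta(days=7)

# Ask for offerings matching the desired GPU cluster and window.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",   # assumed H100-based instance type
    InstanceCount=2,              # 2 instances x 8 GPUs = 16 H100s
    CapacityDurationHours=48,     # two days, in 1-day increments
    StartDateRange=start,
    EndDateRange=start + timedelta(days=14),
)

# Take the first offering and purchase it; real code would compare prices.
offering = offerings["CapacityBlockOfferings"][0]
print("Upfront fee:", offering.get("UpfrontFee"))

purchase = ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offering["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)
print("Reservation:", purchase["CapacityReservation"]["CapacityReservationId"])
```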
