AI

Europe Spins Up AI Research Hub To Apply Accountability Rules on Big Tech (techcrunch.com) 8

As the European Union gears up to enforce a major reboot of its digital rulebook in a matter of months, a new dedicated research unit is being spun up to support oversight of large platforms under the bloc's flagship Digital Services Act (DSA). From a report: The European Centre for Algorithmic Transparency (ECAT), which was officially inaugurated in Seville, Spain, today, is expected to play a major role in interrogating the algorithms of mainstream digital services -- such as Facebook, Instagram and TikTok.

ECAT is embedded within the EU's existing Joint Research Centre (JRC), a long-established science facility that conducts research in support of a broad range of EU policymaking, from climate change and crisis management to taxation and health sciences. But while the ECAT is embedded within the JRC -- and temporarily housed in the same austere-looking building (Seville's World Trade Centre), ahead of getting more open-plan bespoke digs in the coming years -- it has a dedicated focus on the DSA, supporting lawmakers in gathering evidence to build cases against platforms that don't take their obligations seriously. Commission officials describe the function of ECAT as identifying "smoking guns" to drive enforcement of the DSA -- say, for example, an AI-based recommender system that can be shown to be serving discriminatory content despite the platform in question claiming to have taken steps to "de-bias" output -- with the unit's researchers tasked with producing hard evidence to help the Commission build cases for breaches of the new digital rulebook.

Businesses

Meta Is About To Start Its Next Round of Layoffs (vox.com) 46

An anonymous reader quotes a report from Vox: Meta will conduct another mass round of layoffs on Wednesday, several sources working at the company told Vox. In an internal memo posted to a Meta employee message board on Tuesday evening and viewed by Vox, the company told employees that the layoffs will start on Wednesday and will impact a wide range of technical teams including those working on Facebook, Instagram, Reality Labs, and WhatsApp. A Meta spokesperson confirmed the memo was sent to employees but declined to comment further. The cuts could be in the range of 4,000 jobs, one source said.

"This will be a difficult time as we say goodbye to friends and colleagues who have contributed so much to Meta," Lori Goler, Meta's head of people, said in the memo. Meta employees in North America will be notified by email between 4 am and 5 am PT Wednesday morning, according to Goler's note. Outside of North America, the timelines will vary country to country, and some countries will not be impacted. Meta is also asking employees in North America, whose jobs allow it, to work from home on Wednesday to give people "space to process the news."
"The layoffs come after Meta CEO Mark Zuckerberg said in March that the company would cut 10,000 more jobs in the coming months, after already cutting 11,000 in November," notes Vox.
Microsoft

Microsoft Readies AI Chip as Machine Learning Costs Surge (theinformation.com) 12

After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language. The Information: The software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house chips for AI. The chips -- which are designed for training software such as large-language models, along with supporting inference, when the models use the intelligence they acquire in training to respond to new data -- could also relieve a shortage of the specialized computers that can handle the processing needed for AI software. That shortage, reflecting the fact that primarily just one company, Nvidia, makes such chips, is felt across tech. It has forced Microsoft to ration its computers for some internal teams, The Information has reported.
Facebook

US Tech Giants Voice Concern Over India's Fact-Checking Rule (techcrunch.com) 37

The Asia Internet Coalition, an influential industry organization representing technology giants such as Facebook, Google, Apple, and Amazon, has voiced concerns over a recent amendment to India's IT rules, saying the changes grant the local government expansive content removal authority without implementing adequate procedural safeguards. From a report: India recently updated its IT rules, barring social media platforms such as Facebook and Twitter from disseminating false or misleading information about the government's business affairs. Under the new regulations, these firms must rely on New Delhi's own fact-checking unit to verify claims. The amendments lack the "sufficient procedural safeguards" to protect people's fundamental rights to access information, said Jeff Paine, Managing Director of AIC, in a statement Monday.
AI

How Should AI Be Regulated? (nytimes.com) 153

A New York Times opinion piece argues people in the AI industry "are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down." But how? What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That's where the government comes in — or so they hope... [A]fter talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I'd prioritize.

The first is the question — and it is a question — of interpretability. As I said above, it's not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand... The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It's ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.

The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet. Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast.

The piece also recommends that AI-design companies "bear at least some liability" for what their models do. But what legislation should we see — and what legislation will we see? "One thing regulators shouldn't fear is imperfect rules that slow a young industry," the piece argues.

"For once, much of that industry is desperate for someone to help slow it down."
Advertising

Tax-Filing Sites Ask to Blab Your Financial Info to 'Business Partners' (msn.com) 34

Online tax-filing services from TurboTax and H&R Block "want to blab your tax return secrets," warns the Washington Post. "Why? To help them make more money." If you prepare your taxes online with TurboTax or H&R Block software, at some point you'll see a message that I found confusing. "We can help you do more," TurboTax says. In this case, that "help" is funneling the private information from your tax return to Intuit — the company that owns TurboTax, Credit Karma and accounting software QuickBooks. H&R Block offers to "personalize your H&R Block experience."

If you say yes, you're going to see email and other marketing from Intuit and H&R Block or its business partners that are tailored to what's in your tax return.

That might include how much money you make, how much you owe in student loans, the size of your tax return and your charitable contributions. For example, a credit card company might pay Intuit's Credit Karma to show offers to high-income people. Intuit knows that information from your tax return. The Washington Post technology columnist Geoffrey A. Fowler wrote last year about how these two companies grab for your secret tax return information. He dubbed it "the Facebook-ization of personal finance."

In a way, the tax prep companies are more aggressive than Facebook. What they're doing is mission creep. You might already be paying TurboTax and H&R Block to prepare or file your tax return. Now they also want your permission to pass along your secrets to make even more money off you.

Censorship

India Says New IT Fact-Checking Unit Will Not Censor Journalism 27

A proposed Indian government unit to fact-check news on social media is not about censoring journalism, nor will it have any impact on media reportage, a federal minister said on Friday. Reuters reports: Recently amended IT regulation requires online platforms like Meta's Facebook and Twitter to "make reasonable efforts" to not "publish, share or host" any information relating to the government that is "fake, false or misleading." Rajeev Chandrasekhar, India's minister of state for IT, said in an online discussion it was "not true" that the government-appointed unit, which press freedom advocates strongly oppose, was aimed at "censoring journalism." The Editors Guild of India last week described the move as draconian and akin to censorship.
Businesses

Mass Layoffs and Absentee Bosses Create a Morale Crisis At Meta (nytimes.com) 54

An anonymous reader quotes a report from the New York Times: Mark Zuckerberg, Meta's chief executive, has declared that 2023 will be the "year of efficiency" at his company. So far, efficiency has translated into mass layoffs. He has conducted two rounds of cuts over the past six months, with two more to come; these will eliminate more than 21,000 people. Mr. Zuckerberg is also closing 5,000 open positions, which amounts to 30 percent of his company's work force. At the same time, some of Meta's top executives have moved away and are managing large parts of the Silicon Valley company from their new homes in places like London and Tel Aviv. The layoffs and absentee leadership, along with concerns that Mr. Zuckerberg is making a bad bet on the future, have devastated employee morale at Meta, according to nine current and former employees, as well as messages reviewed by The New York Times.

Employees at Meta, which not long ago was one of the most desirable workplaces in Silicon Valley, face an increasingly precarious future. The company's stock price has dropped 43 percent from its peak 19 months ago. More layoffs, Mr. Zuckerberg has said on his Facebook page, are coming this month. Some of those cuts could be in engineering groups, which would have been unthinkable before the trouble started last year, two employees said. "So many of the employees feel like they're in limbo right now," said Erin Sumner, a global director of human resources at DeleteMe, who was laid off from Facebook in November. "They're saying it's 'Hunger Games' meets 'Lord of the Flies,' where everyone is trying to prove their worth to management."

Meta, which owns Facebook, Instagram and WhatsApp, is not the only big tech company that has hit the brakes on spending. Amazon, Microsoft, Google, Salesforce and others have laid off thousands of workers in recent months, shed office space, dropped perks and pulled back from experimental initiatives. But Meta appears to face the most challenges. Last year, the company reported consecutive quarters of declining revenue -- a first since it became a public company in 2012.

Censorship

The Open Source VPN Out-Maneuvering Russian Censorship (wired.com) 16

An anonymous reader quotes a report from Wired: The Russian government has banned more than 10,000 websites for content about the war in Ukraine since Moscow launched the full-scale invasion in February 2022. The blacklist includes Facebook, Twitter, Instagram, and independent news outlets. Over the past year, Russians living inside the country have turned to censorship circumvention tools such as VPNs to pierce through the information blockade. But as dozens of virtual private networks get blocked, leaving users scrambling to maintain their access to free information, local activists and developers are coming up with new solutions. One of them is Amnezia VPN, a free, open source VPN client.

"We even do not advertise and promote it, and new users are still coming by the hundreds every day," says Mazay Banzaev, Amnezia VPN's founder. Unlike commercial VPNs that route users through company servers, which can be blocked, Amnezia VPN makes it simple for users to buy and set up their own servers. This allows them to choose their own IP address and use protocols that are harder to block. "More than half of the commercial VPNs in Russia have been blocked because it's easy enough to block them: They do not block them by protocols, but by IP addresses," says Banzaev. "[Amnezia] is an order of magnitude more resilient than a typical commercial VPN." Amnezia VPN is similar to Outline, a free and open source tool developed by Jigsaw, a subsidiary of Google. Amnezia was created in 2020 during a hackathon supported by Russian digital rights organization Roskomsvoboda. Even then, "it was clear that things were moving toward stricter censorship," says Banzaev. [...]

It is unclear how many users the service has, since the organization doesn't have a way to monitor user numbers, Banzaev says. However, Amnezia offers a Telegram bot called AmneziaFree, which shares VPN configurations that help users access blocked platforms and news; it has almost 100,000 users. The bot is currently struggling with overload, and users are complaining about spotty service. Banzaev says the Amnezia team is working to add new servers on a limited budget, and that they are also working on a new version of the service.
"Amnezia is not only used in Russia," notes Wired. "The service has spread to Turkmenistan, Iran, China, and other countries where users have been struggling with free access to the web."
Social Networks

Arkansas House Wants You To Show ID To Use Social Media (arktimes.com) 42

With no discussion, the Arkansas House of Representatives overwhelmingly approved a bill that would require social media users in The Natural State to verify they're 18 years old or older to use the platforms. Arkansas Times reports: The proposal, backed by Gov. Sarah Sanders, is aimed at shielding minors from the harmful effects of social media. Young folks could use the platforms, but only if parents provide consent. Senate Bill 396, sponsored by Sen. Tyler Dees (R-Springdale) and Rep. Jon Eubanks (R-Paris), would require social media companies including Facebook, Instagram, Twitter and TikTok to contract with third-party companies to perform age verification. Users would have to provide the third-party company with a digital driver's license. Dees also sponsored a bill, now law, that requires anyone who wants to watch online pornography to verify they're an adult.

The social media bill squeaked through the Senate with 18 yes votes, the bare minimum, but passed the House 82-10 with four voting present (same as no). No one asked any questions of Eubanks -- who assured his colleagues that Facebook had "the AI and algorithms" to keep track of what users had parental consent without holding on to sensitive data -- but because it was amended (to among other things exempt LinkedIn, the most boring social media platform), the bill has to go back to the Senate, where perhaps it will meet some resistance.
Utah's governor signed two bills into law last month requiring companies like Meta, Snap and TikTok to get parents' permission before teens could create accounts on their platforms. "The laws also require curfew, parental controls and age verification features," adds Engadget.
AI

New AI Model Can 'Cut Out' Any Object Within an Image (arstechnica.com) 19

Meta has announced an AI model called the Segment Anything Model (SAM) that can identify individual objects in images and videos, even those not encountered during training. From a report: According to a blog post from Meta, SAM is an image segmentation model that can respond to text prompts or user clicks to isolate specific objects within an image. Image segmentation is a process in computer vision that involves dividing an image into multiple segments or regions, each representing a specific object or area of interest. The purpose of image segmentation is to make an image easier to analyze or process. Meta also sees the technology as being useful for understanding webpage content, augmented reality applications, image editing, and aiding scientific study by automatically localizing animals or objects to track on video.

Typically, Meta says, creating an accurate segmentation model "requires highly specialized work by technical experts with access to AI training infrastructure and large volumes of carefully annotated in-domain data." By creating SAM, Meta hopes to "democratize" this process by reducing the need for specialized training and expertise, which it hopes will foster further research into computer vision. In addition to SAM, Meta has assembled a dataset it calls "SA-1B" that includes 11 million images licensed from "a large photo company" and 1.1 billion segmentation masks produced by its segmentation model. Meta will make SAM and its dataset available for research purposes under an Apache 2.0 license. Currently, the code (without the weights) is available on GitHub, and Meta has created a free interactive demo of its segmentation technology.
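The "segments" the summary describes are per-pixel masks. A minimal NumPy sketch (a toy illustration of the mask concept, not SAM's actual API) of how a boolean segmentation mask "cuts out" an object from an image array:

```python
import numpy as np

# Toy 6x6 single-channel "image": an object (value 9) on a background (value 1)
image = np.ones((6, 6), dtype=np.uint8)
image[2:4, 2:5] = 9

# A segmentation mask is a boolean array marking the object's pixels;
# a model like SAM predicts such masks from a click or text prompt
mask = image == 9

# "Cut out" the object: keep masked pixels, zero out everything else
cutout = np.where(mask, image, 0)

print(int(mask.sum()))   # number of pixels assigned to the object
```

SAM's contribution is producing masks like this for arbitrary objects in natural images, including classes never seen in training.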

Facebook

India To Require Social Media Firms Rely on Government's Own Fact Checking (techcrunch.com) 48

India amended its IT law on Thursday to prohibit Facebook, Twitter and other social media firms from publishing, hosting or sharing false or misleading information about "any business" of the government, and said the firms will be required to rely on New Delhi's own fact-check unit to determine the authenticity of any claim -- a blow to many American giants that identify the South Asian market as their largest by users. From a report: Failure to comply with the rule, which also impacts internet service providers such as Jio and Airtel, risks the firms losing their safe harbour protections. The rule, first proposed in January this year, gives a unit of the government arbitrary and overbroad powers to determine the authenticity of online content and bypasses the principles of natural justice, said New Delhi-headquartered digital rights group Internet Freedom Foundation.
Electronic Frontier Foundation

'The Broad, Vague RESTRICT Act Is a Dangerous Substitute For Comprehensive Data Privacy Legislation' (eff.org) 76

The recently introduced RESTRICT Act, otherwise known as the "TikTok ban," is a dangerous substitute for comprehensive data privacy legislation, writes the Electronic Frontier Foundation in a blog post. From the post: As we wrote in our initial review of the bill, the RESTRICT Act would authorize the executive branch to block 'transactions' and 'holdings' of 'foreign adversaries' that involve 'information and communication technology' and create 'undue or unacceptable risk' to national security and more. We've explained our opposition to the RESTRICT Act and urged everyone who agrees to take action against it. But we've also been asked to address some of the concerns raised by others. We do that here in this post. At its core, RESTRICT would exempt certain information services from the federal statute, known as the Berman Amendments, which protects the free flow of information in and out of the United States and supports the fundamental freedom of expression and human rights concerns. RESTRICT would give more power to the executive branch and remove many of the commonsense restrictions that exist under the Foreign Intelligence Surveillance Act (FISA) and the aforementioned Berman Amendments. But S. 686 also would do a lot more.

EFF opposes the bill, and encourages you to reach out to your representatives to ask them not to pass it. Our reasons for opposition are primarily that this bill is being used as a cudgel to protect data from foreign adversaries, but under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. Separately, handing relatively unchecked power over to the executive branch to make determinations about what sort of information technologies and technology services are allowed to enter the U.S. is dangerous. If Congress is concerned about foreign powers collecting our data, it should focus on comprehensive consumer data privacy legislation that will have a real impact, and protect our data no matter what platform it's on -- TikTok, Facebook, Twitter, or anywhere else that profits from our private information. That's why EFF supports such consumer data privacy legislation. Foreign adversaries won't be able to get our data from social media companies if the social media companies aren't allowed to collect, retain, and sell it in the first place.
EFF says it's not clear if the RESTRICT Act will even result in a "ban" on TikTok. It does, however, have potential to punish people for using a VPN to access TikTok if it is restricted. In conclusion, the group says the bill is similar to a surveillance bill and is "far too broad in the power it gives to investigate potential user data."
Facebook

Meta To Debut Ad-Creating Generative AI this Year, CTO Says (nikkei.com) 29

Facebook owner Meta intends to commercialize its proprietary generative artificial intelligence by December, joining Google in finding practical applications for the tech. From a report: The company, which began full-scale AI research in 2013, stands out along with Google in the number of studies published. "We've been investing in artificial intelligence for over a decade, and have one of the leading research institutes in the world," Andrew Bosworth, Meta's chief technology officer, told Nikkei in an exclusive interview on Wednesday in Tokyo. "We certainly have a large research organization, hundreds of people." Meta announced in February that it would establish a new organization to develop generative AI, but this is the first time it has indicated a timeline for commercialization. The technology, which can instantly create sentences and graphics, has already been commercialized by ChatGPT creator OpenAI of the U.S. But Bosworth insists Meta remains on the technology's cutting edge.

"We feel very confident that ... we are at the very forefront," he said. "Quite a few of the techniques that are in large language model development were pioneered [by] our teams. "[I] expect we'll start seeing some of them [commercialization of the tech] this year. We just created a new team, the generative AI team, a couple of months ago; they are very busy. It's probably the area that I'm spending the most time [in], as well as Mark Zuckerberg and [Chief Product Officer] Chris Cox." Bosworth believes Meta's artificial intelligence can improve an ad's effectiveness partly by telling the advertiser what tools to use in making it. He said that instead of a company using a single image in an advertising campaign, it can "ask the AI, 'Make images for my company that work for different audiences.' And it can save a lot of time and money."

Privacy

Alcohol Recovery Startups Shared Patients' Private Data With Advertisers (techcrunch.com) 46

An anonymous reader quotes a report from TechCrunch: For years, online alcohol recovery startups Monument and Tempest were sharing with advertisers the personal information and health data of their patients without their consent. Monument, which acquired Tempest in 2022, confirmed the extensive years-long leak of patients' information in a data breach notification filed with California's attorney general last week, blaming their use of third-party tracking systems developed by ad giants including Facebook, Google, Microsoft and Pinterest. When reached for comment, Monument CEO Mike Russell confirmed more than 100,000 patients are affected.

In their disclosures, the companies confirmed their use of website trackers, which are small snippets of code that share with tech giants information about visitors to their websites, and are often used for analytics and advertising. The data shared with advertisers includes patient names, dates of birth, email and postal addresses, phone numbers and membership numbers associated with the companies and patients' insurance provider. The data also included the person's photo, unique digital ID, which services or plan the patient is using, appointment information, and assessment and survey responses submitted by the patient, which include detailed responses about a person's alcohol consumption and are used to determine their course of treatment.

Monument's own website says these survey answers are "protected" and "used only" by its care team. Monument confirmed that it shared patients' sensitive data with advertisers since January 2020, and Tempest since November 2017. Both companies say they have removed the tracking code from their websites. But the tech giants are not obligated to delete the data that Monument and Tempest shared with them.

AI

With Easy AI-Generated Deepfakes, Is Every Day April Fool's Day Now? (vice.com) 60

"Every day is April Fool's Day now, requiring a low but constant effort," argues Motherboard's senior editor, in a post shared by Slashdot reader samleecole.

"As AI-generated shitposting becomes easier, it's inevitable that one of these will catch you with your guard down, or appeal to some basic emotion you are too eager to believe..." Even if you're trained in recognizing fake imagery and can immediately spot the difference between copy written by a language model and a human (content that's increasingly sneaking into online articles), doing endless fact-checking and performing countless micro-decisions about reality and fraud is mentally draining. Every year, our brains are tasked with processing five percent more information per day than the last. Add to this cognitive load a constant, background-level effort to decide whether that data is a lie. The disinformation apocalypse is already here, but not in the form of the Russian "dezinformatsiya" we feared. Wading through what's real and fake online has never been harder, not because each individual deepfake is impossible to distinguish from reality, but because the volume of low effort gags is outpacing our ability to process them....

Hany Farid, a professor at the University of California, Berkeley who's been studying manipulated media since long before deepfakes, told me that while he's used to getting a few calls every week from reporters asking him to take a look at images or videos that seem manipulated, over the past few weeks, he's gotten dozens of requests a day. "I don't even know how to put words to it. It really feels like it's unraveling," Farid told me in a phone call.

When AI generated fakes started cropping up online years ago, he recalled, he warned that this would change the future, and some of his colleagues told him that he was overreacting. "The one thing that has surprised me is that it has gone much, much faster than I expected," he said. "I always thought, I agree that it is not the biggest problem today. But what's that Wayne Gretzky line? Don't skate to where the puck is, skate to where the puck is going. You've got to follow the puck. In this case, I don't think this was hard to predict."

Buzzfeed noted that a viral image of the Pope in a white "puffer" coat was created by a 31-year-old construction worker while he was tripping on mushrooms; he then posted it to Facebook.

But Motherboard's article concludes with a quote from Peter Eckersley, the chief computer scientist for the Electronic Frontier Foundation, who died in 2022. "There's a large and growing fraction of machine learning and AI researchers who are worried about the societal implications of their work on many fronts, but are also excited for the enormous potential for good that this technology possesses," Eckersley said in a 2018 phone call. "So I think a lot of people are starting to ask, 'How do we do this the right way?'

"It turns out that that's a very hard question to answer. Or maybe a hard question to answer correctly... How do we put our thumbs on the scale to ensure machine learning is producing a saner and more stable world, rather than one where more systems can be broken into more quickly?"
Microsoft

These Angry Dutch Farmers Really Hate Microsoft Over Data Centers (wired.com) 97

Wired pays a visit to a half-finished Microsoft data center that rises out of the flat North Holland farmland — where the security guard tells a local councillor he's not allowed to visit the site, and "Within minutes, the argument has escalated, and the guard has his hand around Ruiter's throat." The security guard lets go of Ruiter within a few seconds, and the councillor escapes with a red mark across his neck. Back in his car, Ruiter insists he's fine. But his hands shake when he tries to change gears. He says the altercation — which he will later report to the police — shows the fog of secrecy that surrounds the Netherlands' expanding data center business.

"We regret an interaction that took place outside our data center campus, apparently involving one of Microsoft's subcontractors," says Craig Cincotta, general manager at Microsoft, adding that the company would cooperate with the authorities.

The heated exchange between Ruiter and Microsoft's security guard shows how contentious Big Tech's data centers have become in rural parts of the Netherlands. As the Dutch government sets strict environmental targets to cut emissions, industries are being forced to compete for space on Dutch farmland — pitting big tech against the increasingly political population of Dutch farmers.

There are around 200 data centers in the Netherlands, most of them renting out server space to several different companies. But since 2015, the country has also witnessed the arrival of enormous "hyperscalers," buildings that generally span at least 10,000 square feet and are set up to service a single (usually American) tech giant. Lured here by the convergence of European internet cables, temperate climates, and an abundance of green energy, Microsoft and Google have built hyperscalers; Meta has tried and failed.

Against the backdrop of an intensifying Dutch nitrogen crisis, building these hyperscalers is becoming more controversial. Nitrogen, produced by cars, agriculture, and heavy machinery used in construction, can be a dangerous pollutant, damaging ecosystems and endangering people's health. The Netherlands produces four times more nitrogen than the average across the EU. The Dutch government has pledged to halve emissions by 2030, partly by persuading farmers to reduce their livestock herds or leave the industry altogether. Farmers have responded with protests, blockading roads with tractors and manure and dumping slurry outside the nature minister's home.

Farmers object that Microsoft is building its data center before it's even received government permits certifying that it won't worsen the nitrogen problem, according to the article. In response the Farmer Citizen Movement has sprung up, and last month it became the joint-largest party in the Dutch Senate. One party leader tells Wired, "It is a waste of fertile soil to put the data centers boxes here."

And Wired adds that opposition to data center development is also growing elsewhere in Europe.
Upgrades

Glitch In System Upgrade Identified As Cause of Delays At Singapore Immigration (zdnet.com) 5

A technical glitch during a scheduled upgrade affected all automated immigration clearance systems and led to rare delays at Singapore's Changi Airport, which was recently named the world's best airport again. ZDNet reports: Long lines were spotted Thursday morning at the country's airport, where travelers usually need no more than mere minutes to clear immigration. In a series of posts on Facebook and Twitter, Singapore's Immigration & Checkpoints Authority (ICA) said it was experiencing "system slowness" at several passenger clearance checkpoints, including all automated departure lanes at all terminals at Changi Airport. Selected automated systems at the Woodlands and Tuas border checkpoints, through which travelers enter neighboring Malaysia, were also affected. Immigration systems at coastal checkpoints were the only ones that were not disrupted.

Passengers were advised to postpone non-essential travel and expect delays, as they would be redirected to manual lanes for immigration clearance. By 4pm the same day, automated immigration clearance at all checkpoints was back up and running. ICA said in a statement late Thursday that preliminary investigations revealed a "technical glitch" had occurred during a pre-scheduled system upgrade, causing an "unanticipated system overload." This brought down the automated immigration clearance systems, which affected all departure terminals at Changi Airport and arrival terminals at Terminals 2 and 4. ICA did not provide details on the system upgrade or say whether the procedure was tested before the scheduled live rollout.

Facebook

Meta Wants EU Users To Apply For Permission To Opt Out of Data Collection (arstechnica.com) 27

Meta announced that starting next Wednesday, some Facebook and Instagram users in the European Union will for the first time be able to opt out of sharing first-party data used to serve highly personalized ads, The Wall Street Journal reported. The move marks a big change from Meta's current business model, where every video and piece of content clicked on its platforms provides a data point for its online advertisers. Ars Technica reports: People "familiar with the matter" told the Journal that Facebook and Instagram users will soon be able to access a form that can be submitted to Meta to object to sweeping data collection. If those requests are approved, those users will only allow Meta to target ads based on broader categories of data collection, like age range or general location. This is different from efforts by other major tech companies like Apple and Google, which prompt users to opt in or out of highly personalized ads with the click of a button. Instead, Meta will review objection forms to evaluate reasons provided by individual users to end such data collection before it will approve any opt-outs. It's unclear on what grounds Meta might deny requests.

A Meta spokesperson told Ars that Meta is not sharing the objection form publicly at this time but that it will be available to EU users in its Help Center starting on April 5. That's the deadline Meta was given to comply with an Irish regulator's rulings that it was illegal in the EU for Meta to force Facebook and Instagram users to give consent to data collection when they signed contracts to use the platforms. Meta still plans to appeal those Irish Data Protection Commission (DPC) rulings, believing that its prior contract's legal basis complies with the EU's General Data Protection Regulation (GDPR). In the meantime, though, the company must change the legal basis for data collection. Meta announced in a blog post today that it will now argue that it does not need to directly obtain user consent because it has a "legitimate interest" to collect data to operate its social platforms. "We believe that our previous approach was compliant under GDPR, and our appeal on both the substance of the rulings and the fines continues," Meta's blog said. "However, this change ensures that we comply with the DPC's decision."

Businesses

Amazon Seller Consultant Admits To Bribing Employees To Help Clients (cnbc.com) 6

An influential consultant for Amazon sellers has admitted to bribing employees of the e-commerce giant for information to help his clients boost sales and to get their suspended accounts reinstated. From a report: Ephraim "Ed" Rosenberg wrote in a LinkedIn post that he will plead guilty in federal court to a criminal charge, stemming from a 2020 indictment that charged six people with conspiring to give sellers an unfair competitive advantage on Amazon's third-party marketplace. Four of the defendants have already pleaded guilty, including one former Amazon employee who was sentenced last year to 10 months in prison.

Rosenberg, who's based in Brooklyn, is a well-known figure in the world of Amazon third-party sellers. He runs a consultancy business that advises entrepreneurs on how to sell products on the online marketplace and navigate unforeseen issues with their Amazon accounts. Rosenberg's Facebook group for sellers, ASGTG, has over 68,000 members, and he hosts a popular conference for sellers each year. "For a time, some years ago, I began to obtain and use Amazon's internal annotations -- Amazon's private property -- to learn the reasons for sellers' suspensions, in order to assist them in getting reinstated, if possible," wrote Rosenberg, who is due to appear in U.S. District Court in Seattle on March 30 for a change of plea hearing, according to court records. "On some occasions, I paid bribes, directly and indirectly, to Amazon employees to obtain annotations and reinstate suspended accounts. These actions were against the law."
