AI

Anthropic Deploys Multiple Claude Agents for 'Research' Tool - Says Coding is Less Parallelizable (anthropic.com) 4

In April Anthropic introduced a new AI trick: multiple Claude agents combine for a "Research" feature that can "search across both your internal work context and the web" (as well as Google Workspace "and any integrations...").

But a recent Anthropic blog post notes this feature "involves an agent that plans a research process based on user queries, and then uses tools to create parallel agents that search for information simultaneously," which brings challenges "in agent coordination, evaluation, and reliability.... The model must operate autonomously for many turns, making decisions about which directions to pursue based on intermediate findings."

"Multi-agent systems work mainly because they help spend enough tokens to solve the problem.... This finding validates our architecture that distributes work across agents with separate context windows to add more capacity for parallel reasoning. The latest Claude models act as large efficiency multipliers on token use, as upgrading to Claude Sonnet 4 is a larger performance gain than doubling the token budget on Claude Sonnet 3.7. Multi-agent architectures effectively scale token usage for tasks that exceed the limits of single agents."
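For readers who want a concrete picture, here is a minimal sketch of the orchestrator-worker pattern the post describes, in Python with asyncio. Everything here is illustrative: the real system plans with the model itself and gives each subagent its own context window and search tools, while this toy version hard-codes the plan.

    import asyncio

    async def search_agent(subquery: str) -> str:
        # Stand-in for a subagent with its own context window that would
        # call an LLM plus search tools; here it just sleeps and reports.
        await asyncio.sleep(0.1)
        return f"findings for {subquery!r}"

    async def research(query: str) -> list[str]:
        # The lead agent "plans" by splitting the query into angles,
        # then fans out parallel subagents and gathers their findings.
        angles = ("background", "recent developments", "criticism")
        subqueries = [f"{query} -- {angle}" for angle in angles]
        return await asyncio.gather(*(search_agent(q) for q in subqueries))

    print(asyncio.run(research("multi-agent research systems")))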

There is a downside: in practice, these architectures burn through tokens fast. In our data, agents typically use about 4× more tokens than chat interactions, and multi-agent systems use about 15× more tokens than chats. For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance. Further, some domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.

For instance, most coding tasks involve fewer truly parallelizable tasks than research, and LLM agents are not yet great at coordinating and delegating to other agents in real time. We've found that multi-agent systems excel at valuable tasks that involve heavy parallelization, information that exceeds single context windows, and interfacing with numerous complex tools.

Thanks to Slashdot reader ZipNada for sharing the news.
Science

Axolotl Discovery Brings Us Closer Than Ever To Regrowing Human Limbs (sciencealert.com) 41

alternative_right shares a report from ScienceAlert: A team of biologists from Northeastern University and the University of Kentucky has found one of the key molecules involved in axolotl regeneration. It's a crucial component in ensuring the body grows back the right parts in the right spot: for instance, growing a hand from the wrist. "The cells can interpret this cue to say, 'I'm at the elbow, and then I'm going to grow back the hand' or 'I'm at the shoulder... so I'm going to then enable those cells to grow back the entire limb'," biologist James Monaghan explains.

That molecule, retinoic acid, is arranged through the axolotl body in a gradient, signaling to regenerative cells how far down the limb has been severed. Closer to the shoulder, axolotls have higher levels of retinoic acid, and lower levels of the enzyme that breaks it down. This ratio changes the further the limb extends from the body. The team found this balance between retinoic acid and the enzyme that breaks it down plays a crucial role in 'programming' the cluster of regenerative cells that form at an injury site. When they added surplus retinoic acid to the hand of an axolotl in the process of regenerating, it grew an entire arm instead.

In theory, the human body has the right molecules and cells to do this too, but our cells respond to the signals very differently, instead forming collagen-based scars at injury sites. Next, Monaghan is keen to find out what's going on inside cells -- the axolotl's, and our own -- when those retinoic acid signals are received.
The research is published in Nature Communications.
AI

How Do Olympiad Medalists Judge LLMs in Competitive Programming? 23

A new benchmark assembled by a team of International Olympiad medalists suggests the hype about large language models beating elite human coders is premature. LiveCodeBench Pro, unveiled in a 584-problem study [PDF] drawn from Codeforces, ICPC and IOI contests, shows the best frontier model clears just 53% of medium-difficulty tasks on its first attempt and none of the hard ones, while grandmaster-level humans routinely solve at least some of those highest-tier problems.

The researchers measured models and humans on the same Elo scale used by Codeforces and found that OpenAI's o4-mini-high, when stripped of terminal tools and limited to one try per task, lands at an Elo rating of 2,116 -- hundreds of points below the grandmaster cutoff, though still in roughly the top 1.5 percent of human contestants. A granular tag-by-tag autopsy identified implementation-friendly, knowledge-heavy problems -- segment trees, graph templates, classic dynamic programming -- as the models' comfort zone; observation-driven puzzles such as game-theory endgames and trick-greedy constructs remain stubborn roadblocks.
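For those unfamiliar with Elo, the textbook expected-score formula gives a feel for what a 2,116 rating means against the roughly 2,400 grandmaster cutoff. A sketch using the standard logistic Elo model, not the paper's exact fitting procedure:

    def elo_expected(r_a: float, r_b: float) -> float:
        # Probability that a player rated r_a beats one rated r_b
        # under the standard logistic Elo model.
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    # o4-mini-high's reported 2,116 vs. an illustrative 2,400 grandmaster
    print(f"{elo_expected(2116, 2400):.2f}")  # ~0.16 -- about one win in six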

Because the dataset is harvested in real time as contests conclude, the authors argue it minimizes training-data leakage and offers a moving target for future systems. The broader takeaway is that impressive leaderboard jumps often reflect tool use, multiple retries or easier benchmarks rather than genuine algorithmic reasoning, leaving a conspicuous gap between today's models and top human problem-solvers.
Programming

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift (infoq.com) 109

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage."

This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency."

Apple's Password Monitoring service, part of the broader Passwords app ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.
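Apple's exact protocol isn't spelled out in the article, but the general shape of privacy-preserving breach checking can be sketched with the k-anonymity prefix scheme popularized by Have I Been Pwned: the client reveals only a short hash prefix, and the final comparison happens client-side. A toy Python version (Apple's production design is more sophisticated than this):

    import hashlib

    # Toy breach corpus: SHA-1 of "password" (a famously breached credential)
    BREACHED = {"5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8"}

    def check_credential(password: str) -> bool:
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # The "server" sees only the 5-character prefix and returns every
        # suffix in that bucket; the client checks for a match locally.
        bucket = {d[5:] for d in BREACHED if d.startswith(prefix)}
        return suffix in bucket

    print(check_credential("password"))             # True: found in the corpus
    print(check_credential("xkcd-style-passphrase"))  # False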

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases.... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.

"While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
Transportation

17-Year-Old Student Builds 3D-printed Drone In Garage, Interests DoD and MIT (yahoo.com) 63

"Cooper Taylor is only 17 years old, but he's already trying to revolutionize the drone industry," writes Business Insider: His design makes the drone more efficient, customizable, and less expensive to construct, he says. He's built six prototypes, 3D printing every piece of hardware, programming the software, and even soldering the control circuit board. He says building his drone cost one-fifth of the price of buying a comparable machine, which sells for several thousand dollars. Taylor told Business Insider he hopes that "if you're a first responder or a researcher or an everyday problem solver, you can have access to this type of drone."

His innovation won him an $8,000 scholarship in April at the Junior Science and Humanities Symposium, funded by the Defense Department. Then, on May 16, he received an even bigger scholarship of $15,000 from the US Navy, which he won after presenting his research at the Regeneron International Science and Engineering Fair...

It all started when Taylor's little sister got a drone, and he was disappointed to see that it could fly for only about 30 minutes before running out of power. He did some research and found that a vertical take-off and landing, or VTOL, drone would last longer. This type of drone combines the multi-rotor helicopter style with the fixed wings of an airplane, making it extremely versatile. It lifts off as a helicopter, then transitions into plane mode. That way, it can fly farther than rotors alone could take it, which was the drawback to Taylor's sister's drone. Unlike a plane-style drone, though, it doesn't need a runway, and it can hover with its helicopter rotors.

Taylor designed a motor "that could start out helicopter-style for liftoff, then tilt back to become an airplane-style motor," according to the article.
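The core idea of a tilting VTOL motor is simple enough to sketch: as the transition from hover to forward flight progresses, the motor sweeps from pointing straight up to pointing forward. A toy illustration of the concept, not Taylor's actual control code:

    def tilt_angle(progress: float) -> float:
        # Map transition progress (0.0 = hover, 1.0 = forward flight)
        # to a motor tilt in degrees: 90 (straight up) down to 0 (forward).
        progress = max(0.0, min(1.0, progress))
        return 90.0 * (1.0 - progress)

    for p in (0.0, 0.25, 0.5, 1.0):
        print(f"progress {p:.2f} -> tilt {tilt_angle(p):.0f} degrees")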

And now this summer he'll be "working on a different drone project through a program with the Reliable Autonomous Systems Lab at the Massachusetts Institute of Technology."

Thanks to Slashdot reader Agnapot for sharing the news.
Python

Python Creator Guido van Rossum Asks: Is 'Worse is Better' Still True for Programming Languages? (blogspot.com) 67

In 1989 a computer scientist argued that more functionality in software actually lowers usability and practicality — leading to the counterintuitive proposition that "worse is better". But is that still true?

Python's original creator Guido van Rossum addressed the question last month in a lightning talk at the annual Python Language Summit 2025. Guido started by recounting earlier periods of Python development from 35 years ago, where he used UNIX "almost exclusively" and thus "Python was greatly influenced by UNIX's 'worse is better' philosophy"... "The fact that [Python] wasn't perfect encouraged many people to start contributing. All of the code was straightforward, there were no thoughts of optimization... These early contributors also now had a stake in the language; [Python] was also their baby"...

Guido contrasted early development to how Python is developed now: "features that take years to produce from teams of software developers paid by big tech companies. The static type system requires an academic-level understanding of esoteric type system features." And this isn't just Python the language, "third-party projects like numpy are maintained by folks who are paid full-time to do so.... Now we have a huge community, but very few people, relatively speaking, are contributing meaningfully."

Guido asked whether the expectation for Python contributors going forward would be that "you had to write a perfect PEP or create a perfect prototype that can be turned into production-ready code?" Guido pined for the "old days" where feature development could skip performance or feature-completion to get something into the hands of the community to "start kicking the tires". "Do we have to abandon 'worse is better' as a philosophy and try to make everything as perfect as possible?" Guido thought doing so "would be a shame", but that he "wasn't sure how to change it", acknowledging that core developers wouldn't want to create features and then break users with future releases.

Guido referenced David Hewitt's PyO3 talk about Rust and Python, and that development "was using worse is better," where there is a core feature set that works, and plenty of work to be done and open questions. "That sounds a lot more fun than working on core CPython", Guido paused, "...not that I'd ever personally learn Rust. Maybe I should give it a try after," which garnered laughter from core developers.

"Maybe we should do more of that: allowing contributors in the community to have a stake and care".

AI

Salesforce Blocks AI Rivals From Using Slack Data (theinformation.com) 9

An anonymous reader shares a report: Slack, an instant-messaging service popular with businesses, recently blocked other software firms from searching or storing Slack messages even if their customers permit them to do so, according to a public disclosure from Slack's owner, Salesforce.

The move, which hasn't previously been reported, could hamper fast-growing artificial intelligence startups that have used such access to power their services, such as Glean. Since the Salesforce change, Glean and other applications can no longer index, copy or store the data they access via the Slack application programming interface on a long-term basis, according to the disclosure. Salesforce will continue allowing such firms to temporarily use and store their customers' Slack data, but they must delete the data, the company said.

Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready 50

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."
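To give a flavor of the kind of exercise a "Data Science: Using Python" unit might open with (a hypothetical example, not from Code.org's materials), here is the sort of pandas snippet the analysts' "library support" comments allude to:

    import pandas as pd

    scores = pd.DataFrame({
        "student": ["Ana", "Ben", "Chloe", "Dev"],
        "quiz":    [88, 92, 79, 95],
    })
    print(scores["quiz"].mean())               # class average: 88.5
    print(scores.sort_values("quiz").tail(2))  # the two highest scorers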

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. The company warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
Programming

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours (msn.com) 88

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code.

The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase.

The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
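DevGen.AI itself is proprietary, but the code-to-English-spec pattern the article describes is easy to sketch with OpenAI's public Python SDK. The model name, prompt, and COBOL fragment below are all illustrative:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    PROMPT = (
        "Read the following legacy code and write a plain-English "
        "specification of its inputs, outputs, and business rules:\n\n{code}"
    )

    def code_to_spec(legacy_code: str, model: str = "gpt-4o") -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT.format(code=legacy_code)}],
        )
        return resp.choices[0].message.content

    cobol = """
        IF BALANCE < MIN-BALANCE
            COMPUTE FEE = 25.00
            SUBTRACT FEE FROM BALANCE.
    """
    print(code_to_spec(cobol))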
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Programming

How Stack Overflow's Reputation System Led To Its Own Downfall (infoworld.com) 103

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. Monthly questions peaked around 200,000, and while usage had been slipping since 2014, the steep collapse began in earnest after ChatGPT's launch in late 2022, according to data cited in the InfoWorld analysis.

The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment" where moderators systematically culled interactions they deemed irrelevant.
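The privilege ladder at the heart of that reputation system is essentially a set of thresholds. A toy sketch of the mechanic (the numbers are illustrative, not Stack Overflow's actual values):

    # Illustrative reputation thresholds, not Stack Overflow's real ones.
    PRIVILEGES = [
        (15, "vote up"),
        (125, "vote down"),
        (2000, "edit others' posts"),
        (10000, "access moderation tools"),
    ]

    def privileges(reputation: int) -> list[str]:
        return [name for threshold, name in PRIVILEGES if reputation >= threshold]

    print(privileges(130))    # ['vote up', 'vote down']
    print(privileges(12000))  # all four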
Programming

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey (stackoverflow.blog) 10

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before."

For the 15th year of their annual reader survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks.

Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...?

They're also revisiting a key finding from recent surveys: "80% of developers reported being unhappy or complacent in their jobs." This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show us that remote work is not more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".]

In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift of around 7% lower salaries across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics is an indicator for job satisfaction in recent surveys of Stack Overflow users and understanding trends for these roles can perhaps improve the process for finding the most useful factors contributing to role satisfaction outside of salary.

And of course they're asking about AI -- while noting last year's survey uncovered this paradox. "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 are favorable or very favorable of AI tools for development compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10-19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...

AI

Does Anthropic's Success Prove Businesses are Ready to Adopt AI? (reuters.com) 19

AI company Anthropic (founded in 2021 by a team that left OpenAI) is now making about $3 billion a year in revenue, reports Reuters (citing "two sources familiar with the matter.") The sources said December's projections had been for just $1 billion a year, but it climbed to $2 billion by the end of March (and now to $3 billion) — a spectacular growth rate that one VC says "has never happened." A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet and Amazon, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.
Anthropic sells AI models as a service to other companies, according to the article, and Reuters calls Anthropic's success "an early validation of generative AI use in the business world" — and a long-awaited indicator that it's growing. (Their rival OpenAI earns more than half its revenue from ChatGPT subscriptions and "is shaping up to be a consumer-oriented company," according to their article, with "a number of enterprises" limiting their rollout of ChatGPT to "experimentation.")

Then again, in February OpenAI's chief operating officer said they had 2 million paying enterprise users, roughly doubling from September, according to CNBC. The latest figures from Reuters...
  • Anthropic's valuation: $61.4 billion.
  • OpenAI's valuation: $300 billion.

AI

Will 'Vibe Coding' Transform Programming? (npr.org) 116

A 21-year-old's startup got a $500,000 investment from Y Combinator — after building their website and prototype mostly with "vibe coding".

NPR explores vibe coding with Tom Blomfield, a Y Combinator group partner: "It really caught on, this idea that people are no longer checking line by line the code that AI is producing, but just kind of telling it what to do and accepting the responses in a very trusting way," Blomfield said. And so Blomfield, who knows how to code, also tried his hand at vibe coding — both to rejig his blog and to create from scratch a website called Recipe Ninja. It has a library of recipes, and cooks can talk to it, asking the AI-driven site to concoct new recipes for them. "It's probably like 30,000 lines of code. That would have taken me, I don't know, maybe a year to build," he said. "It wasn't overnight, but I probably spent 100 hours on that."

Blomfield said he expects AI coding to radically change the software industry. "Instead of having coding assistance, we're going to have actual AI coders and then an AI project manager, an AI designer and, over time, an AI manager of all of this. And we're going to have swarms of these things," he said. Where people fit into this, he said, "is the question we're all grappling with." In 2021, Blomfield said in a podcast that would-be start-up founders should, first and foremost, learn to code. Today, he's not sure he'd give that advice because he thinks coders and software engineers could eventually be out of a job. "Coders feel like they are tending, kind of, organic gardens by hand," he said. "But we are producing these superhuman agents that are going to be as good as the best coders in the world, like very, very soon."

The article includes an alternate opinion from Adam Resnick, a research manager at tech consultancy IDC. "The vast majority of developers are using AI tools in some way. And what we also see is that a reasonably high percentage of the code output from those tools needs further curation by people, by experienced people."

NPR ends their article by noting that this further curation is "a job that AI can't do, he said. At least not yet."
AI

Stack Overflow's Radical New Plan To Fight AI-Induced Death Spiral (thenewstack.io) 75

DevNull127 writes: Stack Overflow will test paying experts to answer questions. That's one of many radical experiments they're now trying to stave off an AI-induced death spiral. Questions and answers to the site have plummeted more than 90% since April of 2020. So here's what Stack Overflow will try next.

1. They're bringing back Chat, according to their CEO (to foster "even more connections between our community members" in "an increasingly AI-driven world").

2. They're building a "new Stack Overflow" meant to feel like a personalized portal. "It might collect videos, blogs, Q&A, war stories, jokes, educational materials, jobs... and fold them together into one personalized destination."

3. They're proposing areas more open to discussion, described as "more flexible Stack Exchanges... where users can explore ideas or share opinions."

4. They're also licensing Stack Overflow content to AI companies for training their models.

5. Again, they will test paying experts to answer questions.

AI

At Amazon, Some Coders Say Their Jobs Have Begun To Resemble Warehouse Work (nytimes.com) 207

Amazon software engineers are reporting that AI tools are transforming their jobs into something resembling the company's warehouse work, with managers pushing faster output and tighter deadlines while teams shrink in size, according to the New York Times.

Three Amazon engineers told the New York Times that the company has raised productivity goals over the past year and expects developers to use AI assistants that suggest code snippets or generate entire program sections. One engineer said his team was cut roughly in half but still expected to produce the same amount of code by relying on AI tools.

The shift mirrors historical workplace changes during industrialization, the Times argues, where technology didn't eliminate jobs but made them more routine and fast-paced. Engineers describe feeling like "bystanders in their own jobs" as they spend more time reviewing AI-generated code rather than writing it themselves. Tasks that once took weeks now must be completed in days, with less time for meetings and collaborative problem-solving, according to the engineers.
Programming

Is AI Turning Coders Into Bystanders in Their Own Jobs? (msn.com) 101

"AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work."

And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI.

Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000.

The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on their own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control.

"It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channelling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

"This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition..."

"While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."
Programming

Python Can Now Call Code Written in Chris Lattner's Mojo (modular.com) 26

Mojo (the programming language) reached a milestone today.

The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023, he'd created a new programming language called Mojo — a superset of Python with added functionality for high-performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.

And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi.

One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing...
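The "selective migration" workflow Larson describes has a familiar shape on the Python side: keep a pure-Python implementation, and swap in the compiled Mojo version of a hotspot when it is importable. A sketch with hypothetical module and function names (the Mojo manual documents the real import mechanics):

    # Hypothetical names throughout; see the Mojo manual for real interop.
    try:
        from hot_paths_mojo import pairwise_abs_diff  # compiled Mojo hotspot
    except ImportError:
        def pairwise_abs_diff(xs: list[float], ys: list[float]) -> list[float]:
            # Pure-Python fallback with identical behavior.
            return [abs(x - y) for x, y in zip(xs, ys)]

    print(pairwise_abs_diff([1.0, 2.0], [4.0, 6.0]))  # [3.0, 4.0]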

It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on.

"We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes.

Mojo's licensing makes it free on any device, for any research, hobby, or learning project, as well as on x86 or ARM CPUs or NVIDIA GPUs.
