AI

Apple's Hidden AI Prompts Discovered In macOS Beta 46

A Reddit user discovered the backend prompts for Apple Intelligence in the developer beta of macOS 15.1, offering a rare glimpse into the specific guidelines for Apple's AI functionalities. Some of the most notable instructions include: "Do not write a story that is religious, political, harmful, violent, sexual, filthy, or in any way negative, sad, or provocative"; "Do not hallucinate"; and "Do not make up factual information." MacRumors reports: For the Smart Reply feature, the AI is programmed to identify relevant questions from an email and generate concise answers. The prompt for this feature is as follows: "You are a helpful mail assistant which can help identify relevant questions from a given mail and a short reply snippet. Given a mail and the reply snippet, ask relevant questions which are explicitly asked in the mail. The answer to those questions will be selected by the recipient which will help reduce hallucination in drafting the response. Please output top questions along with set of possible answers/options for each of those questions. Do not ask questions which are answered by the reply snippet. The questions should be short, no more than 8 words. The answers should be short as well, around 2 words. Present your output in a json format with a list of dictionaries containing question and answers as the keys. If no question is asked in the mail, then output an empty list. Only output valid json and nothing else."
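
The Smart Reply prompt asks for rigid, machine-readable output. As a rough illustration only (the questions and answers below are invented, not taken from Apple's documentation), the JSON the model is expected to emit, and how client code might consume it, could look roughly like this minimal Python sketch:

    import json

    # Hypothetical model output in the format the prompt requests: a JSON list of
    # dictionaries with "question" and "answers" keys, short questions, ~2-word answers.
    model_output = """
    [
      {"question": "Can you attend Friday's meeting?", "answers": ["Yes", "No", "Maybe"]},
      {"question": "Prefer morning or afternoon?", "answers": ["Morning", "Afternoon"]}
    ]
    """

    for item in json.loads(model_output):
        print(item["question"], "->", ", ".join(item["answers"]))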

The Memories feature in Apple Photos, which creates video stories from user photos, follows another set of detailed guidelines. The AI is instructed to generate stories that are positive and free of any controversial or harmful content. The prompt for this feature is: "A conversation between a user requesting a story from their photos and a creative writer assistant who responds with a story. Respond in JSON with these keys and values in order: traits: list of strings, visual themes selected from the photos; story: list of chapters as defined below; cover: string, photo caption describing the title card; title: string, title of story; subtitle: string, safer version of the title. Each chapter is a JSON with these keys and values in order: chapter: string, title of chapter; fallback: string, generic photo caption summarizing chapter theme; shots: list of strings, photo captions in chapter. Here are the story guidelines you must obey: The story should be about the intent of the user; The story should contain a clear arc; The story should be diverse, that is, do not overly focus the entire story on one very specific theme or trait; Do not write a story that is religious, political, harmful, violent, sexual, filthy or in any way negative, sad or provocative. Here are the photo caption list guidelines you must obey.

Apple's AI tools also include a general directive to avoid hallucination. For instance, the Writing Tools feature has the following prompt: "You are an assistant which helps the user respond to their mails. Given a mail, a draft response is initially provided based on a short reply snippet. In order to make the draft response nicer and complete, a set of question and its answer are provided. Please write a concise and natural reply by modifying the draft response to incorporate the given questions and their answers. Please limit the reply within 50 words. Do not hallucinate. Do not make up factual information."
Robotics

Figure AI's Humanoid Robot Helped Assemble BMWs At US Factory (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Unlike Tesla, which hopes to develop its own bipedal 'bot to work on its production line sometime next year, BMW has brought in a robot from Figure AI. The Figure 02 robot has hands with sixteen degrees of freedom and human-equivalent strength. "We are excited to unveil Figure 02, our second-generation humanoid robot, which recently completed successful testing at the BMW Group Plant Spartanburg. Figure 02 has significant technical advancements, which enable the robot to perform a wide range of complex tasks fully autonomously," said Brett Adcock, founder and CEO of Figure AI.

BMW wanted to test how to integrate a humanoid robot into its production process -- how to have the robot communicate with the production line software and human workers and determine what requirements would be necessary to add robots to the mix. The Figure robot was given the job of inserting sheet metal parts into fixtures as part of the process of making a chassis. BMW says this required particular dexterity and that it's an ergonomically awkward and tiring task for humans.

Now that the trial is over, Figure's robot is no longer working at Spartanburg, and BMW says it has "no definite timetable established" to add humanoid robots to its production lines. "The developments in the field of robotics are very promising. With an early-test operation, we are now determining possible applications for humanoid robots in production. We want to accompany this technology from development to industrialization," said Milan Nedeljković, BMW's board member responsible for production.
BMW Group published a video of the Figure 02 robot on YouTube.
Intel

Intel Foundry Achieves Major Milestones (intel.com) 28

Intel has announced significant progress on its 18A process technology, with lead products successfully powering on and booting operating systems. The company's Panther Lake client processor and Clearwater Forest server chip, both built on 18A, achieved these milestones less than two quarters after tape-out. The 18A node, featuring RibbonFET gate-all-around transistors and PowerVia backside power delivery, is on track for production in 2025.

Intel released the 18A Process Design Kit 1.0 in July, enabling foundry customers to leverage these advanced technologies in their designs. "Intel is out ahead of everyone else in the industry with these innovations," Kevin O'Buckley, Intel's new head of Foundry Services, stated, highlighting the node's potential to drive next-generation AI solutions. Clearwater Forest will be the industry's first mass-produced, high-performance chip combining RibbonFET, PowerVia, and Foveros Direct 3D packaging technology. It also utilizes Intel's 3-T base-die technology, showcasing the company's systems foundry approach. Intel expects its first external customer to tape out on 18A in the first half of 2025. EDA and IP partners are updating their tools to support customer designs on the new node. The success of 18A is crucial for Intel's ambitions to regain process leadership and grow its foundry business.
Google

Google Unveils $99 TV Streamer To Replace Chromecast (theverge.com) 63

Google today unveiled its new Google TV Streamer, a $99.99 set-top box replacing the Chromecast. The device, shipping September 24, boasts improved performance with a processor 22% faster than its predecessor's, doubled RAM, and 32GB of storage. It integrates Thread and Matter for smart home control, featuring a side panel accessible via the remote. The Streamer supports Dolby Vision and Dolby Atmos and includes an Ethernet port. Design changes include a low-profile form factor in two colors and a redesigned remote with a finder function. Software enhancements use Gemini AI for content summaries and custom screensavers.
AI

Mainframes Find New Life in AI Era (msn.com) 56

Mainframe computers, stalwarts of high-speed data processing, are finding new relevance in the age of AI. Banks, insurers, and airlines continue to rely on these industrial-strength machines for mission-critical operations, with some now exploring AI applications directly on the hardware, WSJ reported in a feature story. IBM, commanding over 96% of the mainframe market, reported 6% growth in its mainframe business last quarter. The company's latest zSystem can process up to 30,000 transactions per second and hold 40 terabytes of data. WSJ adds: Globally, the mainframe market was valued at $3.05 billion in 2023, but new mainframe sales are expected to decline through 2028, IDC said. Of existing mainframes, however, 54% of enterprise leaders in a 2023 Forrester survey said they would increase their usage over the next two years.

Mainframes do have limitations. They are constrained by the computing power within their boxes, unlike the cloud, which can scale up by drawing on computing power distributed across many locations and servers. They are also unwieldy -- with years of old code tacked on -- and don't integrate well with new applications. That makes them costly to manage and difficult to use as a platform for developing new applications.

AI

OpenAI Co-Founder John Schulman Is Joining Anthropic (cnbc.com) 3

OpenAI co-founder John Schulman announced Monday that he is leaving to join rival AI startup Anthropic. CNBC reports: The move comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks. Schulman had been a co-leader of OpenAI's post-training team that refined AI models for the ChatGPT chatbot and a programming interface for third-party developers, according to a biography on his website. In June, OpenAI said Schulman, as head of alignment science, would join a safety and security committee that would provide advice to the board. Schulman has only worked at OpenAI since receiving a Ph.D. in computer science in 2016 from the University of California, Berkeley.

"This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work," Schulman wrote in the social media post. He said he wasn't leaving because of a lack of support for new work on the topic at OpenAI. "On the contrary, company leaders have been very committed to investing in this area," he said. The leaders of the superalignment team, Jan Leike and company co-founder Ilya Sutskever, both left this year. Leike joined Anthropic, while Sutskever said he was helping to start a new company, Safe Superintelligence Inc. "Very excited to be working together again!" Leike wrote in reply to Schulman's message.

Education

Silicon Valley Parents Are Sending Kindergarten Kids To AI-Focused Summer Camps 64

Silicon Valley's fascination with AI has led to parents enrolling children as young as five in AI-focused summer camps. "It's common for kids on summer break to attend space, science or soccer camp, or even go to coding school," writes Priya Anand via the San Francisco Standard. "But the growing effort to teach kindergarteners who can barely spell their names lessons in 'Advanced AI Robot Design & AR Coding' shows how far the frenzy has extended." From the report: Parents who previously would opt for coding camps are increasingly interested in AI-specific programming, according to Eliza Du, CEO of Integem, which makes holographic augmented reality technology in addition to managing dozens of tech-focused kids camps across the country. "The tech industry understands the value of AI," she said. "Every year it's increasing." Some Bay Area parents are so eager to get their kids in on AI's ground floor that they try to sneak toddlers into advanced courses. "Sometimes they'll bring a 4-year-old, and I'm like, you're not supposed to be here," Du said.

Du said Integem studied Common Core education standards to ensure its programming was suitable for those as young as 5. She tries to make sure parents understand there's only so much kids can learn across a week or two of camp. "Either they set expectations too high or too low," Du said of the parents. As an example, she recounted a confounding comment in a feedback survey from the parent of a 5-year-old. "After one week, the parent said, 'My child did not learn much. My cousin is a Google engineer, and he said he's not ready to be an intern at Google yet.' What do I say to that review?" Du said, bemused. "That expectation is not realistic." Even less tech-savvy parents are getting in on the hype. Du tells of a mom who called the company to get her 12-year-old enrolled in "AL" summer camp. "She misread it," Du said, explaining that the parent had confused the "I" in AI with a lowercase "L."
AI

Video Game Actors Are Officially On Strike Over AI (theverge.com) 52

Members of the Screen Actors Guild (SAG-AFTRA) are striking against the video game industry due to failed negotiations over AI-related worker protections. "The guild began striking on Friday, July 26th, preventing over 160,000 SAG-AFTRA members from taking new video game projects and impeding games already in development from the biggest publishers to the smallest indie studios," notes The Verge. From the report: Negotiations broke down due to disagreements over worker protections around AI. The actors union, SAG-AFTRA, negotiates the terms of the interactive media agreement, or IMA, with a bargaining committee of video game publishers, including Activision, Take-Two, Insomniac Games, WB Games, and others that represent a total of 30 signatory companies. Though SAG-AFTRA and the video game bargaining group were able to agree on a number of proposals, AI remained the final stumbling block resulting in the strike.

SAG-AFTRA's provisions on AI govern both voice and movement performers with respect to digital replicas -- or using an existing performance as the foundation to create new ones without the original performer -- and the use of generative AI to create performances without any initial input. However, according to SAG-AFTRA, the bargaining companies disagreed about which type of performer should be eligible for AI protections. SAG-AFTRA chief contracts officer Ray Rodriguez said that the bargaining companies initially wanted to offer protections to voice, not motion performers. "So anybody doing a stunt or creature performance, all those folks would have been left unprotected under the employers' offer," Rodriguez said in an interview with Aftermath. Rodriguez said that the companies later extended protections to motion performers, but only if "the performer is identifiable in the output of the AI digital replica."

SAG-AFTRA rejected this proposal as it would potentially exclude a majority of movement performances. "Their proposal would carve out anything that doesn't look and sound identical to me," said Andi Norris, a member of SAG-AFTRA's IMA negotiating committee, during a press conference. "[The proposal] would leave movement specialists, including stunts, entirely out in the cold, to be replaced ... by soulless synthetic performers trained on our actual performances." The bargaining game companies argued that the terms went far enough and would require actors' approval. "Our offer is directly responsive to SAG-AFTRA's concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the IMA. These terms are among the strongest in the entertainment industry," wrote Audrey Cooling, a representative working on behalf of the video game companies on the bargaining committee in a statement to The Verge.

AI

Nvidia Allegedly Scraped YouTube, Netflix Videos for AI Training Data 37

Nvidia scraped videos from YouTube, Netflix and other online platforms to compile training data for its AI products, 404 Media reported Monday, citing internal documents. The tech giant used this content to develop various AI projects, including its Omniverse 3D world generator and self-driving car systems, the report said. Some employees expressed concerns about potential legal issues surrounding the use of such content, the report said, adding that the management assured them of executive-level approval. Nvidia defended its actions, asserting they were "in full compliance with the letter and the spirit of copyright law" and emphasizing that copyright protects specific expressions rather than facts or ideas.
AI

Elon Musk Revives Lawsuit Against OpenAI and Sam Altman 47

Elon Musk has reignited his legal battle against OpenAI, the creators of ChatGPT, by filing a new lawsuit in a California federal court. The suit, which revives a six-year-old dispute, accuses OpenAI founders Sam Altman and Greg Brockman of breaching the company's founding principles by prioritizing commercial interests over public benefit.

Musk's complaint alleges that OpenAI's multibillion-dollar partnership with Microsoft contradicts the original mission to develop AI responsibly for humanity's benefit. The lawsuit describes the alleged betrayal in dramatic terms, claiming "perfidy and deceit... of Shakespearean proportions." OpenAI has not yet commented on the new filing. In response to Musk's previous lawsuit, which was withdrawn seven weeks ago, the company stated its commitment to building safe artificial general intelligence for the benefit of humanity.
AI

OpenAI Grapples With Unreleased AI Detection Tool Amid Cheating Concerns (msn.com) 27

OpenAI has developed a sophisticated anticheating tool for detecting AI-generated content, particularly essays and research papers, but has refrained from releasing it due to internal debates and ethical considerations, according to WSJ.

This tool, which has been ready for deployment for approximately a year, utilizes a watermarking technique that subtly alters token selection in ChatGPT's output, creating an imperceptible pattern detectable only by OpenAI's technology. While the tool boasts a 99.9% effectiveness rate on substantial amounts of AI-generated text, concerns persist regarding potential workarounds and the challenge of determining appropriate access to the detection tool, as well as its potential impact on non-native English speakers and the broader AI ecosystem.
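
OpenAI has not published the details of its scheme, but token-selection watermarks of this general kind are described openly in the research literature. The sketch below is a toy illustration of that general idea, not OpenAI's implementation: a reproducible "green list" of tokens gets a small score boost during generation, and a detector later measures how often tokens fall on that list.

    import hashlib
    import random

    VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

    def green_list(prev_token, fraction=0.5):
        # Derive a reproducible pseudo-random split of the vocabulary from the previous
        # token; anyone who knows the scheme can recompute it later for detection.
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
        return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

    def watermarked_pick(prev_token, scores, bias=2.0):
        # Nudge candidate scores toward "green" tokens before picking one; each choice
        # changes little, but over long texts the green fraction drifts above chance.
        greens = green_list(prev_token)
        return max(scores, key=lambda t: scores[t] + (bias if t in greens else 0.0))

    def green_fraction(tokens):
        # Detector side: without a watermark roughly half the tokens land on the green
        # list by chance; a much higher fraction suggests the text was watermarked.
        hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    # Tiny demo with made-up scores standing in for a language model's logits.
    text = ["the"]
    for _ in range(20):
        fake_scores = {t: random.random() for t in VOCAB}
        text.append(watermarked_pick(text[-1], fake_scores))
    print(text, round(green_fraction(text), 2))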
Social Networks

Founder of Collapsed Social Media Site 'IRL' Charged With Fraud Over Faked Users (bbc.com) 22

This week America's Securities and Exchange Commission filed fraud charges against the former CEO of the startup social media site "IRL."

The BBC reports: IRL — which was once considered a potential rival to Facebook — took its name from its intention to get its online users to meet up in real life. However, the initial optimism evaporated after it emerged most of IRL's users were bots, with the platform shutting in 2023...

The SEC says it believes [CEO Abraham] Shafi raised about $170m by portraying IRL as the new success story in the social media world. It alleges he told investors that IRL had attracted the vast majority of its supposed 12 million users through organic growth. In reality, it argues, IRL was spending millions of dollars on advertisements which offered incentives to prospective users to download the IRL app. That expenditure, it is alleged, was subsequently hidden in the company's books.

IRL received multiple rounds of venture capital financing, eventually reaching "unicorn status" with a $1.17 billion valuation, according to TechCrunch. But it shut down in 2023 "after an internal investigation by the company's board found that 95% of the app's users were 'automated or from bots'."

TechCrunch notes it's the second time in the same week — and at least the fourth time in the past several months — that the SEC has charged a venture-backed founder on allegations of fraud... Earlier this week, the SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used his pseudonymous online identity "DiamondHands" to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a buzzy crypto startup, was backed by high-profile VCs such as a16z, Sequoia, Chamath Palihapitiya's Social Capital, Coinbase Ventures and Winklevoss Capital.

In June, the SEC charged Ilit Raz, CEO and founder of the now-shuttered AI recruitment startup Joonko, with defrauding investors of at least $21 million. The agency alleged Raz made false and misleading statements about the quantity and quality of Joonko's customers, the number of candidates on its platform and the startup's revenue.

The agency has also gone after venture firms in recent months. In May, the SEC charged Robert Scott Murray and his firm Trillium Capital LLC with a fraudulent scheme to manipulate the stock price of Getty Images Holdings Inc. by announcing a phony offer by Trillium to purchase Getty Images.

Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com) 236

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports the Register — with an online event already scheduled later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope [that's the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA project manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.
Stats

What's the 'Smartest' City in America - Based on Tech Jobs, Connectivity, and Sustainability? (newsweek.com) 66

Seattle is the smartest city in America, with Miami and then Austin close behind. That's according to a promotional study from smart-building tools company ProptechOS. Newsweek reports: The evaluation of tech infrastructure and connectivity was based on several factors, including the number of free Wi-Fi hot spots, the quantity and density of AI and IoT companies, average broadband download speeds, median 5G coverage per network provider, and the number of airports. Meanwhile, green infrastructure was assessed based on air quality, measured by exposure to PM2.5, tiny particles in the air that can harm health. Other factors include 10-year changes in tree coverage, both loss and gain; the number of electric vehicle charging points and their density per 100,000 people; and the number of LEED-certified green buildings. The tech job market was evaluated on the number of tech jobs advertised per 100,000 people.
Seattle came in first after the study assessed 16 key indicators across connectivity/infrastructure, sustainability, and tech jobs — "boasting 34 artificial intelligence companies and 13 Internet of Things companies per 100,000 residents." In terms of sustainability, Seattle has enhanced its tree coverage by 13,700 hectares from 2010 to 2020 and has established the equivalent of 10 electric vehicle charging points per 100,000 residents. Seattle has edged out last year's top city, Austin, to claim the title of the smartest city in the U.S., with an overall score of 75.7 out of 100. Miami wasn't far behind, achieving a score of 75.4. However, Austin still came out on top for smart city infrastructure, scoring 86.2 out of 100. This is attributed to its high broadband download speed of 275.60 Mbps — well above the U.S. average of 217.14 Mbps — and its concentration of 337 AI companies, or 35 per 100,000 people.
You can see the full listings here. The article notes that the same study also ranked Paris as the smartest city in Europe — slipping ahead of London — thanks to Paris's 99.5% 5G coverage, plus "the second-highest number of AI companies in Europe and the third-highest number of free Wi-Fi hot spots. Paris is also recognized for its traffic management systems, which monitor noise levels and air quality."

Newsweek also shares this statement from ProptechOS's founder/chief ecosystem officer. "Advancements in smart cities and future technologies such as next-generation wireless communication and AI are expected to reduce environmental impacts and enhance living standards."

In April CNBC reported on an alternate list of the smartest cities in the world, created from research by the World Competitiveness Center. It defined smart cities as "an urban setting that applies technology to enhance the benefits and diminish the shortcomings of urbanization for its citizens." And CNBC reported that based on the list, "Smart cities in Europe and Asia are gaining ground globally while North American cities have fallen down the ranks... Of the top 10 smart cities on the list, seven were in Europe." Here are the top 10 smart cities, according to the 2024 Smart City Index.

- Zurich, Switzerland
- Oslo, Norway
- Canberra, Australia
- Geneva, Switzerland
- Singapore
- Copenhagen, Denmark
- Lausanne, Switzerland
- London, England
- Helsinki, Finland
- Abu Dhabi, United Arab Emirates

Notably, for the first time since the index's inception in 2019, there is an absence of North American cities in the top 20... The highest-ranking U.S. city this year is New York City, which ranked 34th, followed by Boston at 36th and Washington, D.C., at 50th.

AI

NIST Releases an Open-Source Platform for AI Safety Testing (scmagazine.com) 4

America's National Institute of Standards and Technology (NIST) has released a new open-source software tool called Dioptra for testing the resilience of machine learning models to various types of attacks.

"Key features that are new from the alpha release include a new web-based front end, user authentication, and provenance tracking of all the elements of an experiment, which enables reproducibility and verification of results," a NIST spokesperson told SC Media: Previous NIST research identified three main categories of attacks against machine learning algorithms: evasion, poisoning and oracle. Evasion attacks aim to trigger an inaccurate model response by manipulating the data input (for example, by adding noise), poisoning attacks aim to impede the model's accuracy by altering its training data, leading to incorrect associations, and oracle attacks aim to "reverse engineer" the model to gain information about its training dataset or parameters, according to NIST.

The free platform enables users to determine to what degree attacks in the three categories mentioned will affect model performance and can also be used to gauge the use of various defenses such as data sanitization or more robust training methods.

The open-source testbed has a modular design to support experimentation with different combinations of factors such as different models, training datasets, attack tactics and defenses. The newly released 1.0.0 version of Dioptra comes with a number of features to maximize its accessibility to first-party model developers, second-party model users or purchasers, third-party model testers or auditors, and researchers in the ML field alike. Along with its modular architecture design and user-friendly web interface, Dioptra 1.0.0 is also extensible and interoperable with Python plugins that add functionality... Dioptra tracks experiment histories, including inputs and resource snapshots that support traceable and reproducible testing, which can unveil insights that lead to more effective model development and defenses.
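
Of the three attack categories described above, evasion is the easiest to picture. Dioptra's own plugin interfaces aren't detailed here, so the following is only a generic, self-contained sketch of that category — a toy linear "model" and random input noise — and not the platform's actual API:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a trained model: a fixed linear classifier over two features.
    weights, bias = np.array([1.5, -2.0]), 0.25
    predict = lambda x: int(weights @ x + bias > 0)

    x_clean = np.array([0.40, 0.30])   # an input the toy model classifies as class 1
    print("clean prediction:", predict(x_clean))

    # Evasion in its simplest form (NIST's "adding noise" example): perturb the input
    # and count how often the perturbation flips the model's decision.
    flips = sum(predict(x_clean + rng.normal(scale=0.2, size=2)) != predict(x_clean)
                for _ in range(1000))
    print(f"decision flipped on {flips} of 1,000 noisy variants")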

NIST also published final versions of three "guidance" documents, according to the article. "The first tackles 12 unique risks of generative AI along with more than 200 recommended actions to help manage these risks. The second outlines Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, and the third provides a plan for global cooperation in the development of AI standards."

Thanks to Slashdot reader spatwei for sharing the news.
Programming

Coders Don't Fear AI, Reports Stack Overflow's Massive 2024 Survey (thenewstack.io) 134

Stack Overflow says over 65,000 developers took their annual survey — and "For the first time this year, we asked if developers felt AI was a threat to their job..."

Some analysis from The New Stack: Unsurprisingly, only 12% of surveyed developers believe AI is a threat to their current job. In fact, 70% are favorably inclined to use AI tools as part of their development workflow... Among those who use AI tools in their development workflow, 81% said productivity is one of its top benefits, followed by an ability to learn new skills quickly (62%). Much fewer (30%) said improved accuracy is a benefit. Professional developers' adoption of AI tools in the development process has risen rapidly, going from 44% in 2023 to 62% in 2024...

Seventy-one percent of developers with less than five years of experience reported using AI tools in their development process, as compared to just 49% of developers with 20 years of experience coding... At 82%, [ChatGPT] is twice as likely to have been used as GitHub Copilot. Among ChatGPT users, 74% want to continue using it.

But "only 43% said they trust the accuracy of AI tools," according to Stack Overflow's blog post, "and 45% believe AI tools struggle to handle complex tasks."

More analysis from The New Stack: The latest edition of the global annual survey found full-time employment is holding steady, with over 80% reporting that they have full-time jobs. The percentage of unemployed developers has more than doubled since 2019 but is still at a modest 4.4% worldwide... The median annual salary of survey respondents declined significantly. For example, the median 2024 salary of full-stack developers fell 11% compared to the previous year, to $63,333... Wage pressure may be the result of more competition from an increase in freelancing.

Eighteen percent of professional developers in the 2024 survey said they are independent contractors or self-employed, which is up from 9.5% in 2020. Part-time employment has also risen, presenting even more pressure on full-time salaries... Job losses at tech companies have contributed to a large influx of talent into the freelance market, noted Stack Overflow CEO Prashanth Chandrasekar in an interview with The New Stack. Since COVID-19, he added, the emphasis on remote work means more people value job flexibility. In the 2024 survey, only 20% have returned to full-time in-person work, 38% are full-time remote, while the remainder are in a hybrid situation. Anticipation of future productivity growth due to AI may also be creating uncertainty about how much to pay developers.

Two stats jumped out for Visual Studio magazine: In this year's big Stack Overflow developer survey things are much the same for Microsoft-centric data points: VS Code and Visual Studio still rule the IDE roost, while .NET maintains its No. 1 position among non-web frameworks. It's been this way for years, though in 2021 it was .NET Framework at No. 1 among frameworks, while the new .NET Core/.NET 5 entry was No. 3. Among IDEs, there has been less change. "Visual Studio Code is used by more than twice as many developers than its nearest (and related) alternative, Visual Studio," said the 2024 Stack Overflow Developer survey, the 14th in the series of massive reports.
Stack Overflow shared some other interesting statistics:
  • "Javascript (62%), HTML/CSS (53%), and Python (51%) top the list of most used languages for the second year in a row... [JavaScript] has been the most popular language every year since the inception of the Developer Survey in 2011."
  • "Python is the most desired language this year (users that did not indicate using this year but did indicate wanting to use next year), overtaking JavaScript."
  • "The language that most developers used and want to use again is Rust for the second year in a row with an 83% admiration rate. "
  • "Python is most popular for those learning to code..."
  • "Technical debt is a problem for 62% of developers, twice as much as the second- and third-most frustrating problems for developers: complex tech stacks for building and deployment."

Government

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge (msn.com) 43

Somewhere in America's Defense Department, the DARPA R&D agency is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post. [Alternate URL here.]

But as the Post sees it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it." Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties.

If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili.... Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them.
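
The actual contest entries are built around large language models, and none of their code is described here. Purely to illustrate the shape of the "find, prove, patch" loop Shoshitaishvili describes, here is a deliberately naive sketch in which a trivial pattern matcher stands in for the LLM stage; everything in it (the function names, the unsafe-call list, the sample C snippet) is invented for illustration:

    import re

    # Trivial stand-in for the "find" stage: a real contest entry would hand each
    # function to a large language model; here a regex flags a few classically
    # unsafe C calls.
    UNSAFE_CALLS = ("gets", "strcpy", "sprintf")

    def scan(c_source):
        findings = []
        for lineno, line in enumerate(c_source.splitlines(), start=1):
            for name in UNSAFE_CALLS:
                if re.search(rf"\b{name}\s*\(", line):
                    findings.append((lineno, name))
        return findings

    def annotate(c_source):
        # Toy "report" stage: mark flagged lines. A real cyber reasoning system must
        # go much further -- prove the flaw is reachable and synthesize a fix that
        # preserves the program's behavior.
        flagged = {lineno for lineno, _ in scan(c_source)}
        return "\n".join(line + ("  /* FIXME: unsafe call flagged */" if n in flagged else "")
                         for n, line in enumerate(c_source.splitlines(), start=1))

    sample = "void greet(char *name) {\n  char buf[8];\n  strcpy(buf, name);\n}"
    print(scan(sample))
    print(annotate(sample))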

"Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining a pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

40 teams entered the contest, according to the article — and seven received $1 million in funding to continue on to the next round, with the finalists to be announced at this year's Def Con.

"Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
AI

Journalists at 'The Atlantic' Demand Assurances Their Jobs Will Be Protected From OpenAI (msn.com) 57

"As media bosses scramble to decide if and how they should partner with AI companies, workers are increasingly concerned that the technology could imperil their jobs or degrade their work..." reports the Washington Post.

The latest example? "Two months after the Atlantic reached a licensing deal with OpenAI, staffers at the storied magazine are demanding the company ensure their jobs and work are protected." (Nearly 60 journalists have now signed a letter demanding the company "stop prioritizing its bottom line and champion the Atlantic's journalism.") The unionized staffers want the Atlantic bosses to include AI protections in the union contract, which the two sides have been negotiating since 2022. "Our editorial leaders say that The Atlantic is a magazine made by humans, for humans," the letter says. "We could not agree more..."

The Atlantic's new deal with OpenAI grants the tech firm access to the magazine's archives to train its AI tools. While the Atlantic in return will have special access to experiment with these AI tools, the magazine says it is not using AI to create journalism. But some journalists and media observers have raised concerns about whether AI tools are accurately and fairly manipulating the human-written text they work with. The Atlantic staffers' letter noted a pattern by ChatGPT of generating gibberish web addresses instead of the links intended to attribute the reporting it has borrowed, as well as sending readers to sites that have summarized Atlantic stories rather than the original work...

Atlantic spokeswoman Anna Bross said company leaders "agree with the general principles" expressed by the union. For that reason, she said, they recently proposed a commitment not to use AI to publish content "without human review and editorial oversight." Representatives from the Atlantic Union bargaining committee told The Washington Post that "the fact remains that the company has flatly refused to commit to not replacing employees with AI."

The article also notes that last month the union representing Lifehacker, Mashable and PCMag journalists "ratified a contract that protects union members from being laid off because AI has impacted their roles and requires the company to discuss any such plans to implement AI tools ahead of time."
Programming

Go Tech Lead Russ Cox Steps Down to Focus on AI-Powered Open-Source Contributor Bot (google.com) 12

Thursday Go's long-time tech lead Russ Cox made an announcement: Starting September 1, Austin Clements will be taking over as the tech lead of Go: both the Go team at Google and the overall Go project. Austin is currently the tech lead for what we sometimes call the "Go core", which encompasses compiler toolchain, runtime, and releases. Cherry Mui will be stepping up to lead those areas.

I am not leaving the Go project, but I think the time is right for a change... I will be shifting my focus to work more on Gaby [or "Go AI bot," an open-source contributor agent] and Oscar [an open-source contributor agent architecture], trying to make useful contributions in the Go issue tracker to help all of you work more productively. I am hopeful that work on Oscar will uncover ways to help open source maintainers that will be adopted by other projects, just like some of Go's best ideas have been adopted by other projects. At the highest level, my goals for Oscar are to build something useful, learn something new, and chart a path for other projects. These are the same broad goals I've always had for our work on Go, so in that sense Oscar feels like a natural continuation.

The post notes that new tech lead Austin Clements "has been working on Go at Google since 2014" (and Mui since 2016). "Their judgment is superb and their knowledge of Go and the systems it runs on both broad and deep. When I have general design questions or need to better understand details of the compiler, linker, or runtime, I turn to them." It's important to remember that tech lead — like any position of leadership — is a service role, not an honorary title. I have been leading the Go project for over 12 years, serving all of you, and trying to create the right conditions for all of you to do your best work. Large projects like Go absolutely benefit from stable leadership, but they can also benefit from leadership changes. New leaders bring new strengths and fresh perspectives. For Go, I think 12+ years of one leader is enough stability; it's time for someone new to serve in this role.

In particular, I don't believe that the "BDFL" (benevolent dictator for life) model is healthy for a person or a project. It doesn't create space for new leaders. It's a single point of failure. It doesn't give the project room to grow. I think Python benefited greatly from Guido stepping down in 2018 and letting other people lead, and I've had in the back of my mind for many years that we should have a Go leadership change eventually....

I am going to consciously step back from decision making and create space for Austin and the others to step forward, but I am not disappearing. I will still be available to talk about Go designs, review CLs, answer obscure history questions, and generally help and support you all in whatever way I can. I will still file issues and send CLs from time to time, I have been working on a few potential new standard libraries, I will still advocate for Go across the industry, and I will be speaking about Go at GoLab in Italy in November...

I am incredibly proud of the work we have all accomplished together, and I am confident in the leaders both on the Go team at Google and in the Go community. You are all doing remarkable work, and I know you will continue to do that.

Power

Could AI Speed Up the Design of Nuclear Reactors? (byu.edu) 156

A professor at Brigham Young University "has figured out a way to shave critical years off the complicated design and licensing processes for modern nuclear reactors," according to an announcement from the university.

"AI is teaming up with nuclear power." The typical time frame and cost to license a new nuclear reactor design in the United States is roughly 20 years and $1 billion. To then build that reactor requires an additional five years and between $5 and $30 billion. By using AI in the time-consuming computational design process, [chemical engineering professor Matt] Memmott estimates a decade or more could be cut off the overall timeline, saving millions and millions of dollars in the process — which should prove critical given the nation's looming energy needs.... "Being able to reduce the time and cost to produce and license nuclear reactors will make that power cheaper and a more viable option for environmentally friendly power to meet the future demand...."

Engineers deal with elements from neutrons on the quantum scale all the way up to coolant flow and heat transfer on the macro scale. [Memmott] also said there are multiple layers of physics that are "tightly coupled" in that process: the movement of neutrons is tightly coupled to the heat transfer, which is tightly coupled to materials, which is tightly coupled to the corrosion, which is coupled to the coolant flow. "A lot of these reactor design problems are so massive and involve so much data that it takes months of teams of people working together to resolve the issues," he said... Memmott is finding AI can reduce that heavy time burden and lead to more power production to not only meet rising demands, but to also keep power costs down for general consumers...

Technically speaking, Memmott's research proves the concept of replacing a portion of the required thermal hydraulic and neutronics simulations with a trained machine learning model to predict temperature profiles based on geometric reactor parameters that are variable, and then optimizing those parameters. The result would create an optimal nuclear reactor design at a fraction of the computational expense required by traditional design methods. For his research, he and BYU colleagues built a dozen machine learning algorithms to examine their ability to process the simulated data needed in designing a reactor. They identified the top three algorithms, then refined the parameters until they found one that worked really well and could handle a preliminary data set as a proof of concept. It worked (and they published a paper on it) so they took the model and (for a second paper) put it to the test on a very difficult nuclear design problem: optimal nuclear shield design.
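
The announcement doesn't name the specific algorithms or reactor parameters the BYU team used, so the following is only a schematic sketch of the surrogate-plus-optimization pattern the paragraph describes: run an expensive "simulation" (here a made-up toy function) a limited number of times, fit a fast regression model on the results, then search the design space with the cheap surrogate.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)

    # Made-up stand-in for an expensive physics code: "peak temperature" as a toy
    # function of two geometric parameters (say, channel width and shield thickness).
    def expensive_simulation(x):
        w, t = x
        return 600 + 80 * (w - 0.5) ** 2 + 50 * np.exp(-3 * t) + rng.normal(scale=1.0)

    # 1. Run the costly simulation a limited number of times to build training data.
    X_train = rng.uniform([0.1, 0.1], [1.0, 1.0], size=(200, 2))
    y_train = np.array([expensive_simulation(x) for x in X_train])

    # 2. Fit a fast surrogate that predicts temperature directly from geometry.
    surrogate = GradientBoostingRegressor().fit(X_train, y_train)

    # 3. Optimize over the cheap surrogate (a dense grid here) instead of the
    #    simulator, then spot-check the winning geometry with one real run.
    grid = np.stack(np.meshgrid(np.linspace(0.1, 1.0, 200),
                                np.linspace(0.1, 1.0, 200)), axis=-1).reshape(-1, 2)
    best = grid[np.argmin(surrogate.predict(grid))]
    print("surrogate-optimal geometry:", best)
    print("verification run:", expensive_simulation(best))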

The resulting papers, recently published in academic journal Nuclear Engineering and Design, showed that their refined model can geometrically optimize the design elements much faster than the traditional method.

In two days Memmott's AI algorithm determined an optimal nuclear-reactor shield design that had taken a real-world molten salt reactor company six months to produce. "Of course, humans still ultimately make the final design decisions and carry out all the safety assessments," Memmott says in the announcement, "but it saves a significant amount of time at the front end....

"Our demand for electricity is going to skyrocket in years to come and we need to figure out how to produce additional power quickly. The only baseload power we can make in the Gigawatt quantities needed that is completely emissions free is nuclear power."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
