Ubuntu

Canonical Says Qualcomm Has Joined Ubuntu's 'Silicon Partner' Program (webpronews.com) 8

Intel, Nvidia, AMD, and Arm are among Canonical's "silicon partners," a program that "ensures maximum Ubuntu compatibility and long-term support with certified hardware," according to Web Pro News.

And now Qualcomm is set to be Canonical's next silicon partner, "giving Qualcomm access to optimized versions of Ubuntu for its processors." Companies looking to use Ubuntu on Qualcomm chips will benefit from an OS that provides 10 years of support and security updates.

The collaboration is expected to be a boon for AI, edge computing, and IoT applications. "The combination of Qualcomm Technologies' processors with the popularity of Ubuntu among AI and IoT developers is a game changer for the industry," commented Dev Singh, Vice President, Business Development and Head of Building, Enterprise & Industrial Automation, Qualcomm Technologies, Inc...

"Optimised Ubuntu and Ubuntu Core images will be available for Qualcomm SoCs," according to the announcement, "enabling enterprises to meet their regulatory, compliance and security demands for AI at the edge and the broader IoT market with a secure operating system that is supported for 10 years." Qualcomm Technologies chose to partner with Canonical to create an optimised Ubuntu for Qualcomm IoT chipsets, giving developers an easy path to create safe, compliant, security-focused, and high-performing applications for multiple industries including industrial, robotics and edge automation...

Developers and enterprises can benefit from the Ubuntu Certified Hardware program, which features a growing list of certified ODM boards and devices based on Qualcomm SoCs. These certified devices deliver an optimised Ubuntu experience out-of-the-box, enabling developers to focus on developing applications and bringing products to market.

AI

AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes (sciencealert.com) 315

An anonymous reader shared this report from ScienceAlert: The Fermi Paradox is the discrepancy between the apparent high likelihood of advanced civilizations existing and the total lack of evidence that they do exist. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the 'Great Filter.' The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise....

[H]ow about the rapid development of AI?

A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper's title is "Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?"

"Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," the paper explains... The author says their projects "underscore the critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multiplanetary society to mitigate against such existential threats."

"The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and
AI

Adobe Firefly Used Thousands of Midjourney Images In Training Its 'Ethical AI' Model (tomsguide.com) 11

According to Bloomberg, Adobe used images from its competitor Midjourney to train its own artificial intelligence image generator, Firefly -- contradicting the "commercially safe" ethical standards the company promotes. Tom's Guide reports: Midjourney has never declared the source of its training data, but many suspect it was scraped from the internet without licensing. Adobe says only about 5% of the millions of images used to train Firefly were AI-generated, and all of them were part of the Adobe Stock library, which meant they'd been through a "rigorous moderation process."

When Adobe first launched Firefly, it offered its enterprise customers an indemnity against copyright-theft claims as a way to convince them it was safe. Adobe also sold Firefly as the safe alternative to the likes of Midjourney and DALL-E, since all the training data had been licensed and cleared for use. Not all artists were keen at the time, and some felt they were coerced into agreeing to let their work be used by the creative tech giant -- but the sense was that any image made with Firefly was safe to use without risk of being sued for copyright theft.

Despite the revelation that some of the images came from potentially less reputable sources, Adobe says all of the non-human-made pictures are still safe. A spokesperson told Bloomberg: "Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists' names." The company seems to be taking a slightly more rigorous approach with its plans to build an AI video generator. Rumors suggest it is paying artists per minute for video clips.

AI

Adobe Is Buying Videos for $3 Per Minute To Build AI Model (yahoo.com) 18

Adobe has begun to procure videos to build its AI text-to-video generator, trying to catch up to competitors after OpenAI demonstrated a similar technology. From a report: The software company is offering its network of photographers and artists $120 to submit videos of people engaged in everyday actions such as walking or expressing emotions including joy and anger, according to documents seen by Bloomberg. The goal is to source assets for artificial intelligence training, the company wrote.

Over the past year, Adobe has focused on adding generative AI features to its portfolio of software for creative professionals, including Photoshop and Illustrator. [...] Adobe is requesting more than 100 short clips of people engaged in actions and showing emotions as well as simple anatomy shots of feet, hands or eyes. The company also wants video of people "interacting with objects" such as smartphones or fitness equipment. It cautions against providing copyrighted material, nudity or other "offensive content." Pay for the submission works out, on average, to about $2.62 per minute of submitted video, although it could be as much as about $7.25 per minute.
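
As a rough cross-check of those pay figures, dividing the flat fee by the quoted per-minute rates implies how much footage a typical submission contains. A minimal sketch, assuming the $120 figure covers a single submission batch:

```python
# Back-of-envelope check of the reported figures (illustrative only;
# assumes the $120 fee applies to one submission batch).
flat_fee = 120.00   # reported payment per submission, USD
avg_rate = 2.62     # reported average pay per minute of video, USD
top_rate = 7.25     # reported top pay per minute, USD

print(f"Implied footage at the average rate: {flat_fee / avg_rate:.1f} minutes")
print(f"Implied footage at the top rate: {flat_fee / top_rate:.1f} minutes")
```

That works out to roughly 46 minutes of video at the average rate, or about 17 minutes at the top rate -- consistent with Adobe asking for 100-plus short clips per submission.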

AI

Many AI Products Still Rely on Humans To Fill the Performance Gaps (bloomberg.com) 51

An anonymous reader shares a report: Recent headlines have made it clear: If AI is doing an impressively good job at a human task, there's a good chance that the task is actually being done by a human. When George Carlin's estate sued the creators of a podcast who said they used AI to create a standup routine in the late comedian's style, the podcasters claimed that the script had actually been generated by a human named Chad. (The two sides recently settled the suit.) A company making AI-powered voice interfaces for fast-food drive-thrus can complete only 30% of jobs without the help of a human reviewing its work. Amazon is dropping its automated "Just Walk Out" checkout systems from new stores -- a technology that relied on far more human verification than the company had hoped.

We've seen this before -- though it may already be lost to Silicon Valley's pathologically short memory. Back in 2015, AI chatbots were the hot thing. Tech giants and startups alike pitched them as always-available, always-chipper, always-reliable assistants. One startup, x.ai, advertised an AI assistant who could read your emails and schedule your meetings. Another, GoButler, offered to book your flights or order your fries through a delivery app. Facebook also tested a do-anything concierge service called M, which could answer seemingly any question, do almost any task, and draw you pictures on demand. But for all of those services, the "AI assistant" was often just a person. Back in 2016, I wrote a story about this and interviewed workers whose job it was to be the human hiding behind the bot, making sure the bot never made a mistake or spoke nonsense.

Cloud

Irish Power Crunch Could Be Prompting AWS To Ration Compute Resources (theregister.com) 16

Datacenter power issues in Ireland may be coming to a head amid reports from customers that Amazon is restricting resources users can spin up in that nation, even directing them to other AWS regions across Europe instead. From a report: Energy consumed by datacenters is a growing concern, especially in places such as Ireland where there are clusters of facilities around Dublin that already account for a significant share of the country's energy supply. This may be leading to restrictions on how much infrastructure can be used, given the power requirements. AWS users have informed The Register that there are sometimes limits on the resources that they can access in its Ireland bit barn, home to Amazon's eu-west-1 region, especially with power-hungry instances that make use of GPUs to accelerate workloads such as AI.

"You cannot spin up GPU nodes in AWS Dublin as those locations are maxed out power-wise. There is reserved capacity for EC2 just in case," one source told us. "If you have a problem with that, AWS Europe will point you at spare capacity in Sweden and other parts of the EU." We asked AWS about these issues, but when it finally responded the company was somewhat evasive. "Ireland remains core to our global infrastructure strategy, and we will continue to work with customers to understand their needs, and help them to scale and grow their business," a spokesperson told us. Ireland's power grid operator, EirGrid, was likewise less than direct when we asked if they were limiting the amount of power datacenters could consume.

Canada

Canadian Legislators Accused of Using AI To Produce 20,000 Amendments (www.cbc.ca) 62

sinij shares a report: Members of Parliament in Canada are expected to vote for up to 15 hours in a row Thursday and Friday on more than 200 Conservative amendments to the government's sustainable jobs bill. The amendments are what's left of nearly 20,000 changes the Conservatives proposed to Bill C-50 last fall at a House of Commons committee. Liberals now contend the Conservatives came up with the amendments using artificial intelligence in order to gum up the government's agenda. The Conservatives deny that accusation.
AI

OpenAI Makes ChatGPT 'More Direct, Less Verbose' (techcrunch.com) 36

Kyle Wiggers reports via TechCrunch: OpenAI announced today that premium ChatGPT users -- customers paying for ChatGPT Plus, Team or Enterprise -- can now leverage an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience. This new model ("gpt-4-turbo-2024-04-09") brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off. "When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language," OpenAI writes in a post on X.
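
For API users, the same snapshot is addressable by its dated name. A minimal sketch using the OpenAI Python SDK (v1+), assuming an OPENAI_API_KEY environment variable; the prompt is illustrative:

```python
# Call the GPT-4 Turbo snapshot named in the announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",  # the updated snapshot
    messages=[{"role": "user", "content": "Summarize the Fermi paradox in one sentence."}],
)
print(response.choices[0].message.content)
```
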
Education

Students Are Likely Writing Millions of Papers With AI 115

Amanda Hoover reports via Wired: Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows. A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, and 3 percent of the total papers reviewed were flagged as having 80 percent or more AI writing. Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
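
Those percentages are easier to grasp in absolute terms, and they also show why even a sub-1 percent false positive rate matters at this scale. A quick worked calculation from the reported figures:

```python
# Illustrative arithmetic from the figures reported by Turnitin.
papers_reviewed = 200_000_000

flagged_20 = 0.11 * papers_reviewed  # 20%+ of content flagged as AI-written
flagged_80 = 0.03 * papers_reviewed  # 80%+ of content flagged as AI-written
fp_bound   = 0.01 * papers_reviewed  # upper bound implied by the <1% FP rate

print(f"Papers flagged at the 20% threshold: {flagged_20:,.0f}")   # 22,000,000
print(f"Papers flagged at the 80% threshold: {flagged_80:,.0f}")   # 6,000,000
print(f"Possible false positives (upper bound): {fp_bound:,.0f}")  # 2,000,000
```

The 22 million figure in the headline is simply 11 percent of the 200 million papers reviewed; the same arithmetic says as many as 2 million flags could be false positives.
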
Education

Code.org Launches AI Teaching Assistant For Grades 6-10 In Stanford Partnership (illinois.edu) 16

theodp writes: From a Wednesday press release: "Code.org, in collaboration with The Piech Lab at Stanford University, launched today its AI Teaching Assistant, ushering in a new era of computer science instruction to support teachers in preparing students with the foundational skills necessary to work, live and thrive in an AI world. [...] Launching as a part of Code.org's leading Computer Science Discoveries (CSD) curriculum [for grades 6-10], the tool is designed to bolster teacher confidence in teaching computer science." EdWeek reports that in a limited pilot project involving twenty teachers nationwide, the AI computer science grading tool cut one middle school teacher's grading time in half. Code.org is now inviting an additional 300 teachers to give the tool a try. "Many teachers who lead computer science courses," EdWeek notes, "don't have a degree in the subject -- or even much training on how to teach it -- and might be the only educator in their school leading a computer science course."

Stanford's Piech Lab is headed by assistant professor of CS Chris Piech, who also runs the wildly successful free Code in Place MOOC (30,000+ learners and counting), which teaches fundamentals from Stanford's flagship introduction to Python course. Prior to coming up with the new AI teaching assistant, which automatically assesses Code.org students' JavaScript game code, Piech worked on a Stanford research team that partnered with Code.org nearly a decade ago to create algorithms to generate hints for K-12 students trying to solve Code.org's Hour of Code block-based programming puzzles (2015 paper [PDF]). And several years ago, Piech's lab again teamed with Code.org on Play-to-Grade, which sought to "provide scalable automated grading on all types of coding assignments" by analyzing the game play of Code.org students' projects. Play-to-Grade, a 2022 paper (PDF) noted, was "supported in part by a Stanford Hoffman-Yee Human Centered AI grant" for AI tutors to help prepare students for the 21st century workforce. That project also aimed to develop a "Super Teaching Assistant" for Piech's Code in Place MOOC. LinkedIn co-founder Reid Hoffman, who was present for the presentation of the 'AI Tutors' work he and his wife funded, is a Code.org Diamond Supporter ($1+ million).
In other AI grading news, Texas will use computers to grade written answers on this year's STAAR tests. The state will save more than $15 million by using technology similar to ChatGPT to give initial scores, reducing the number of human graders needed.
AI

Humane AI Pin Review Roundup 41

The embargo has lifted for reviews of Humane's AI Pin, and the general consensus appears to be that this device isn't ready to usher us into the all-but-inevitable AI future. Starting at $699 with a pricey $24-a-month subscription, the wearable device is designed to incorporate artificial intelligence into everyday scenarios, with the ability to make calls, translate languages, recommend nearby restaurants, and capture photos and videos. "The best description so far is that it's a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm," writes Cherlynn Low via Engadget. While full of potential, the AI Pin creates more problems than it solves, and many of the features you'd intuitively expect from it aren't supported at launch.

Here's a roundup of some of the first reviews:

Engadget: The Humane AI Pin is the solution to none of technology's problems
The Verge: Humane AI Pin review: not even close
Wired: Humane Ai Pin Review: Too Clunky, Too Limited
The Washington Post: I've been living with a $699 AI Pin on my chest. You probably shouldn't.
CNET: Humane AI Hands-On: My Life So Far With a Wearable AI Pin
AI

US Lawmaker Proposes a Public Database of All AI Training Material 30

An anonymous reader quotes a report from Ars Technica: Amid a flurry of lawsuits over AI models' training data, US Representative Adam Schiff (D-Calif.) has introduced (PDF) a bill that would require AI companies to disclose exactly which copyrighted works are included in datasets training AI systems. The Generative AI Disclosure Act "would require a notice to be submitted to the Register of Copyrights prior to the release of a new generative AI system with regard to all copyrighted works used in building or altering the training dataset for that system," Schiff said in a press release.

The bill is retroactive and would apply to all AI systems available today, as well as to all AI systems to come. It would take effect 180 days after it's enacted, requiring anyone who creates or alters a training set not only to list works referenced by the dataset, but also to provide a URL to the dataset within 30 days before the AI system is released to the public. That URL would presumably give creators a way to double-check if their materials have been used and seek any credit or compensation available before the AI tools are in use. All notices would be kept in a publicly available online database.
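
To make the mechanics concrete, here is a hypothetical sketch of what such a notice record might look like; the field names and the timing check are illustrative, not drawn from the bill's actual text:

```python
# Hypothetical disclosure-notice record for the proposed public database.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TrainingDataNotice:
    system_name: str              # the generative AI system being released
    dataset_url: str              # public URL describing the training dataset
    copyrighted_works: list[str]  # copyrighted works used to build/alter it
    filed_on: date                # date the notice reached the Register

    def filed_in_time(self, public_release: date) -> bool:
        """The bill requires filing at least 30 days before public release."""
        return self.filed_on <= public_release - timedelta(days=30)
```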

Currently, creators who don't have access to training datasets rely on AI models' outputs to figure out if their copyrighted works may have been included in training various AI systems. The New York Times, for example, identified likely training data by prompting ChatGPT to spit out excerpts of specific articles -- a tactic OpenAI has curiously described as "hacking." Under Schiff's bill, The New York Times would only need to consult the database to identify all articles used to train ChatGPT or any other AI system. Any AI maker who violates the act would risk a "civil penalty in an amount not less than $5,000," the proposed bill said.
Schiff described the act as championing "innovation while safeguarding the rights and contributions of creators, ensuring they are aware when their work contributes to AI training datasets."

"This is about respecting creativity in the age of AI and marrying technological progress with fairness," Schiff said.
Desktops (Apple)

Apple Plans To Overhaul Entire Mac Line With AI-Focused M4 Chips 107

Apple, aiming to boost sluggish computer sales, is preparing to overhaul its entire Mac line with a new family of in-house processors designed to highlight AI. Bloomberg News: The company, which released its first Macs with M3 chips five months ago, is already nearing production of the next generation -- the M4 processor -- according to people with knowledge of the matter. The new chip will come in at least three main varieties, and Apple is looking to update every Mac model with it, said the people, who asked not to be identified because the plans haven't been announced.

The new Macs are underway at a critical time. After peaking in 2022, Mac sales fell 27% in the last fiscal year, which ended in September. In the holiday period, revenue from the computer line was flat. Apple attempted to breathe new life into the Mac business with an M3-focused launch event last October, but those chips didn't bring major performance improvements over the M2 from the prior year. Apple also is playing catch-up in AI, where it's seen as a laggard to Microsoft, Alphabet's Google and other tech peers. The new chips are part of a broader push to weave AI capabilities into all its products. Apple is aiming to release the updated computers beginning late this year and extending into early next year.
AI

Amazon Adds AI Expert Andrew Ng To Board as GenAI Race Heats Up (reuters.com) 11

Amazon on Thursday added Andrew Ng, the computer scientist who led AI projects at Alphabet's Google and China's Baidu, to its board, amid rising competition among Big Tech companies to add users for their genAI products. From a report: Amazon's cloud unit is facing pressure from Microsoft's early pact with ChatGPT-maker OpenAI and the integration of its technology into Azure, while Amazon's Alexa voice assistant is in a race with genAI chat tools from OpenAI and Google.

The appointment, effective April 9, also follows job cuts across Amazon, which has seen enterprise cloud spending and e-commerce sales moderate due to macroeconomic factors such as inflation and high interest rates. "As we look toward 2024 (and beyond), we're not done lowering our cost to serve," CEO Andy Jassy said in a letter to shareholders on Thursday.

AI

UK To Deploy Facial Recognition For Shoplifting Crackdown (theguardian.com) 113

Bruce66423 shares a report from The Guardian, with the caption: "The UK is hyperventilating about stories of shoplifting; though standing outside a shop and watching as a guy calmly gets off his bike, parks it, walks in and walks out with a pack of beer and cycles off -- and then seeing staff members rushing out -- was striking. So now it's throwing technical solutions at the problem..." From the report: The government is investing more than 55 million pounds in expanding facial recognition systems -- including vans that will scan crowded high streets -- as part of a renewed crackdown on shoplifting. The scheme was announced alongside plans for tougher punishments for serial or abusive shoplifters in England and Wales, including being forced to wear a tag to ensure they do not revisit the scene of their crime, under a new standalone criminal offense of assaulting a retail worker.

The new law, under which perpetrators could be sent to prison for up to six months and receive unlimited fines, will be introduced via an amendment to the criminal justice bill that is working its way through parliament. The change could happen as early as the summer. The government said it would invest 55.5 million pounds over the next four years. The plan includes 4 million pounds for mobile units that can be deployed on high streets using live facial recognition in crowded areas to identify people wanted by the police -- including repeat shoplifters.
"This Orwellian tech has no place in Britain," said Silkie Carlo, director of civil liberties at campaign group Big Brother Watch. "Criminals should be brought to justice, but papering over the cracks of broken policing with Orwellian tech is not the solution. It is completely absurd to inflict mass surveillance on the general public under the premise of fighting theft while police are failing to even turn up to 40% of violent shoplifting incidents or to properly investigate many more serious crimes."
AI

Google's AI Photo Editing Tools Are Expanding To a Lot More Phones (theverge.com) 7

Starting May 15th, almost all Google Photos users will be able to access the AI photo editing features previously limited to Pixel owners and Google One subscribers. All you'll need is a device with at least a 64-bit chip, 4GB of RAM, and either iOS 15 or Android 8.0. The Verge reports: Magic Editor is Google's generative AI photo editing tool, and it debuted as one of the headline AI features on the Pixel 8 and 8 Pro. Those kinds of features typically remain exclusive to new Pixels for six months after launch, and right on time, Google's bringing it to previous Pixel phones. But it's not stopping there; any Google Photos user with an Android or iOS device that meets the minimum requirements will be able to use it without a Google One subscription -- you'll just be limited to 10 saved edits per month. Pixel owners and paid subscribers, however, will get unlimited use.

Older features like Photo Unblur and Magic Eraser -- which used to be available only to Pixel owners and certain Google One subscribers -- will be free for all Photos users. Google has a full list of these features on its Photos community site, and it includes things like editing portrait mode blur and lighting effects (useful, but not the cutting-edge stuff, for better or worse). Other generative AI features that launched with the Pixel 8 series, like Best Take and Audio Magic Eraser, are remaining exclusive to those newest Pixels, at least for now.
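
The gate, in other words, is a simple minimum-spec check. An illustrative sketch of the published requirements (this is not Google's actual gating code):

```python
# Minimum requirements for the expanded Magic Editor rollout, per Google.
def magic_editor_eligible(is_64bit: bool, ram_gb: float, os_name: str,
                          os_version: float) -> bool:
    meets_os = ((os_name == "android" and os_version >= 8.0)
                or (os_name == "ios" and os_version >= 15))
    return is_64bit and ram_gb >= 4 and meets_os

print(magic_editor_eligible(True, 4, "android", 8.0))  # True: meets the minimums
print(magic_editor_eligible(True, 3, "ios", 16))       # False: under 4GB of RAM
```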

United States

New Bill Would Force AI Companies To Reveal Use of Copyrighted Art (theguardian.com) 57

A bill introduced in the US Congress on Tuesday intends to force AI companies to reveal the copyrighted material they use to make their generative AI models. From a report: The legislation adds to a growing number of attempts from lawmakers, news outlets and artists to establish how AI firms use creative works like songs, visual art, books and movies to train their software -- and whether those companies are illegally building their tools off copyrighted content.

The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require that AI companies submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users' prompts. The bill would require companies to file such documents at least 30 days before publicly debuting their AI tools, or face a financial penalty. Such datasets encompass billions of lines of text and images or millions of hours of music and movies.

"AI has the disruptive potential of changing our economy, our political system, and our day-to-day lives. We must balance the immense potential of AI with the crucial need for ethical guidelines and protections," Schiff said in a statement. Whether major AI companies worth billions have made illegal use of copyrighted works is increasingly the source of litigation and government investigation. Schiff's bill would not ban AI from training on copyrighted material, but would put a sizable onus on companies to list the massive swath of works that they use to build tools like ChatGPT -- data that is usually kept private.

United States

The US is Right To Target TikTok, Says Vinod Khosla (ft.com) 90

Vinod Khosla, the founder of venture capital firm Khosla Ventures, opines on the bill that seeks to ban TikTok or force its parent firm to divest the U.S. business: Even if one could argue that this bill strikes at the First Amendment, there is legal precedent for doing so. In 1981, Haig vs Agee established that there are circumstances under which the government can lawfully impinge upon an individual's First Amendment rights if it is necessary to protect national security and prevent substantial harm. TikTok and the AI that can be channelled through it are national and homeland security issues that meet these standards.

Should this bill turn into law, the president would have the power to force any foreign-owned social media to be sold if US intelligence agencies deem them a national security threat. This broader scope should protect against challenges that this is a bill of attainder. Similar language helped protect effective bans on Huawei and Kaspersky Lab. As for TikTok's value as a boon to consumers and businesses, there are many companies that could quickly replace it. In 2020, after India banned TikTok amid geopolitical tensions between Beijing and New Delhi, services including Instagram Reels, YouTube Shorts, MX TakaTak, Chingari and others filled the void.

Few appreciate that TikTok is not available in China. Instead, Chinese consumers use Douyin, the sister app that features educational and patriotic videos, and is limited to 40 minutes per day of total usage. Spinach for Chinese kids, fentanyl -- another chief export of China's -- for ours. Worse still, TikTok is a programmable fentanyl whose effects are under the control of the CCP.

AI

AI Hardware Company From Jony Ive, Sam Altman Seeks $1 Billion In Funding 52

An anonymous reader quotes a report from Ars Technica: Former Apple design lead Jony Ive and current OpenAI CEO Sam Altman are seeking funding for a new company that will produce an "artificial intelligence-powered personal device," according to The Information's sources, who are said to be familiar with the plans. The exact nature of the device is unknown, but it will not look anything like a smartphone, according to the sources. We first heard tell of this venture in the fall of 2023, but The Information's story reveals that talks are moving forward to get the company off the ground.

Ive and Altman hope to raise at least $1 billion for the new company. The complete list of potential funding sources they've spoken with is unknown, but The Information's sources say they are in talks with frequent OpenAI investor Thrive Capital as well as Emerson Collective, a venture capital firm founded by Laurene Powell Jobs. SoftBank CEO and super-investor Masayoshi Son is also said to have spoken with Altman and Ive about the venture. Financial Times previously reported that Son wanted Arm (another company he has backed) to be involved in the project. [...] Altman already has his hands in several other AI ventures besides OpenAI. The Information reports that there is no indication yet that OpenAI would be directly involved in the new hardware company.
AI

Texas Will Use Computers To Grade Written Answers On This Year's STAAR Tests 41

Keaton Peters reports via the Texas Tribune: Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state's standardized tests will be graded automatically by computers. The Texas Education Agency is rolling out an "automated scoring engine" for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing, the same technology that underpins artificial intelligence chatbots such as GPT-4, will save the state agency about $15-20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students' understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions -- known as constructed response items. After the redesign, there are six to seven times more constructed response items. "We wanted to keep as many constructed open ended responses as we can, but they take an incredible amount of time to score," said Jose Rios, director of student assessment at the Texas Education Agency. In 2023, Rios said TEA hired about 6,000 temporary scorers, but this year, it will need under 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given. This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans. When the computer has "low confidence" in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.
"In addition to 'low confidence' scores and responses that do not fit in the computer's programming, a random sample of responses will also be automatically handed off to humans to check the computer's work," notes Peters. While similar to ChatGPT, TEA officials have resisted the suggestion that the scoring engine is artificial intelligence. They note that the process doesn't "learn" from the responses and always defers to its original programming set up by the state.
