×
AI

OpenAI Exec Says Today's ChatGPT Will Be 'Laughably Bad' In 12 Months (businessinsider.com) 68

At the 27th annual Milken Institute Global Conference on Monday, OpenAI COO Brad Lightcap said today's ChatGPT chatbot "will be laughably bad" compared to what it'll be capable of a year from now. "We think we're going to move toward a world where they're much more capable," he added. Business Insider reports: Lightcap says large language models, which people use to help do their jobs and meet their personal goals, will soon be able to take on "more complex work." He adds that AI will have more of a "system relationship" with users, meaning the technology will serve as a "great teammate" that can assist users on "any given problem." "That's going to be a different way of using software," the OpenAI exec said on the panel regarding AI's foreseeable capabilities.

In light of his predictions, Lightcap acknowledges that it can be tough for people to "really understand" and "internalize" what a world with robot assistants would look like. But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm. "I think that's a profound shift that we haven't quite grasped," he said, referring to his 10-year forecast. "We're just scratching the surface on the full kind of set of capabilities that these systems have," he said at the Milken Institute conference. "That's going to surprise us."
You can watch/listen to the talk here.
Transportation

UK Startup 'Wayve' Gets $1 Billion Funding For Self-Driving Car Tech (bbc.com) 3

Wayve, a UK-based AI firm focused on developing self-driving car technology, has secured a record $1.05 billion in funding, with Microsoft and Nvidia participating in the round led by SoftBank. According to the BBC, this investment is the largest for an AI company in Europe. The BBC reports: Wayve says the funding will allow it to help build the autonomous cars of the future. [...] Wayve is developing technology intended to power future self-driving vehicles by using what it calls "embodied AI." Unlike AI models carrying out cognitive or generative tasks such as answering questions or creating pictures, this new technology interacts with and learns from real-world surroundings and environments. "[The investment] sends a crucial signal to the market of the strength of the UK's AI ecosystem, and we look forward to watching more AI companies here thrive and scale," said Wayve head Alex Kendall.
Hardware

Apple Announces M4 With More CPU Cores and AI Focus (arstechnica.com) 66

An anonymous reader quotes a report from Ars Technica: In a major shake-up of its chip roadmap, Apple has announced a new M4 processor for today's iPad Pro refresh, barely six months after releasing the first MacBook Pros with the M3 and not even two months after updating the MacBook Air with the M3. Apple says the M4 includes "up to" four high-performance CPU cores, six high-efficiency cores, and a 10-core GPU. Apple's high-level performance estimates say that the M4 has 50 percent faster CPU performance and four times as much graphics performance. Like the GPU in the M3, the M4 also supports hardware-accelerated ray-tracing to enable more advanced lighting effects in games and other apps. Due partly to its "second-generation" 3 nm manufacturing process, Apple says the M4 can match the performance of the M2 while using just half the power.

As with so much else in the tech industry right now, the M4 also has an AI focus; Apple says it's beefing up the 16-core Neural Engine (Apple's equivalent of the Neural Processing Unit that companies like Qualcomm, Intel, AMD, and Microsoft have been pushing lately). Apple says the M4 runs up to 38 trillion operations per second (TOPS), considerably ahead of Intel's Meteor Lake platform, though a bit short of the 45 TOPS that Qualcomm is promising with the Snapdragon X Elite and Plus series. The M3's Neural Engine is only capable of 18 TOPS, so that's a major step up for Apple's hardware. Apple's chips since 2017 have included some version of the Neural Engine, though to date, those have mostly been used to enhance and categorize photos, perform optical character recognition, enable offline dictation, and do other oddities. But it may be that Apple needs something faster for the kinds of on-device large language model-backed generative AI that it's expected to introduce in iOS and iPadOS 18 at WWDC next month.
A separate report from the Wall Street Journal says Apple is developing a custom chip to run AI software in datacenters. "Apple's server chip will likely be focused on running AI models, also known as inference, rather than in training AI models, where Nvidia is dominant," reports Reuters.

Further reading: Apple Quietly Kills the Old-school iPad and Its Headphone Jack
Google

Google's Pixel 8A is a Midrange Phone That Might Actually Go the Distance (theverge.com) 35

The Pixel 8A is officially here. The 8A gets Google's latest processor, adds a bunch of new AI features, and still starts at $499 in the US. But the very best news is that the 8A adopts the Pixel 8 and 8 Pro's seven years of software support, which is just unheard of in a midrange phone. From a report: The 8A retains the same general shape and size as its predecessor. But its 6.1-inch screen gets a couple of significant updates: the top refresh rate is now 120Hz, up from 90Hz, and the panel gets up to 40 percent brighter, up to 2,000 nits in peak brightness mode. They're important upgrades, especially since the 8A's main competition in the US, the OnePlus 12R, comes with an excellent display.

It comes with the same generative AI photo and video features that made a splash on the Pixel 8 and 8 Pro, including Best Take, Magic Editor, and Audio Magic Eraser. Circle to Search is also available, and the 8A will be able to run Google's mobile-optimized on-device AI model, Gemini Nano. As on the Pixel 8, it'll be a developer option delivered via feature drop. Other specs are either unchanged or slightly boosted compared to the last generation. There's still 8GB of RAM and 128GB of storage, though there's now a 256GB option. Camera hardware is unchanged from the 7A, including a stabilized 64-megapixel main sensor. There's an IP67 rating, consistent with the 7A, and battery capacity is a little higher at 4,492mAh compared to 4,385mAh. Wireless charging is available via Qi 1.3 at up to 7.5W -- no Qi2 here.

AI

Microsoft Creates Top Secret Generative AI Service Divorced From the Internet for US Spies (bloomberg.com) 42

Microsoft has deployed a generative AI model entirely divorced from the internet, saying US intelligence agencies can now safely harness the powerful technology to analyze top-secret information. From a report: It's the first time a major large language model has operated fully separated from the internet, a senior executive at the US company said. Most AI models including OpenAI's ChatGPT rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the US intelligence community.

Spy agencies around the world want generative AI to help them understand and analyze the growing amounts of classified information generated daily, but must balance turning to large language models with the risk that data could leak into the open -- or get deliberately hacked. Microsoft has deployed the GPT4-based model and key elements that support it onto a cloud with an "air-gapped" environment that is isolated from the internet, said William Chappell, Microsoft's chief technology officer for strategic missions and technology.

AI

The Rabbit R1 Could've Just Been a Mobile App (androidauthority.com) 36

The Rabbit R1 is one of the first standalone AI companion devices to hit the market, offering the ability to translate languages, identify objects in your environment, and order DoorDash, among other things. It's been in the news last week for its all around poor reviews that cite poor battery life, painfully slow responses, and missing features (sound familiar?). Now, it's been confirmed that the Rabbit R1 is powered by an Android app that can run on existing Android phones. Android Authority reports: What ended up souring a lot of people's opinions on the product was the revelation -- in an Android Authority original report -- that the R1 is basically an Android app in a box. Many consumers who believed that the product would be better suited as a mobile app felt validated after our report, but there was one stickler in it that we needed to address: how we got the R1 launcher up and running on an Android phone. See, in our preliminary report, we mentioned that the Rabbit R1's launcher app is intended to be preinstalled in the firmware and be granted several privileged, system-level permissions. While that statement is still true, we should've clarified that the R1 launcher doesn't actually need those permissions. In fact, none of the system-level permissions that the R1 launcher requests are at all necessary for the app to perform its core functionality.

To prove this, we got the Rabbit R1 launcher up and running again on a stock, unrooted Android device (a Xiaomi 13T Pro), thanks to help from a team of reverse engineers including ChromMob, EmilyLShepherd, marceld505, thel3l, and uwukko. We were able to go through the entire setup process as if our device was an actual Rabbit R1. Afterwards, we were able to talk to ChatGPT, use the Vision function to identify objects, play music from Spotify, and even record voice notes. As demonstrated in our hands-on video at the top of this article, all of the existing core functionality that the Rabbit R1 offers would work as an Android or even iOS app. The only functions that wouldn't work are unrelated to the product's core functionality and are things your phone can already do, such as powering off or rebooting the device, toggling Bluetooth, connecting to a cellular or Wi-Fi network, or setting a screen lock.

During our research, Android Authority was also able to obtain a copy of the Rabbit R1's firmware. Our analysis reveals that Rabbit did not make significant modifications to the BSP (Board Support Package) provided by MediaTek. The R1, in fact, still ships with all the standard apps included in AOSP, as well as the many apps provided by MediaTek. This is despite the fact that none of these apps are needed nor ever shown to the user, obviously. Rabbit only made a few changes to the AOSP build that MediaTek provided them, such as adding the aforementioned R1 launcher app, adding a fork of the open-source "AnySoftKeyboard" app with a custom theme, adding an OTA updater app, and adding a custom boot animation. [...] Yes, it's true that all the R1 launcher does is act as a local client to the cloud services offered by Rabbit, which is what truly handles the core functionality. It's also true that there's nothing wrong or unusual with companies using AOSP for their own hardware. But the fact of the matter is that Rabbit does little to justify its use of custom hardware except by making the R1 have an eye-catching design.

Cloud

Alternative Clouds Are Booming As Companies Seek Cheaper Access To GPUs (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: CoreWeave, the GPU infrastructure provider that began life as a cryptocurrency mining operation, this week raised $1.1 billion in new funding from investors, including Coatue, Fidelity and Altimeter Capital. The round brings its valuation to $19 billion post-money and its total raised to $5 billion in debt and equity -- a remarkable figure for a company that's less than 10 years old. It's not just CoreWeave. Lambda Labs, which also offers an array of cloud-hosted GPU instances, in early April secured a "special purpose financing vehicle" of up to $500 million, months after closing a $320 million Series C round. The nonprofit Voltage Park, backed by crypto billionaire Jed McCaleb, last October announced that it's investing $500 million in GPU-backed data centers. And Together AI, a cloud GPU host that also conducts generative AI research, in March landed $106 million in a Salesforce-led round.

So why all the enthusiasm for -- and cash pouring into -- the alternative cloud space? The answer, as you might expect, is generative AI. As the generative AI boom times continue, so does the demand for the hardware to run and train generative AI models at scale. GPUs, architecturally, are the logical choice for training, fine-tuning and running models because they contain thousands of cores that can work in parallel to perform the linear algebra equations that make up generative models. But installing GPUs is expensive. So most devs and organizations turn to the cloud instead. Incumbents in the cloud computing space -- Amazon Web Services (AWS), Google Cloud and Microsoft Azure -- offer no shortage of GPU and specialty hardware instances optimized for generative AI workloads. But for at least some models and projects, alternative clouds can end up being cheaper -- and delivering better availability.

On CoreWeave, renting an Nvidia A100 40GB -- one popular choice for model training and inferencing -- costs $2.39 per hour, which works out to $1,200 per month. On Azure, the same GPU costs $3.40 per hour, or $2,482 per month; on Google Cloud, it's $3.67 per hour, or $2,682 per month. Given generative AI workloads are usually performed on clusters of GPUs, the cost deltas quickly grow. "Companies like CoreWeave participate in a market we call specialty 'GPU as a service' cloud providers," Sid Nag, VP of cloud services and technologies at Gartner, told TechCrunch. "Given the high demand for GPUs, they offers an alternate to the hyperscalers, where they've taken Nvidia GPUs and provided another route to market and access to those GPUs." Nag points out that even some Big Tech firms have begun to lean on alternative cloud providers as they run up against compute capacity challenges.
Microsoft signed a multi-billion-dollar deal with CoreWeave last June to help provide enough power to train OpenAI's generative AI models.

"Nvidia, the furnisher of the bulk of CoreWeave's chips, sees this as a desirable trend, perhaps for leverage reasons; it's said to have given some alternative cloud providers preferential access to its GPUs," reports TechCrunch.
AI

OpenAI and Stack Overflow Partner To Bring More Technical Knowledge Into ChatGPT (theverge.com) 18

OpenAI and the developer platform Stack Overflow have announced a partnership that could potentially improve the performance of AI models and bring more technical information into ChatGPT. From a report: OpenAI will have access to Stack Overflow's API and will receive feedback from the developer community to improve the performance of AI models. OpenAI, in turn, will give Stack Overflow attribution -- aka link to its contents -- in ChatGPT. Users of the chatbot will see more information from Stack Overflow's knowledge archive if they ask ChatGPT coding or technical questions. The companies write in the press release that this will "foster deeper engagement with content." Stack Overflow will use OpenAI's large language models to expand its Overflow AI, the generative AI application it announced last year. Further reading: Stack Overflow Cuts 28% Workforce as the AI Coding Boom Continues (October 2023).
AI

40,000 AI-Narrated Audiobooks Flood Audible (techspot.com) 93

A new breed of audiobook is taking over digital bookshelves -- ones narrated not by professional voice actors, but by artificial intelligence voices. It's an AI audiobook revolution that has been turbo-charged by Amazon. From a report: Since announcing a beta tool last year allowing self-published authors to generate AI "virtual voice" narrations for their ebooks, over 40,000 AI-narrated titles have flooded onto Audible, Amazon's audiobook platform. The eye-popping stat, revealed in a recent Bloomberg report, has many authors celebrating but is also raising red flags for human narrators.

For indie writers wanting to crack the lucrative audiobook market without paying hefty professional voiceover fees, Amazon's free virtual narration tool is a game-changer. One blogger cited in the report claimed converting an ebook to audio using the AI narration took just 52 minutes, bypassing the expensive studio recording route. Others have mixed reactions. Last month, an author named George Steffanos launched an audiobook version of his existing book, posting that while he prefers human-generated works to those generated by AI, "the modest sales of my work were never going to support paying anyone for all those hours of narration."

Microsoft

Microsoft Readies New AI Model To Compete With Google, OpenAI (theinformation.com) 26

For the first time since it invested more than $10 billion into OpenAI in exchange for the rights to reuse the startup's AI models, Microsoft is training a new, in-house AI model large enough to compete with state-of-the-art models from Google, Anthropic and OpenAI itself. The Information: The new model, internally referred to as MAI-1, is being overseen by Mustafa Suleyman, the ex-Google AI leader who most recently served as CEO of the AI startup Inflection before Microsoft hired the majority of the startup's staff and paid $650 million for the rights to its intellectual property in March. But this is a Microsoft model, not one carried over from Inflection, although it may build on training data and other tech from the startup. It is separate from the Pi models that Inflection previously released, according to two Microsoft employees with knowledge of the effort.

MAI-1 will be far larger than any of the smaller, open source models that Microsoft has previously trained, meaning it will require more computing power and training data and will therefore be more expensive, according to the people. MAI-1 will have roughly 500 billion parameters, or settings that can be adjusted to determine what models learn during training. By comparison, OpenAI's GPT-4 has more than 1 trillion parameters, while smaller open source models released by firms like Meta Platforms and Mistral have 70 billion parameters. That means Microsoft is now pursuing a dual trajectory of sorts in AI, aiming to develop both "small language models" that are inexpensive to build into apps and that could run on mobile devices, alongside larger, state-of-the-art AI models.

Twitter

Elon Musk's X Launches Grok AI-Powered 'Stories' Feature (techcrunch.com) 71

An anonymous reader shared this report from Mint: Elon Musk-owned social media platform X (formerly Twitter) has launched a new Grok AI-powered feature called 'Stories', which allows users to read summaries of a trending post on the social media platform. The feature is currently only available to X Premium subscribers on the iOS and web versions, and hasn't found its way to the Android application just yet... instead of reading the whole post, they'll have Grok AI summarise it to get the gist of those big news stories. However, since Grok, like other AI chatbots on the market, is prone to hallucination (making things up), X provides a warning at the end of these stories that says: "Grok can make mistakes, verify its outputs."
"Access to xAI's chatbot Grok is meant to be a selling point to push users to buy premium subscriptions," reports TechCrunch: A snarky and "rebellious" AI, Grok's differentiator from other AI chatbots like ChatGPT is its exclusive and real-time access to X data. A post published to X on Friday by tech journalist Alex Kantrowitz lays out Elon Musk's further plan for AI-powered news on X, based on an email conversation with the X owner. Kantrowitz says that conversations on X will make up the core of Grok's summaries. Grok won't look at the article text, in other words, even if that's what people are discussing on the platform.
The article notes that some AI companies have been striking expensive licensing deals with news publishers. But in X's case, "it's able to get at the news by way of the conversation around it — and without having to partner to access the news content itself."
Microsoft

Microsoft's 'Responsible AI' Chief Worries About the Open Web (msn.com) 41

From the Washington Post's "Technology 202" newsletter: As tech giants move toward a world in which chatbots supplement, and perhaps supplant, search engines, the Microsoft executive assigned to make sure AI is used responsibly said the industry has to be careful not to break the business model of the wider web. Search engines citing and linking to the websites they draw from is "part of the core bargain of search," [Microsoft's chief Responsible AI officer] said in an interview Monday....

"It's really important to maintain a healthy information ecosystem and recognize it is an ecosystem. And so part of what I will continue to guide our Microsoft teams toward is making sure that we are citing back to the core webpages from which the content is sourced. Making sure that we've got that feedback loop happening. Because that is part of the core bargain of search, right? And I think it's critical to make sure that we are both providing users with new engaging ways to interact, to explore new ideas — but also making sure that we are building and supporting the great work of our creators."

Asked about lawsuits alleging copyright use without permission, they said "We believe that there are strong grounds under existing laws to train models."

But they also added those lawsuits are "asking legitimate questions" about where the boundaries are, "for which the courts will provide answers in due course."
IT

Some San Francisco Tech Workers are Renting Cheap 'Bed Pods' (sfgate.com) 184

An anonymous reader shared this report from SFGate: Late last year, tales of tech workers paying $700 a month for tiny "bed pods" in downtown San Francisco went viral. The story provided a perfect distillation of SF's wild (and wildly expensive) housing market — and inspired schadenfreude when the city deemed the situation illegal. But the provocative living situation wasn't an anomaly, according to a city official.

"We've definitely seen an uptick of these 'pod'-type complaints," Kelly Wong, a planner with San Francisco's code enforcement and zoning and compliance team, told SFGATE... Wong stressed that it's not that San Francisco is inherently against bed pod-type arrangements, but that the city is responsible for making sure these spaces are safe and legally zoned.


So Brownstone Shared Housing is still renting one bed pod location — but not accepting new tenants — after citations for failing to get proper permits and having a lock on the front door that required a key to exit.

And SFGate also spoke to Alex Akel, general manager of Olive Rooms, which opened up a co-living and co-working space in SoMa earlier this year (and also faced "a flurry of complaints.") "Unfortunately, we had complaints from neighbors because of foot traffic and noise, and since then we cut the number of people to fit the ordinance by the city," Akel wrote. Olive Rooms describes its space as targeted at "tech founders from Central Asia, giving them opportunities to get involved in the current AI boom." Akel added that its residents are "bringing new energy to SF," but that the program "will not accept new residents before we clarify the status with the city."

In April, the city also received a complaint about a group called Let's Be Buds, which rents out 14 pods in a loft on Divisadero Street that start at $575 per month for an upper bunk.

While this recent burst of complaints is new, bed pods in San Francisco have been catching flak for years... a company called PodShare, which rents — you guessed it — bed pods, squared itself away with the city and has operated in SF since 2019.

Brownstone's CEO told SFGate "A lot of people want to be here for AI, or for school, or different opportunities." He argues that "it's literally impossible without a product like ours," and that their residents had said the option "positively changed the trajectory of their lives."
AI

AI-Operated F-16 Jet Carries Air Force Official Into 550-MPH Aerial Combat Test (apnews.com) 113

The Associated Press reports that an F-16 performing aerial combat tests at 550 miles per hour was "controlled by artificial intelligence, not a human pilot."

And riding in the front seat was the U.S. Secretary of the Air Force... AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.

It was fitting that the dogfight took place at [California's] Edwards Air Force Base, a vast desert facility where Chuck Yeager broke the speed of sound and the military has incubated its most secret aerospace advances. Inside classified simulators and buildings with layers of shielding against surveillance, a new test-pilot generation is training AI agents to fly in war. [U.S. Secretary of the Air Force] Frank Kendall traveled here to see AI fly in real time and make a public statement of confidence in its future role in air combat.

"It's a security risk not to have it. At this point, we have to have it," Kendall said in an interview with The Associated Press after he landed... At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he'd seen enough during his flight that he'd trust this still-learning AI with the ability to decide whether or not to launch weapons in war... [T]he software first learns on millions of data points in a simulator, then tests its conclusions during actual flights. That real-world performance data is then put back into the simulator where the AI then processes it to learn more.

"Kendall said there will always be human oversight in the system when weapons are used," the article notes.

But he also said looked for to the cost-savings of smaller and cheaper AI-controlled unmanned jets.

Slashdot reader fjo3 shared a link to this video. (More photos at Sky.com.)
AI

Microsoft Details How It's Developing AI Responsibly (theverge.com) 40

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."
Government

The US Just Mandated Automated Emergency Braking Systems By 2029 (caranddriver.com) 286

Come 2029, all cars sold in the U.S. "must be able to stop and avoid contact with a vehicle in front of them at speeds up to 62 mph," reports Car and Driver.

"Additionally, the system must be able to detect pedestrians in both daylight and darkness. As a final parameter, the federal standard will require the system to apply the brakes automatically up to 90 mph when a collision is imminent, and up to 45 mph when a pedestrian is detected." Notably, the federal standardization of automated emergency braking systems includes pedestrian-identifying emergency braking, too. Once implemented, the NHTSA projects that this standard will save at least 360 lives a year and prevent at least 24,000 injuries annually. Specifically, the federal agency claims that rear-end collisions and pedestrian injuries will both go down significantly...

"Automatic emergency braking is proven to save lives and reduce serious injuries from frontal crashes, and this technology is now mature enough to require it in all new cars and light trucks. In fact, this technology is now so advanced that we're requiring these systems to be even more effective at higher speeds and to detect pedestrians," said NHTSA deputy administrator Sophie Shulman.

Thanks to long-time Slashdot reader sinij for sharing the article.
AI

AI-Powered 'HorseGPT' Fails to Predict This Year's Kentucky Derby Winner (decrypt.co) 40

In 2016, an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby — naming all four top finishers, in order. (But the next year their predictions weren't even close, with TechRepublic suggesting 2016's race had an unusual cluster of just a few top racehorses.)

So this year Decrypt.co tried crafting their own system "that can be called up when the next Kentucky Derby draws near. There are a variety of ways to enlist artificial intelligence in horse racing. You could process reams of data based on your own methodology, trust a third-party pre-trained model, or even build a bespoke solution from the ground up. We decided to build a GPT we named HorseGPT to crunch the numbers and make the picks for us...

We carefully curated prompts to instill HorseGPT with expertise in data science specific to horse racing: how weather affects times, the role of jockeys and riding styles, the importance of post positions, and so on. We then fed it a mix of research papers and blogs covering the theoretical aspects of wagering, and layered on practical knowledge: how to read racing forms, what the statistics mean, which factors are most predictive, expert betting strategies, and more. Finally, we gave HorseGPT a wealth of historical Kentucky Derby data, arming it with the raw information needed to put its freshly imparted skills to use.

We unleashed HorseGPT on official racing forms for this year's Derby. We asked HorseGPT to carefully analyze each race's form, identify the top contenders, and recommend wager types and strategies based on deep background knowledge derived from race statistics.

So how did it do? HorseGPT picked two horses to win — both of which failed to do so. (Sierra Leone did finish second — in a rare three-way photo finish. But Fierceness finished... 15th.) It also recommended the same two horses if you were trying to pick the top two finishers in the correct order — a losing bet, since, again, Fierceness finished 15th.

But even worse, HorseGPT recommended betting on Just a Touch to finish in either first or second place. When the race was over, that horse finished dead last. (And when asked to pick the top three finishers in correct order, HorseGPT stuck with its choices for the top two — which finished #2 and #15 — and, again, Just a Touch, who came in last.)

When Google Gemini was asked to pick the winner by The Athletic, it first chose Catching Freedom (who finished 4th). But it then gave an entirely different answer when asked to predict the winner "with an Italian accent."

"The winner of the Kentucky Derby will be... Just a Touch! Si, that's-a right, the underdog! There will be much-a celebrating in the piazzas, thatta-a I guarantee!"
Again, Just a Touch came in last.

Decrypt noticed the same thing. "Interestingly enough, our HorseGPT AI agent and the other out-of-the-box chatbots seemed to agree with each other," the site notes, adding that HorseGPT also seemed to agree "with many expert analysts cited by the official Kentucky Derby website."

But there was one glimmer of insight into the 20-horse race. When asked to choose the top four finishers in order, HorseGPT repeated those same losing picks — which finished #2, #15, and #20. But then it added two more underdogs for fourth place finishers, "based on their potential to outperform expectations under muddy conditions." One of those two horses — Domestic Product — finished in 13th place.

But the other of the two horses was Mystik Dan — who came in first.

Mystik Dan appeared in only one of the six "Top 10 Finishers" lists (created by humans) at the official Kentucky Derby site... in the #10 position.
The Military

US Official Urges China, Russia To Declare AI Will Not Control Nuclear Weapons 85

Senior Department arms control official Paul Dean on Thursday urged China and Russia to declare that artificial intelligence would never make decisions on deploying nuclear weapons. Washington had made a "clear and strong commitment" that humans had total control over nuclear weapons, said Dean. Britain and France have made similar commitments. Reuters reports: "We would welcome a similar statement by China and the Russian Federation," said Dean, principal deputy assistant secretary in the Bureau of Arms Control, Deterrence and Stability. "We think it is an extremely important norm of responsible behaviour and we think it is something that would be very welcome in a P5 context," he said, referring to the five permanent members of the United Nations Security Council.
The Internet

Humans Now Share the Web Equally With Bots, Report Warns (independent.co.uk) 32

An anonymous reader quotes a report from The Independent, published last month: Humans now share the web equally with bots, according to a major new report -- as some fear that the internet is dying. In recent months, the so-called "dead internet theory" has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts. Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its "Bad Bot Report" indicates. That is up 2 percent in comparison with last year, and is the highest number ever seen since the report began in 2013. In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated, Imperva warned. More and more of them come from residential internet connections, which makes them look more legitimate. "Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications," said Nanhi Singh, general manager for application security at Imperva. "As more AI-enabled tools are introduced, bots will become omnipresent."

AI

AI Engineers Report Burnout, Rushed Rollouts As 'Rat Race' To Stay Competitive Hits Tech Industry (cnbc.com) 36

An anonymous reader quotes a report from CNBC: Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday. There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job. But it was all for nothing. The project was ultimately "deprioritized," the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.

The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Since code can break if the required tests are postponed, the Amazon engineer recalled periods when team members would have to call one another in the middle of the night to fix aspects of the AI feature's software. AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

Slashdot Top Deals