Oracle

'Oracle's Missteps in Cloud Computing Are Paying Dividends in AI' (msn.com) 26

Oracle missed the tech industry's move to cloud computing last decade and ended up an also-ran. Now the AI boom has given it another shot. WSJ: The 47-year-old company that made its name on relational database software has emerged as an attractive cloud-computing provider for AI developers such as OpenAI, sending its long-stagnant stock to new heights. Oracle shares are up 34% since January, well outpacing the Nasdaq's 14% rise and those of bigger competitors Microsoft, Amazon.com and Google.

It is a surprising revitalization for a company many in the tech industry had dismissed as a dinosaur of a bygone, precloud era. Oracle appears to be successfully making a case to investors that it has become a strong fourth-place player in a cloud market surging thanks to AI. Its lateness to the game may have played to its advantage, as a number of its 162 data centers were built in recent years and are designed for developing AI models, a process known as training.

In addition, Oracle isn't developing its own large AI models that compete with potential clients. The company is considered such a neutral and unthreatening player that it now has partnerships with Microsoft, Google and Amazon, all of which let Oracle's databases run in their clouds. Microsoft is also running its Bing AI chatbot on Oracle's servers.

AI

Roblox Announces Open Source AI Tool That Builds 3D Environments From Text 16

Scott J Mulligan writes for MIT Technology Review: Roblox plans to roll out a generative AI tool that will let creators make whole 3D scenes just using text prompts, it announced today. Once it's up and running, developers on the hugely popular online game platform will be able to simply write "Generate a race track in the desert," for example, and the AI will spin one up. Users will also be able to modify scenes or expand their scope -- say, to change a daytime scene to night or switch the desert for a forest. Although developers can already create scenes like this manually in the platform's creator studio, Roblox claims its new generative AI model will make the changes happen in a fraction of the time. It also claims that it will give developers with minimal 3D art skills the ability to craft more compelling environments. The firm didn't give a specific date for when the tool will be live.

[...] The new tool is part of Roblox's push to integrate AI into all its processes. The company currently has 250 AI models live. One AI analyzes voice chat in real time and screens for bad language, instantly issuing reprimands and possible bans for repeated infractions. Roblox plans to open-source its 3D foundation model so that it can be modified and used as a basis for innovation. "We're doing it in open source, which means anybody, including our competitors, can use this model," says [Anupam Singh, vice president of AI and growth engineering at Roblox]. Getting it into as many hands as possible also opens creative possibilities for developers who are not as skilled at creating Roblox environments. "There are a lot of developers that are working alone, and for them, this is going to be a game changer, because now they don't have to try to find someone else to work with," says [Marcus Holmstrom, CEO of The Gang, a company that builds some of the top games on Roblox].
Government

US Proposes Requiring Reporting For Advanced AI, Cloud Providers (reuters.com) 11

An anonymous reader quotes a report from Reuters: The U.S. Commerce Department said Monday it is proposing detailed reporting requirements for advanced artificial intelligence developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The proposal from the department's Bureau of Industry and Security would set mandatory reporting to the federal government about development activities of "frontier" AI models and computing clusters. It would also require reporting on cybersecurity measures as well as outcomes from so-called red-teaming efforts like testing for dangerous capabilities including the ability to assist in cyberattacks or lowering barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons. External red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the "red team." [...] Commerce said the information collected under the proposal "will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors." Further reading: Biden Signs Executive Order To Oversee and Invest in AI
Operating Systems

Apple Will Release iOS 18, macOS 15, iPadOS 18, Other Updates on September 16 9

Apple plans to release the next versions of iOS, iPadOS, macOS, and watchOS to the general public on September 16, the company announced via its website following its iPhone-centric product event earlier today. From a report: We should also see updates for tvOS and the HomePod operating system on the same date. The new releases bring a number of new features and refinements to Apple's platforms: better texting with Android devices thanks to support for the RCS standard, iPhone Mirroring that allows you to interact with your iPhone via your Mac, more UI customization options for iPhones and iPads, and other improvements besides. What won't be included in these initial releases is any hint of Apple Intelligence, the batch of generative AI and machine learning features that Apple announced at its Worldwide Developers Conference in June. Apple is testing some of the Apple Intelligence features in betas of iOS 18.1, iPadOS 18.1, and macOS 15.1, updates that will be released later this fall.
AI

Audible To Start Generating AI Voice Replicas of Select Audiobook Narrators (msn.com) 38

Amazon's Audible will begin inviting a select group of US-based audiobook narrators to train AI on their voices, the clones of which can then be used to make audiobook recordings. From a report: The effort, which kicks off next week, is designed to add more audiobooks to the service, quickly and cheaply -- and to welcome traditional narrators into the evolving world of audiobook automation which, to date, many have regarded warily. Last year, Audible began offering US-based, self-published authors who make their books available on the Kindle Store the option of having their works narrated by a generic "virtual voice." The initiative has been popular. As of May, more than 40,000 books on Audible were marked as having made use of the technology. Under the new arrangement, rather than limiting the audio work entirely to company-owned synthetic voices, Audible will be encouraging professional narrators to get in on the action.
AI

'AI May Not Steal Many Jobs After All' (apnews.com) 62

Alorica — which runs customer-service centers around the world — has introduced an AI translation tool that lets its representatives talk with customers in 200 different languages. But according to the Associated Press, "Alorica isn't cutting jobs. It's still hiring aggressively." The experience at Alorica — and at other companies, including furniture retailer IKEA — suggests that AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy. Nick Bunker, an economist at the Indeed Hiring Lab, said he thinks AI "will affect many, many jobs — maybe every job indirectly to some extent. But I don't think it's going to lead to, say, mass unemployment.... "

[T]he widespread assumption that AI chatbots will inevitably replace service workers, the way physical robots took many factory and warehouse jobs, isn't becoming reality in any widespread way — not yet, anyway. And maybe it never will. The White House Council of Economic Advisers said last month that it found "little evidence that AI will negatively impact overall employment.'' The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways... The outplacement firm Challenger, Gray & Christmas, which tracks job cuts, said it has yet to see much evidence of layoffs that can be attributed to labor-saving AI. "I don't think we've started seeing companies saying they've saved lots of money or cut jobs they no longer need because of this,'' said Andy Challenger, who leads the firm's sales team. "That may come in the future. But it hasn't played out yet.''

At the same time, the fear that AI poses a serious threat to some categories of jobs isn't unfounded. Consider Suumit Shah, an Indian entrepreneur who caused an uproar last year by boasting that he had replaced 90% of his customer support staff with a chatbot named Lina. The move at Shah's company, Dukaan, which helps customers set up e-commerce sites, shrank the response time to an inquiry from 1 minute, 44 seconds to "instant." It also cut the typical time needed to resolve problems from more than two hours to just over three minutes. "It's all about AI's ability to handle complex queries with precision,'' Shah said by email. The cost of providing customer support, he said, fell by 85%....

Similarly, researchers at Harvard Business School, the German Institute for Economic Research and London's Imperial College Business School found in a study last year that job postings for writers, coders and artists tumbled within eight months of the arrival of ChatGPT.

On the other hand, after Ikea introduced a customer-service chatbot in 2021 to handle simple inquiries, it didn't result in massive layoffs, according to the article. Instead, Ikea ended up retraining 8,500 customer-service workers to handle other tasks, like advising customers on interior design and fielding complicated customer calls.
AI

Videogame Performers' Union Hails New 80-Game Agreement as Preserving Human Creativity (apnews.com) 18

After striking for over a month, videogame performers reached agreements with 80 games this week, reports the Associated Press. "SAG-AFTRA announced the agreements with the 80 individual video games on Thursday. Performers impacted by the work stoppage can now work on those projects.

"The strike against other major video game publishers, including Disney and Warner Bros.' game companies and Electronic Arts Productions Inc., will continue." The interim agreement secures wage improvements, protections around "exploitative uses" of artificial intelligence and safety precautions that account for the strain of physical performances, as well as vocal stress. The tiered budget agreement aims to make working with union talent more feasible for independent game developers or smaller-budget projects while also providing performers the protections under the interim agreement.
Duncan Crabtree-Ireland, SAG-AFTRA's national executive director and chief negotiator, said in a statement that companies signing the agreements are "helping to preserve the human art, ingenuity and creativity that fuels interactive storytelling."

"These agreements signal that the video game companies in the collective bargaining group do not represent the will of the larger video game industry," Crabtree-Ireland continued. "The many companies that are happy to agree to our AI terms prove that these terms are not only reasonable, but feasible and sustainable for businesses."

Deadline calls the agreement "a blow for major developers." As Deadline previously reported, AI is the one and only issue at the crux of this strike, as the union has managed to find common ground with the developers on every other provision. More specifically, the union has said that the sticking point in these negotiations is encompassing all performers in any AI provisions, without loopholes related to whether an actor's likeness is recognizable. In video games, similar to other forms of animated content, motion capture performers and voice actors are often performing as creatures or other non-human characters that make their voice and likeness unrecognizable.
Government

Is the Tech World Now 'Central' to Foreign Policy? (wired.com) 41

Wired interviews America's foreign policy chief, Secretary of State Antony Blinken, about U.S. digital policies, starting with a new "cybersecurity bureau" created in 2022 (which Wired previously reported includes "a crash course in cybersecurity, telecommunications, privacy, surveillance, and other digital issues.") Look, what I've seen since coming back to the State Department three and a half years ago is that everything happening in the technological world and in cyberspace is increasingly central to our foreign policy. There's almost a perfect storm that's come together over the last few years, several major developments that have really brought this to the forefront of what we're doing and what we need to do. First, we have a new generation of foundational technologies that are literally changing the world all at the same time — whether it's AI, quantum, microelectronics, biotech, telecommunications. They're having a profound impact, and increasingly they're converging and feeding off of each other.

Second, we're seeing that the line between the digital and physical worlds is evaporating, erasing. We have cars, ports, hospitals that are, in effect, huge data centers. They're big vulnerabilities. At the same time, we have increasingly rare materials that are critical to technology and fragile supply chains. In each of these areas, the State Department is taking action. We have to look at everything in terms of "stacks" — the hardware, the software, the talent, and the norms, the rules, the standards by which this technology is used.

Besides setting up an entire new Bureau of Cyberspace and Digital Policy — and the bureaus are really the building blocks in our department — we've now trained more than 200 cybersecurity and digital officers, people who are genuinely expert. Every one of our embassies around the world will have at least one person who is truly fluent in tech and digital policy. My goal is to make sure that across the entire department we have basic literacy — ideally fluency — and even, eventually, mastery. All of this to make sure that, as I said, this department is fit for purpose across the entire information and digital space.

Wired notes it was Blinken's Department that discovered China's 2023 breach of Microsoft systems. And on the emerging issue of AI, Blinken cites "incredible work done by the White House to develop basic principles with the foundational companies." The voluntary commitments that they made, the State Department has worked to internationalize those commitments. We have a G7 code of conduct — the leading democratic economies in the world — all agreeing to basic principles with a focus on safety. We managed to get the very first resolution ever on artificial intelligence through the United Nations General Assembly — 192 countries also signing up to basic principles on safety and a focus on using AI to advance sustainable development goals on things like health, education, climate. We also have more than 50 countries that have signed on to basic principles on the responsible military use of AI. The goal here is not to have a world that is bifurcated in any way. It's to try to bring everyone together.
Social Networks

GPT-Fabricated Scientific Papers Found on Google Scholar by Misinformation Researchers (harvard.edu) 81

Harvard's school of public policy publishes the Misinformation Review, a journal of peer-reviewed, scholarly articles promising "reliable, unbiased research on the prevalence, diffusion, and impact of misinformation worldwide."

This week it reported that "Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI." They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing.

The resulting enhanced potential for malicious manipulation of society's evidence base, particularly in politically divisive domains, is a growing concern... [T]he abundance of fabricated "studies" seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly Google Scholar. However small, this possibility and awareness of it risks undermining the basis for trust in scientific knowledge and poses serious societal risks.

"Our analysis shows that questionable and potentially manipulative GPT-fabricated papers permeate the research infrastructure and are likely to become a widespread phenomenon..." the article points out.

"Google Scholar's central position in the publicly accessible scholarly communication infrastructure, as well as its lack of standards, transparency, and accountability in terms of inclusion criteria, has potentially serious implications for public trust in science. This is likely to exacerbate the already-known potential to exploit Google Scholar for evidence hacking..."
Education

MIT CS Professor Tests AI's Impact on Educating Programmers (acm.org) 84

Long-time Slashdot reader theodp writes: "The Impact of AI on Computer Science Education" recounts an experiment Eric Klopfer conducted in his undergrad CS class at MIT. He divided the class into three groups and gave them a programming task to solve in the Fortran language, which none of them knew. Reminiscent of how The Three Little Pigs used straw, sticks, and bricks to build their houses with very different results, Klopfer allowed one group to use ChatGPT to solve the problem, while the second group was told to use Meta's Code Llama LLM, and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group "remembered nothing, and they all failed," recalled Klopfer. Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.

"This is an important educational lesson," said Klopfer. "Working hard and struggling is actually an important way of learning. When you're given an answer, you're not struggling and you're not learning. And when you get more of a complex problem, it's tedious to go back to the beginning of a large language model and troubleshoot it and integrate it." In contrast, breaking the problem into components allows you to use an LLM to work on small aspects, as opposed to trying to use the model for an entire project, he says. "These skills, of how to break down the problem, are critical to learn."

AI

1,000 Autonomous AI Agents Collaborating? Altera Simulates It In Minecraft (readwrite.com) 21

Altera AI's home page says their mission is "to create digital human beings that live, care, and grow with us," adding that their company builds machines "with fundamental human qualities, starting with friends that can play video games with you."

And while their agents can function in many different games and apps, Altera used Minecraft to launch "the first-ever simulation of over 1,000 collaborating autonomous AI agents," reports ReadWrite, "working together in a Minecraft world, all of which can operate for hours or days without intervention from humans." The agents have already started to develop their own economy, culture, religion, and government, with the AI working on establishing its own systems. CEO Robert Yang took to X to share the news and introduce Project Sid...

So far, the agents have already formed a merchant hub, have voted in a democracy, spread religions, and collected five times more distinct items than before... "Though starting in games, we're solving the deepest issues facing agents: coherence, multi-agent collaboration, and long-term progression," said the CEO.

According to the video, the most active trader in their simulation was the priest — because he was bribing the other townsfolk to convert to his religion. (Which apparently involved the Flying Spaghetti Monster...) "We run these worlds every day, and they're always different," the video's narrator says, while pointing out that their agents had collected 32% of all the items in Minecraft — five times more than anything ever reported for an individual agent.

"Sid starts in Minecraft, but we are already going beyond," CEO Yang says in the video, calling it "the first-ever agent civilization."
Privacy

Signal is More Than Encrypted Messaging. It Wants to Prove Surveillance Capitalism Is Wrong (wired.com) 70

Slashdot reader echo123 shared a new article from Wired titled "Signal Is More Than Encrypted Messaging. Under Meredith Whittaker, It's Out to Prove Surveillance Capitalism Wrong." ("On its 10th anniversary, Signal's president wants to remind you that the world's most secure communications platform is a nonprofit. It's free. It doesn't track you or serve you ads. It pays its engineers very well. And it's a go-to app for hundreds of millions of people.") Ten years ago, WIRED published a news story about how two little-known, slightly ramshackle encryption apps called RedPhone and TextSecure were merging to form something called Signal. Since that July in 2014, Signal has transformed from a cypherpunk curiosity — created by an anarchist coder, run by a scrappy team working in a single room in San Francisco, spread word-of-mouth by hackers competing for paranoia points — into a full-blown, mainstream, encrypted communications phenomenon... Billions more use Signal's encryption protocols integrated into platforms like WhatsApp...

But Signal is, in many ways, the exact opposite of the Silicon Valley model. It's a nonprofit funded by donations. It has never taken investment, makes its product available for free, has no advertisements, and collects virtually no information on its users — while competing with tech giants and winning... Signal stands as a counterfactual: evidence that venture capitalism and surveillance capitalism — hell, capitalism, period — are not the only paths forward for the future of technology.

Over its past decade, no leader of Signal has embodied that iconoclasm as visibly as Meredith Whittaker. Signal's president since 2022 is one of the world's most prominent tech critics: When she worked at Google, she led walkouts to protest its discriminatory practices and spoke out against its military contracts. She cofounded the AI Now Institute to address ethical implications of artificial intelligence and has become a leading voice for the notion that AI and surveillance are inherently intertwined. Since she took on the presidency at the Signal Foundation, she has come to see her central task as working to find a long-term taproot of funding to keep Signal alive for decades to come — with zero compromises or corporate entanglements — so it can serve as a model for an entirely new kind of tech ecosystem...

Meredith Whittaker: "The Signal model is going to keep growing, and thriving and providing, if we're successful. We're already seeing Proton [a startup that offers end-to-end encrypted email, calendars, note-taking apps, and the like] becoming a nonprofit. It's the paradigm shift that's going to involve a lot of different forces pointing in a similar direction."

Key quotes from the interview:
  • "Given that governments in the U.S. and elsewhere have not always been uncritical of encryption, a future where we have jurisdictional flexibility is something we're looking at."
  • "It's not by accident that WhatsApp and Apple are spending billions of dollars defining themselves as private. Because privacy is incredibly valuable. And who's the gold standard for privacy? It's Signal."
  • "AI is a product of the mass surveillance business model in its current form. It is not a separate technological phenomenon."
  • "...alternative models have not received the capital they need, the support they need. And they've been swimming upstream against a business model that opposes their success. It's not for lack of ideas or possibilities. It's that we actually have to start taking seriously the shifts that are going to be required to do this thing — to build tech that rejects surveillance and centralized control — whose necessity is now obvious to everyone."

AI

The Underground World of Black-Market AI Chatbots is Thriving (fastcompany.com) 46

An anonymous reader shares a report: ChatGPT's 200 million weekly active users have helped propel OpenAI, the company behind the chatbot, to a $100 billion valuation. But outside the mainstream there's still plenty of money to be made -- especially if you're catering to the underworld. Illicit large language models (LLMs) can make up to $28,000 in two months from sales on underground markets, according to a study published last month on arXiv, a preprint server owned by Cornell University. That's just the tip of the iceberg, according to the study, which looked at more than 200 examples of malicious LLMs (or malas) listed on underground marketplaces between April and October 2023.

The LLMs fall into two categories: those that are outright uncensored LLMs, often based on open-source standards, and those that jailbreak commercial LLMs out of their guardrails using prompts. "We believe now is a good stage to start to study these because we don't want to wait until the big harm has already been done," says Xiaofeng Wang, a professor at Indiana University Bloomington, and one of the coauthors of the paper. "We want to head off the curve and before attackers can incur huge harm to us." While hackers can at times bypass mainstream LLMs' built-in limitations meant to prevent illegal or questionable activity, such instances are few and far between. Instead, to meet demand, illicit LLMs have cropped up. And unsurprisingly, those behind them are keen to make money off the back of that interest.

AI

OpenAI Japan Exec Teases 'GPT-Next' (tweaktown.com) 58

OpenAI plans to launch a new AI model, GPT-Next, by year-end, promising a 100-fold increase in power over GPT-4 without significantly higher computing demands, according to a leaked presentation by an OpenAI Japan executive. The model, codenamed "Strawberry," incorporates "System 2 thinking," allowing for deliberate reasoning rather than mere token prediction, according to previous reports. GPT-Next will also generate high-quality synthetic training data, addressing a key challenge in AI development. Tadao Nagasaki of OpenAI Japan unveiled plans for the model, citing architectural improvements and learning efficiency as key factors in its enhanced performance.
EU

US, UK, EU Sign 'Legally Binding' AI Treaty 51

The United States, United Kingdom and European Union signed the first "legally binding" international AI treaty on Thursday, the Council of Europe human rights organization said. Called the AI Convention, the treaty promotes responsible innovation and addresses the risks AI may pose. Reuters reports: The AI Convention mainly focuses on the protection of human rights of people affected by AI systems and is separate from the EU AI Act, which entered into force last month. The EU's AI Act entails comprehensive regulations on the development, deployment, and use of AI systems within the EU internal market. The Council of Europe, founded in 1949, is an international organization distinct from the EU with a mandate to safeguard human rights; 46 countries are members, including all the 27 EU member states. An ad hoc committee in 2019 started examining the feasibility of an AI framework convention and a Committee on Artificial Intelligence was formed in 2022 which drafted and negotiated the text. The signatories can choose to adopt or maintain legislative, administrative or other measures to give effect to the provisions.

Francesca Fanucci, a legal expert at ECNL (European Center for Not-for-Profit Law Stichting) who contributed to the treaty's drafting process alongside other civil society groups, told Reuters the agreement had been "watered down" into a broad set of principles. "The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability," she said. Fanucci highlighted exemptions on AI systems used for national security purposes, and limited scrutiny of private companies versus the public sector, as flaws. "This double standard is disappointing," she added.
AI

New AI Model 'Learns' How To Simulate Super Mario Bros. From Video Footage (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: Last month, Google's GameNGen AI model showed that generalized image diffusion techniques can be used to generate a passable, playable version of Doom. Now, researchers are using some similar techniques with a model called MarioVGG to see if an AI model can generate plausible video of Super Mario Bros. in response to user inputs. The results of the MarioVGG model -- available as a pre-print paper (PDF) published by the crypto-adjacent AI company Virtuals Protocol -- still display a lot of apparent glitches, and it's too slow for anything approaching real-time gameplay at the moment. But the results show how even a limited model can infer some impressive physics and gameplay dynamics just from studying a bit of video and input data. The researchers hope this represents a first step toward "producing and demonstrating a reliable and controllable video game generator," or possibly even "replacing game development and game engines completely using video generation models" in the future.
AI

OpenAI Considering Monthly Subscription Prices as High as $2000 for New AI Models (theinformation.com) 61

OpenAI executives are considering premium subscriptions for advanced language models, including the reasoning-focused Strawberry and flagship Orion, with potential monthly prices ranging up to $2,000, The Information reported Thursday, citing a source.

The move reflects growing concerns about covering operational costs for ChatGPT, which currently generates approximately $2 billion annually from $20 monthly subscriptions, the report added. More sophisticated models like Strawberry and Orion may require additional computing power, potentially increasing expenses.
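
As a rough sanity check, the reported figures imply on the order of 8 million paying subscribers. A back-of-the-envelope sketch, assuming (per the report's framing) that essentially all of that revenue comes from the standard $20-a-month plan:

```python
# Back-of-the-envelope estimate implied by the reported figures.
# Assumption: the ~$2 billion annual revenue comes entirely from
# $20/month subscriptions (ignoring enterprise, API, and other revenue).
annual_revenue = 2_000_000_000   # dollars per year, per the report
monthly_price = 20               # dollars per month

implied_subscribers = annual_revenue / (monthly_price * 12)
print(f"Implied paying subscribers: ~{implied_subscribers:,.0f}")
# Implied paying subscribers: ~8,333,333
```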
United States

Feds Indict Musician on Landmark Massive Streaming Fraud Charges (rollingstone.com) 87

Federal investigators have indicted a North Carolina man over a scheme in which he allegedly used bot accounts and hundreds of thousands of AI-generated songs to earn more than $10 million in royalty payments from the major streaming services. RollingStone: The case is a landmark development in the still-developing music streaming market, with the U.S. Attorney's Office for the Southern District of New York calling it the first criminal case involving artificially inflated music streaming. In the indictment, the prosecutors say that for the past seven years, North Carolina musician Michael Smith had been running a complex music streaming manipulation scheme to fraudulently profit off of billions of streams from bot accounts. "At a certain point in the charged time period, Smith estimated that he could use the Bot Accounts to generate approximately 661,440 streams per day, yielding annual royalties of $1,207,128," the prosecutors said in the indictment announcement.
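
Those two figures are internally consistent -- they work out to exactly half a cent per stream, in the ballpark of typical per-stream royalty rates. A quick back-of-the-envelope check, assuming a 365-day year and a flat per-stream payout:

```python
# Consistency check of the figures quoted in the indictment.
# Assumptions: a 365-day year and a flat payout per stream;
# real royalty rates vary by service and agreement.
streams_per_day = 661_440
annual_royalties = 1_207_128     # dollars, per the indictment

streams_per_year = streams_per_day * 365   # 241,425,600 streams
rate = annual_royalties / streams_per_year
print(f"Implied payout per stream: ${rate:.4f}")
# Implied payout per stream: $0.0050
```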

Smith, 52, was charged with wire fraud conspiracy, wire fraud and money laundering conspiracy, charges that together carry a maximum of 60 years in prison if convicted. "Through his brazen fraud scheme, Smith stole millions in royalties that should have been paid to musicians, songwriters, and other rights holders whose songs were legitimately streamed," said Damian Williams, U.S. Attorney for the Southern District of New York. "Today, thanks to the work of the FBI and the career prosecutors of this Office, it's time for Smith to face the music."

AI

OpenAI Co-Founder Raises $1 Billion For New Safety-Focused AI Startup 21

Safe Superintelligence (SSI), co-founded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion to develop safe AI systems that surpass human capabilities. The company, valued at $5 billion, plans to use the funds to hire top talent and acquire computing power, with investors including Andreessen Horowitz, Sequoia Capital, and DST Global. Reuters reports: Sutskever, 37, is one of the most influential technologists in AI. He co-founded SSI in June with Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever is chief scientist and Levy is principal scientist, while Gross is responsible for computing power and fundraising. Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on."

SSI is currently very much focused on hiring people who will fit in with its culture. Gross said they spend hours vetting whether candidates have "good character", and are looking for people with extraordinary capabilities rather than overemphasizing credentials and experience in the field. "One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype," he added. SSI says it plans to partner with cloud providers and chip companies to fund its computing power needs but hasn't yet decided which firms it will work with. AI startups often work with companies such as Microsoft and Nvidia to address their infrastructure needs.

Sutskever was an early advocate of scaling, a hypothesis that AI models would improve in performance given vast amounts of computing power. The idea and its execution kicked off a wave of AI investment in chips, data centers and energy, laying the groundwork for generative AI advances like ChatGPT. Sutskever said he will approach scaling in a different way than his former employer, without sharing details. "Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said. "Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."
United Kingdom

Microsoft's Inflection Acquihire Is Too Small To Matter, Say UK Regulators (theregister.com) 3

The Register's Brandon Vigliarolo reports: Microsoft's "acquihire" of Inflection AI was today cleared by UK authorities on the grounds that the startup isn't big enough for its absorption by Microsoft to affect competition in the enterprise AI space. The Competition and Markets Authority (CMA) confirmed the conclusion of its investigation by publishing a summary of its decision. While the CMA found that Microsoft's recruitment of Inflection co-founders Mustafa Suleyman and Karen Simonyan, along with other Inflection employees, in March 2024 to lead Microsoft's new AI division did create a relevant merger situation, a bit of digging indicated everything was above board.

As we explained when the CMA kicked off its investigation in July, the agency's definition of relevant merger situations includes instances where two or more enterprises have ceased to be distinct, and when the deal either exceeds 70 million pounds in value or accounts for at least 25 percent of the national supply of a good or service. In both cases, the CMA determined [PDF], the Microsoft/Inflection deal met the criteria. As to whether the matter could lead to a substantial lessening of competition, that's where the CMA decided everything was OK.

"Prior to the transaction, Inflection had a very small share of UK domain visits for chatbots and conversational AI tools and ... had not been able to materially increase or sustain its chatbot user numbers," the CMA said. "Competitors did not regard Inflection's capabilities with regard to EQ [emotional intelligence, which was an Inflection selling point] or other product innovation as a material competitive constraint." In addition, the CMA said Inflection's foundational model offering wouldn't exert any "material competitive constraint" on Microsoft or other enterprise foundational model suppliers as none of the potential Inflection customers the CMA spoke with during its probe identified any features that made Inflection's software more attractive than other brands. Ouch.
