Power

Could Sand Be the Next Lithium? Searching for Better Renewable Energy-Storing Batteries (msn.com) 135

"The green energy revolution still faces a huge obstacle: a lack of long-term, cost-efficient renewable storage," writes the Washington Post.

But then they check in on a Finnish start-up running the world's first commercial-scale sand battery, which uses solar panels and wind turbines to heat sand-filled vats (up to 1,000 degrees) to back up district heating networks: The sand can hold onto the power for weeks or months at a time — a clear advantage over the lithium ion battery, the giant of today's battery market, which usually can hold energy for only a number of hours.
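The heat-storage arithmetic behind a sand battery is simple: stored energy scales with the mass of sand, its specific heat, and the temperature rise. As a rough illustration (the mass and temperature figures below are assumptions for the sketch, not specifications of the Finnish plant):

```python
# Back-of-the-envelope estimate of how much heat a sand battery can hold.
# All figures are illustrative assumptions: sand's specific heat is
# roughly 0.8 kJ/(kg*K); mass and temperature swing are made up.

SPECIFIC_HEAT_SAND = 800      # J/(kg*K), approximate
mass_kg = 100_000             # 100 tonnes of sand (assumed)
delta_t = 500                 # heated ~500 K above its resting temperature (assumed)

energy_joules = mass_kg * SPECIFIC_HEAT_SAND * delta_t
energy_mwh = energy_joules / 3.6e9   # 1 MWh = 3.6e9 joules

print(f"Stored heat: {energy_mwh:.1f} MWh")   # → Stored heat: 11.1 MWh
```

Even this modest assumed configuration holds on the order of ten megawatt-hours of heat, which is why slow-leaking thermal mass can back up a district heating network for weeks.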

Unlike fossil fuels, which can be easily transported and stored, solar and wind supplies fluctuate. Most of the renewable power that isn't used immediately is lost. The solution is storage innovation, many industry experts agree. In addition to their limited capacity, lithium ion batteries, which are used to power everything from mobile phones to laptops to electric vehicles, tend to fade with every recharge and are highly flammable, resulting in a growing number of deadly fires across the world. The extraction of cobalt, the lucrative raw material used in lithium ion batteries, also relies on child labor. U.N. agencies have estimated that 40,000 boys and girls work in the industry, with few safety measures and paltry compensation. These serious environmental and human rights challenges pose a problem for the electric vehicle industry, which requires a huge supply of critical minerals.

So investors are now pouring money into even bigger battery ventures. More than $900 million has been invested in clean storage technologies since 2021, up from $360 million the year before, according to the Long Duration Energy Storage Council, an organization launched after that year's U.N. climate conference to oversee the world's decarbonization. The group predicts that by 2040, large-scale, renewable energy storage investments could reach $3 trillion. That includes efforts to turn natural materials into batteries. Once-obscure start-ups, experimenting with once-humble commodities, are suddenly receiving millions in government and private funding. There's the multi-megawatt CO2 battery in Sardinia, a rock-based storage system in Tuscany, and a Swiss company that's moving massive bricks along a 230-foot tall building to store and generate renewable energy. One Danish battery start-up, which stores energy from molten salt, is sketching out plans to deploy power plants in decommissioned coal mines across three continents...

But in order to succeed, natural batteries will need to provide the same kind of steady power as fossil fuels, at scale. Whether that can be achieved remains to be seen, say energy experts. And the industry may be subject to the same pitfalls that loom over the renewable energy sector at large: Projects will need to be constructed from scratch, and they might only be adopted in developed countries that can afford such experimentation. Lovschall-Jensen, the CEO of a Danish molten salt-based storage start-up called Hyme, says the challenge will be maintaining the same standards to which the modern world has become accustomed: receiving power, on demand, with the flip of a switch.

He believes that natural batteries, though still in their infancy, can serve that goal.

Programming

72-Year-Old C++ Creator Bjarne Stroustrup Shares Life Advice (youtube.com) 47

72-year-old Bjarne Stroustrup invented C++ (first released in 1985). 38 years later, he gave a short interview for Honeypot.io (which calls itself "Europe's largest tech-focused job platform") offering his own advice for life: Don't overspecialize. Don't be too sure that you know the future. Be flexible, and remember that careers and jobs are a long-term thing. Too many young people think they can optimize something, and then they find they've spent a couple of years or more specializing in something that may not have been the right thing. And in the process they burn out, because they haven't spent enough time building up friendships and having a life outside computing.

I meet a lot of sort of — I don't know what you call them, "junior geeks"? — that just think that the only thing that matters is the speciality of computing — programming or AI or graphics or something like that. And — well, it isn't... And if they do nothing else, well — if you don't communicate your ideas, you can just as well do Sudoku... You have to communicate. And a lot of sort of caricature nerds forget that. They think that if they can just write the best code, they'll change the world. But you have to be able to listen. You have to be able to communicate with your would-be users and learn from them. And you have to be able to communicate your ideas to them.

So you can't just do code. You have to do something about culture and how to express ideas. I mean, I never regretted the time I spent on history and on math. Math sharpens your mind, history gives you some idea of your limitations and what's going on in the world. And so don't be too sure. Take time to have a balanced life.

And be ready for the opportunity. I mean, a broad-based education, a broad-based skill set — which is what you build up when you educate, you're basically building a portfolio of skills — means that you can take advantage of an opportunity when it comes along. You can recognize it sometimes. We have lots of opportunities. But a lot of them, we either can't take advantage of, or we don't notice. It was my fairly broad education — I've done standard computer science, I've done compilers, I've done multiple languages... I think I knew two dozen at the time. And I have done machine architecture, I've done operating systems. And that skill set turned out to be useful.

At the beginning of the video, Stroustrup jokes that it's hard to give advice — and that it's at least as difficult as it is to take advice.

Earlier this year, Bjarne also told the same site the story of how he became a programmer by mistake — misreading a word when choosing what to study after his high school exams. Stroustrup had thought he was signing up for an applied mathematics course, which instead turned out to be a class in computer science...
AMD

AMD Announces Radeon RX 7800 XT and Radeon RX 7700 XT (arstechnica.com) 9

AMD on Friday announced its long-awaited middle members of the Radeon RX 7000 series, the Radeon RX 7800 XT and Radeon RX 7700 XT. From a report: Today, the company is finally filling in that gap with the new Radeon RX 7800 XT and RX 7700 XT, both advertised as 1440p graphics cards and available starting at $499 and $449, respectively. Both cards will be available on September 6. And most Radeon RX 6000 and RX 7000 GPUs sold between now and September 30 will come with a free copy of Bethesda's upcoming "Skyrim in space" title, Starfield.

The RX 7700 XT and 7800 XT are based on the same RDNA 3 graphics architecture as the other 7000-series GPUs, which means a more efficient manufacturing process than the RX 6000 series, DisplayPort 2.1 support, and hardware acceleration for encoding with the AV1 video codec, which promises game streamers either higher-quality video at the same bitrate as older codecs or the same quality with a lower bitrate. AMD compared the 7800 XT and 7700 XT favorably to Nvidia's $600 upper-midrange RTX 4070 and the $500 16GB version of the RTX 4060 Ti. The new Radeon cards also support FidelityFX Super Resolution (FSR) version 3, a new version of AMD's GPU-agnostic AI upscaling technology that also promises extra AI-generated frames a la Nvidia's proprietary DLSS 3 and DLSS Frame Generation feature. But unlike Nvidia, AMD isn't restricting FSR 3 to its latest cards, and users of RX 6000-series cards plus recent Nvidia GeForce and Intel Arc cards will be able to benefit, too, at least when games start supporting it.

The Internet

Political Polarization Toned Down Through Anonymous Online Chats (arstechnica.com) 293

An anonymous reader quotes a report from Ars Technica: Political polarization in the US has become a major issue, as Republicans and Democrats increasingly inhabit separate realities on topics as diverse as election results and infectious diseases. [...] Now, a team of researchers has tested whether social media can potentially help the situation by getting people with opposite political leanings talking to each other about controversial topics. While this significantly reduced polarization, it appeared to be more effective for Republican participants. The researchers zeroed in on two concepts to design their approach. The first is the idea that simply getting people to communicate across the political divide might give them the sense that at least some of their opponents aren't as extreme as they're often made out to be. The second is that anonymity would allow people to focus on the content of their discussion, rather than worrying about whether what they were saying could be traced back to them.

The researchers realized that they couldn't have any sort of control over conversations on existing social networks. So, they built their own application and hired professionals to do the graphics, support, and moderation. [...] People were randomly assigned to a few conditions. Some didn't use the app at all and were simply asked to write an essay on one of the topics under consideration (immigration or gun control). The rest were asked to converse on the platform about one of these topics. Every participant in these conversations was paired with a member of the opposing political party. Their partners were either unlabeled, labeled as belonging to the opposing party, or labeled as belonging to the same party (although the latter is untrue). Both before and after use of the app, participants answered questions about their view of politicized issues, members of their own party, and political opponents. These were analyzed in terms of issues and social influences, as well as rolled into a single index of polarization for the analysis.

The conversations appeared to have an effect, with polarization lowered by about a quarter of a standard deviation among those who engaged with political opponents that were labeled accordingly. Somewhat surprisingly, conversation partners who were mislabeled had a nearly identical effect, presumably because they suggested that a person's own party contained a diversity of perspectives on the topic. In cases where no party affiliation was given, the depolarization was smaller (0.15 standard deviations). The striking thing is that most of the change came from Republican participants. There, polarization was reduced by 0.4 standard deviations. In contrast, Democratic participants only saw it drop by 0.1 standard deviations -- a change that wasn't statistically significant. The error bars of the two groups of party members overlapped, however, so while large, it's not clear what this difference might tell us. The researchers went back and ran the conversations through sentiment analysis and focused on people whose polarization had dropped the most. They found that their conversation partners used less heated language at the start of the conversation. So it appears that displaying respect for your political opponents can still make a difference, at least in one-on-one conversations. While the conversations had a larger impact on people's views of individual issues, it also influenced their opinion of their political opponents more generally, and the difference between the two effects wasn't statistically significant.
The findings have been published in the journal Nature Human Behaviour.
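The effect sizes quoted above (reductions of 0.4, 0.25, 0.15, and 0.1 standard deviations) are standardized scores: the average change in the polarization index divided by its spread across participants. A minimal sketch of that computation, using made-up numbers rather than the study's actual data:

```python
# Illustrative computation of a standardized effect size, the unit in which
# the study reports depolarization. The survey values below are invented
# for demonstration; the real data is in the published paper.
import statistics

before = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.73, 0.51]  # polarization index, pre-chat
after  = [0.55, 0.50, 0.63, 0.45, 0.60, 0.54, 0.66, 0.47]  # same participants, post-chat

sd_before = statistics.stdev(before)                          # spread of the index
mean_change = statistics.mean(b - a for b, a in zip(before, after))
effect_size = mean_change / sd_before   # change expressed in standard deviations

print(f"Depolarization: {effect_size:.2f} SD")
```

Expressing changes in standard deviations is what lets the paper compare groups (Republicans vs. Democrats) measured on the same survey scale.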
United Kingdom

UK To Spend $127M in Global Race To Produce AI Chips (theguardian.com) 24

The UK government will spend $127m to try to win a toe-hold for the nation in the global race to produce computer chips used to power artificial intelligence. From a report: Taxpayer money will be used as part of a drive to build a national AI resource in Britain, similar to those under development in the US and elsewhere. It is understood that the funds will be used to order key components from major chipmakers Nvidia, AMD and Intel. But an official briefed on the plans told the Guardian that the $127m offered by the government is far too low relative to investment by peers in the EU, US and China. The official confirmed, in a move first reported by the Telegraph, which also revealed the investment, that the government is in advanced stages of an order of up to 5,000 graphics processing units (GPUs) from Nvidia. The company, which started out building processing capacity for computer games, has seen a sharp increase in its value as the AI race has heated up. Its chips can run large language models such as the one behind ChatGPT.
PlayStation (Games)

Fan-Made Game Reimagines 'Twin Peaks' with PS1-Style Graphics (engadget.com) 17

An anonymous reader shared this report from IGN: A demo for the unofficial fan game Twin Peaks: Into the Night has been released, allowing players to explore the weird and wonderful world of David Lynch and Mark Frost's '90s TV show in a PS1-style adaptation... developed by Jean Manzoni and Lucas Guibert of the Blue Rose Team. The game is now available to download on PC via itch.io, with its creators welcoming feedback on the gameplay experience... "We hope you'll enjoy playing it. As a quick reminder, this is a free fan game made by a very small team of two on our free time. Please take this into consideration... The demo is intended to show you the direction we're taking, and we've put our hearts at it. We're already working on the next release."

Although the game shares no affiliation with the show or its creators, it promises an "experience that will immerse you directly into the unique atmosphere of the show" by offering players the opportunity to step into the shoes of Cooper to solve the mystery while enjoying a slice of cherry pie and a damn fine cup of coffee.

More details from Engadget: The graphics are retro and decidedly PS1-flavored, which makes sense given how the show premiered in 1990. The gameplay looks to be full of exploration, complete with conversations with the town's many oddball residents, though there's a survival horror element reminiscent of the original Resident Evil titles. This is also an appropriate design choice, as the show pits Agent Cooper against foes both physical and supernatural...

The creators have announced that the game will be free when it launches, so that should clear up any potential legal hurdles moving forward.

IT

DirectX 12 Support Comes To CrossOver on Mac With Latest Update (arstechnica.com) 18

Codeweavers took to its official forums today to announce the release of CrossOver 23.0.0, the new version of its software that aims to make running Windows software and games easier on macOS, Linux, and ChromeOS systems. From a report: CrossOver 23 has updated to Wine 8.0.1, and it's loaded with improvements across all its platforms. The most notable, though, is the addition of DirectX 12 support under macOS via VKD3D and MoltenVK. This marks the first time most Mac users have had access to software that relies on DirectX 12; previously, only DirectX 11 was supported, and that went for other software solutions like Parallels, too. This new release adds "initial support" for geometry shaders and transform feedback on macOS Ventura. Codeweavers claims that will address a lot of problems with "missing graphics or black screens in-game" in titles like MechWarrior 5: Mercenaries, Street Fighter V, Tekken 7, and Octopath Traveler.
Microsoft

Adobe and Microsoft Break Some Old Files By Removing PostScript Font Support (arstechnica.com) 97

Recent developments, such as Adobe ending support for Type 1 fonts in 2023 and Microsoft discontinuing Type 1 font support in Office apps, may impact users who manage their own fonts, potentially leading to compatibility and layout issues in older files. Ars Technica's Andrew Cunningham writes: If you want to know about the history of desktop publishing, you need to know about Adobe's PostScript fonts. PostScript fonts used vector graphics so that they could look crisp and clear no matter what size they were, and Apple licensed PostScript fonts for the original LaserWriter printer; together with publishing software like Aldus PageMaker, they made it possible to create a file that would look exactly the same on your computer screen as it did when you printed it. The most important PostScript fonts were so-called "Type 1" fonts, which Adobe initially didn't publish a specification for. From the 1980s up until roughly the early 2000s or so, if you were working in desktop publishing professionally, you were probably using Type 1 fonts.

Other companies didn't want Adobe to have a monopoly on vector-based fonts or desktop publishing, of course; Apple created the TrueType format in the early 90s and licensed it to Microsoft, which used it in Windows 3.1 and later versions. Adobe and Microsoft later collaborated on a new font format called OpenType that could replace both TrueType and PostScript Type 1, and by the mid-2000s, it had been released as an open standard and had become the predominant font format used across most operating systems and software. For a while after that, apps that had supported PostScript Type 1 fonts continued to support them, with some exceptions (Microsoft Office for Windows dropped support for Type 1 fonts in 2013). But now we're reaching an inflection point; Adobe ended support for PostScript Type 1 fonts in January 2023, a couple of years after announcing the change. Yesterday, a Microsoft Office for Mac update deprecated Type 1 font support for the continuously updated Microsoft 365 versions of Word, Excel, PowerPoint, OneNote, and Outlook for Mac (plus the standalone versions of those apps in Office 2019 and 2021). The LibreOffice suite, otherwise a good way to open ancient Word documents, stopped supporting Type 1 fonts in the 5.3 release in mid-2022.

If you began using Adobe and Microsoft's productivity apps at some point in the last 10 or 15 years and you've stuck mostly with the default fonts -- either the ones included with the software or the ones from Adobe's extensive font library -- it's not too likely that you've been using a Type 1 font unintentionally. For these kinds of users, this change will be effectively invisible. But if you install and manage your own fonts and you've been using the same ones for a while, it's possible that you created a document in 2022 that you simply won't be able to open in 2023. The change will also cause problems if you open and work with decades-old files with any kind of regularity; files that use Type 1 fonts will begin generating lots of "missing font" messages, and the substitution OpenType fonts that apps might try to use instead can introduce layout issues. You'll also either need to convert any specialized PostScript Type 1 font that you may have paid for in the past or pay for an equivalent OpenType alternative.

Google

How Google is Planning To Beat OpenAI (theinformation.com) 21

In April, Alphabet CEO Sundar Pichai took an unusual step: merging two large artificial intelligence teams -- with distinct cultures and code -- to catch up to and surpass OpenAI and other rivals. Now the test of that effort is coming, with hundreds of people scrambling to release a group of large machine-learning models -- one of the highest-stakes products the company has ever built -- this fall. The Information: The models, collectively known as Gemini, are expected to give Google the ability to build products its competitors can't, according to a person involved with Gemini's development. OpenAI's GPT-4 large-language model can understand and produce conversational text. Gemini will go beyond that, combining the text capabilities of LLMs like GPT-4 with the ability to create AI images based on a text description, similar to AI-image generators Midjourney and Stable Diffusion, this person said. Gemini's image capabilities haven't been previously reported.

Google employees have also discussed using Gemini to offer features like analyzing charts or creating graphics with text descriptions and controlling software using text or voice commands. Google is betting on Gemini to power services ranging from its Bard chatbot, which competes with OpenAI's ChatGPT, to enterprise apps like Google Docs and Slides. Google also wants to charge app developers for access to Gemini through its Google Cloud server-rental unit. Google Cloud currently sells access to more primitive Google-made AI models through a product called Vertex AI. Those new features could help Google catch up with Microsoft, which has raced ahead with new AI features for its Office 365 apps and has also been selling access to OpenAI's models to its app customers.

Firefox

Does Desktop Linux Have a Firefox Problem? (osnews.com) 164

OS News' managing editor calls Firefox "the single most important desktop Linux application," shipping in most distros (with some users later opting for a post-installation download of Chrome).

But "I'm genuinely worried about the state of browsers on Linux, and the future of Firefox on Linux in particular..." While both GNOME and KDE nominally invest in their own two browsers, GNOME Web and Falkon, their uptake is limited and releases few and far between. For instance, none of the major Linux distributions ship GNOME Web as their default browser, and it lacks many of the features users come to expect from a browser. Falkon, meanwhile, is updated only sporadically, often going years between releases. Worse yet, Falkon uses Chromium through QtWebEngine, and GNOME Web uses WebKit (which are updated separately from the browser, so browser releases are not always a solid metric!), so both are dependent on the goodwill of two of the most ruthless corporations in the world, Google and Apple respectively.

Even Firefox itself, even though it's clearly the browser of choice of distributions and Linux users alike, does not consider Linux a first-tier platform. Firefox is first and foremost a Windows browser, followed by macOS second, and Linux third. The love the Linux world has for Firefox is not reciprocated by Mozilla in the same way, and this shows in various places where issues fixed and addressed on the Windows side are ignored on the Linux side for years or longer. The best and most visible example of that is hardware video acceleration. This feature has been a default part of the Windows version since forever, but it wasn't enabled by default for Linux until Firefox 115, released only in early July 2023. Even then, the feature is only enabled by default for users of Intel graphics — AMD and Nvidia users need not apply. This lack of video acceleration was — and for AMD and Nvidia users, still is — a major contributing factor to Linux battery life on laptops taking a serious hit compared to their Windows counterparts... It's not just hardware accelerated video decoding. Gesture support has taken much longer to arrive on the Linux version than it did on the Windows version — things like using swipes to go back and forward, or pinch to zoom on images...

I don't see anyone talking about this problem, or planning for the eventual possible demise of Firefox, what that would mean for the Linux desktop, and how it can be avoided or mitigated. In an ideal world, the major stakeholders of the Linux desktop — KDE, GNOME, the various major distributions — would get together and seriously consider a plan of action. The best possible solution, in my view, would be to fork one of the major browser engines (or pick one and significantly invest in it), and modify this engine and tailor it specifically for the Linux desktop. Stop living off the scraps and leftovers thrown across the fence from Windows and macOS browser makers, and focus entirely on making a browser engine that is optimised fully for Linux, its graphics stack, and its desktops. Have the major stakeholders work together on a Linux-first — or even Linux-only — browser engine, leaving the graphical front-end to the various toolkits and desktop environments....

I think it's highly irresponsible of the various prominent players in the desktop Linux community, from GNOME to KDE, from Ubuntu to Fedora, to seemingly have absolutely zero contingency plans for when Firefox enshittifies or dies...

AI

A New Frontier for Travel Scammers: AI-Generated Guidebooks (nytimes.com) 15

Shoddy guidebooks, promoted with deceptive reviews, have flooded Amazon in recent months. Their authors claim to be renowned travel writers.

But do they even exist?

The New York Times: The books are the result of a swirling mix of modern tools: A.I. apps that can produce text and fake portraits; websites with a seemingly endless array of stock photos and graphics; self-publishing platforms -- like Amazon's Kindle Direct Publishing -- with few guardrails against the use of A.I.; and the ability to solicit, purchase and post phony online reviews, which runs counter to Amazon's policies and may soon face increased regulation from the Federal Trade Commission. The use of these tools in tandem has allowed the books to rise near the top of Amazon search results and sometimes garner Amazon endorsements such as "#1 Travel Guide on Alaska." A recent Amazon search for the phrase "Paris Travel Guide 2023," for example, yielded dozens of guides with that exact title. One, whose author is listed as Stuart Hartley, boasts, ungrammatically, that it is "Everything you Need to Know Before Plan a Trip to Paris."

The book itself has no further information about the author or publisher. It also has no photographs or maps, though many of its competitors have art and photography easily traceable to stock-photo sites. More than 10 other guidebooks attributed to Stuart Hartley have appeared on Amazon in recent months that rely on the same cookie-cutter design and use similar promotional language. The Times also found similar books on a much broader range of topics, including cooking, programming, gardening, business, crafts, medicine, religion and mathematics, as well as self-help books and novels, among many other categories. Amazon declined to answer a series of detailed questions about the books.

China

China's Internet Giants Order $5 Billion of Nvidia Chips To Power AI Ambitions 31

According to the Financial Times, China's internet giants have ordered more than $5 billion worth of high-performance Nvidia chips for building generative AI systems. Reuters reports: Baidu, TikTok-owner ByteDance, Tencent and Alibaba have made orders worth $1 billion to acquire about 100,000 A800 processors from the U.S. chipmaker to be delivered this year, the FT reported, citing multiple people familiar with the matter. The Chinese groups had also purchased a further $4 billion worth of graphics processing units to be delivered in 2024, according to the report.

The Biden administration last October issued a sweeping set of rules designed to freeze China's semiconductor industry in place while the U.S. pours billions of dollars in subsidies into its chip industry. Nvidia offers the A800 processor in China to meet export control rules after U.S. officials asked the company to stop exporting its two top computing chips to the country for AI-related work. Nvidia's finance chief said in June that restrictions on exports of AI chips to China "would result in a permanent loss of opportunities for the U.S. industry", though the company expected no immediate material impact.
Intel

Intel's GPU Drivers Now Collect Telemetry, Including 'How You Use Your Computer' (extremetech.com) 44

An anonymous reader quotes a report from ExtremeTech: Intel has introduced a telemetry collection service by default in the latest beta driver for its Arc GPUs. You can opt out of it, but we all know most people just click "yes" to everything during a software installation. Intel's release notes for the drivers don't mention this change to how its drivers work, which is a curious omission. News of Intel adding telemetry collection to its drivers is a significant change to how its GPU drivers work. Intel has even given this new collection routine a cute name -- the Intel Computing Improvement Program. Gee, that sounds pretty wonderful. We want to improve our computing, so let's dive into the details briefly.

According to TechPowerUp, which discovered the change, Intel has created a landing page for the program that explains what is collected and what isn't. At a high level, it states, "This program uses information about your computer's performance to make product improvements that may benefit you in the future." Though that sounds innocuous, Intel provides a long list of the types of data it collects, many unrelated to your computer's performance. Those include the types of websites you visit, which Intel says are dumped into 30 categories and logged without URLs or information that identifies you, including how long and how often you visit certain types of sites. It also collects information on "how you use your computer" but offers no details. It will also identify "Other devices in your computing environment." Numerous performance-related data points are also captured, such as your CPU model, display resolution, how much memory you have, and, oddly, your laptop's average battery life.
The good news is that Intel allows you to opt out of this program, which is not the case with Nvidia. According to TechPowerUp, they don't even ask for permission! As for AMD, they not only give you a choice to opt out but they also explain what data they're collecting.
AI

Nvidia Unveils Faster Chip Aimed at Cementing AI Dominance (bloomberg.com) 18

Nvidia announced an updated AI processor that gives a jolt to the chip's capacity and speed, seeking to cement the company's dominance in a burgeoning market. From a report: The Grace Hopper Superchip, a combination graphics chip and processor, will get a boost from a new type of memory, Nvidia said Tuesday at the Siggraph conference in Los Angeles. The product relies on HBM3e, a new generation of high-bandwidth memory that is able to access information at a blazing 5 terabytes per second. The Superchip, known as GH200, will go into production in the second quarter of 2024, Nvidia said. It's part of a new lineup of hardware and software that was announced at the event, a computer-graphics expo where Chief Executive Officer Jensen Huang is speaking.
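Memory bandwidth matters for AI because generating each token of output typically requires streaming the model's weights through the processor; on that common rule of thumb, bandwidth sets a ceiling on token rate. A rough sketch of what the quoted 5 TB/s figure implies (the model size and weight precision below are assumptions for illustration, not anything from the announcement):

```python
# Rough sense of what 5 TB/s of memory bandwidth means for AI inference,
# assuming token generation is memory-bandwidth-bound (each token streams
# all weights once). Model size and precision are illustrative assumptions.

BANDWIDTH = 5e12          # bytes/s, the HBM3e figure quoted for the GH200
params = 70e9             # a hypothetical 70-billion-parameter model
bytes_per_param = 2       # 16-bit weights (assumed)

model_bytes = params * bytes_per_param
tokens_per_second = BANDWIDTH / model_bytes

print(f"Bandwidth-bound ceiling: ~{tokens_per_second:.0f} tokens/s")  # → ~36 tokens/s
```

The real achievable rate depends on batch size, precision, and software, but the calculation shows why each jump in memory bandwidth translates fairly directly into faster text generation.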

[...] The latest Nvidia products are designed to spread generative AI -- and its underlying hardware -- to even more industries by making the technology simpler to use. A new version of the company's AI Enterprise software will ease the process of training the models, which can then generate text, images and even video based on simple prompts. The lineup also includes new chips for workstations, computers designed for heavy workloads. New AI Workbench software, meanwhile, helps users switch their work on AI models between different types of computers.

Hardware

Gigabyte's New RTX 4060 GPU Fits Three Fans on a Low-Profile Design (theverge.com) 40

Gigabyte has launched a new low-profile GeForce RTX 4060 OC graphics card that's designed to fit into mini PC builds. Unlike many of the other GPUs meant for compact PCs, this one comes with three fans instead of just two or one. From a report: That three-fan setup might make it a bit difficult to fit into some small form factor cases, as the card measures 182mm long. But it makes up for that with its thin 40mm height and 69mm width.

Despite its slender design, the card comes outfitted with two DisplayPort and two HDMI ports as well. It also comes with a low-profile bracket, which is a nice touch. While it's nice that Gigabyte has made a 40-series card specifically for low-profile builds, Nvidia's GeForce RTX 4060 isn't that great of a card to begin with. The GPU barely outpaces the older (and slightly more expensive) 3060 Ti, as it comes with an underwhelming 8GB of VRAM and a 128-bit memory bus.

AMD

AMD Announces Radeon Pro W7600 and W7500 (anandtech.com) 6

As AMD continues to launch their full graphics product stacks based on their latest RDNA 3 architecture GPUs, the company is now preparing their next wave of professional cards under the Radeon Pro lineup. Following the launch of their high-end Radeon Pro W7900 and W7800 graphics cards back in the second quarter of this year, today the company is announcing the low-to-mid-range members of the Radeon Pro W7000 series: the Radeon Pro W7500 and Radeon Pro W7600. From a report: Both based on AMD's monolithic Navi 33 silicon, the latest Radeon Pro parts will hit the shelves a bit later this quarter. The two cards, as a whole, will make up what AMD defines as the mid-range segment of their professional video card market. And like their flagship counterparts, AMD is counting on a combination of RDNA 3's advanced features, including AV1 encoding support, improved compute and ray tracing throughput, and DisplayPort 2.1 outputs to help drive sales of the new video cards. That, and as is tradition, significantly undercutting NVIDIA's competing professional cards.

Not unlike their high-end counterparts, for this generation AMD has decided to expand the size of their mid-range pro graphics lineup. Whereas the previous generation had the sole W6600 (and W6400 at entry-level), the W7000 series gets both a W7600 card and a W7500 card. Besides the obvious performance difference, the other big feature separating the two cards is power consumption. The Radeon Pro W7600 is a full-height video card running at 130W, while the W7500 is explicitly designed as a sub-75W card that can be powered entirely by a PCIe slot, coming in at a cool 70 watts.
The Radeon Pro W7600 is priced at $599 -- $50 cheaper than its predecessor -- whereas the W7500 will bring up the rear of the W7000 product stack at $429.

Windows

Lenovo Is Working On a Windows PC Gaming Handheld Called the 'Legion Go' (windowscentral.com) 17

According to Windows Central, Lenovo is working on a handheld gaming PC dubbed "Legion Go," featuring Windows 11 and Ryzen chips. From the report: While details are scant right now, we understand this will sport AMD's new Phoenix processors, which the chip firm describes as ultra-thin parts focused on gaming, AI, and graphics for ultrabooks. The fact that the Legion Go will sport Ryzen chips pretty much guarantees that this is a Windows PC gaming handheld, part of Lenovo's popular gaming "Legion" brand. As of writing, there's no information on exactly when this device could become available, or if, indeed, it'll become available at all.

According to our information, the Legion Go could sport an 8-inch screen, making it larger than the ASUS ROG Ally or the Steam Deck, both of which have a 7-inch display. Console games ported to PC are often designed for larger monitors or even TVs, and on smaller screens, UI elements can be difficult to see, especially if the game doesn't have a UI scaling option. A larger display could give the Legion Go a decent advantage over its competitors if it remains lightweight and balanced, which of course remains to be seen. AMD describes its Phoenix 7040 series chips as "ultra-thin" parts for powerful but elegant ultrabook-style devices. They should lend themselves well to a device like the Legion Go, supporting 15W low-power states for lightweight games and maximized battery life, similar to the Steam Deck and ROG Ally. The Z1 Extreme in the ASUS ROG Ally can perform with a TDP below 15W, however, which could give the ROG Ally some advantages there. There's every chance the Legion Go could have other configurations we're unaware of yet, though; we'll just have to wait and see.

Facebook

Meta and Qualcomm Team Up To Run Big AI Models on Phones (cnbc.com) 17

Qualcomm and Meta will enable the social networking company's new large language model, Llama 2, to run on Qualcomm chips on phones and PCs starting in 2024, the companies announced today. From a report: So far, LLMs have primarily run in large server farms, on Nvidia graphics processors, due to the technology's vast needs for computational power and data, boosting Nvidia stock, which is up more than 220% this year. But the AI boom has largely missed the companies that make leading-edge processors for phones and PCs, like Qualcomm. Its stock is up about 10% so far in 2023, trailing the NASDAQ's gain of 36%. The announcement on Tuesday suggests that Qualcomm wants to position its processors as well-suited for AI "on the edge" -- on a device -- instead of "in the cloud." If large language models can run on phones instead of in large data centers, it could push down the significant cost of running AI models, and could lead to better and faster voice assistants and other apps.

AI

Crypto Miner Hive Drops 'Blockchain' From Name in Pivot To AI (bloomberg.com) 19

The crypto-mining company formerly known as Hive Blockchain Technologies is pivoting to artificial intelligence and web3, and has changed its name accordingly. From a report: The Vancouver-based miner has dropped the "blockchain" marker and said that its new branding as Hive Digital Technologies is intended to reflect "its mission to drive advancements" in AI applications like ChatGPT, and to "support the new web3 ecosystem."

Hive intends to use its existing fleet of Nvidia graphics processing units "for computational tasks on a massive scale," according to a July 12 filing with the US Securities and Exchange Commission. The vast majority of crypto-mining companies are focused on Bitcoin and use specialized chips that are different from so-called GPUs. Hive is among a handful of companies that deploy GPUs at scale to mine Ether, the second-largest cryptocurrency by market value. A recent change to the Ethereum blockchain -- its 2022 switch to proof of stake -- means these GPUs are no longer needed for mining, which is a problem for the Ether miners who hold large stocks of them.

Transportation

Ford Gets $9.2 Billion To Help US Catch Up With China's EV Dominance (bloomberg.com) 82

The US government is providing a conditional $9.2 billion loan to Ford for the construction of three battery factories, the largest government backing for a US automaker since the 2009 financial crisis. "The enormous loan [...] marks a watershed moment for President Joe Biden's aggressive industrial policy meant to help American manufacturers catch up to China in green technologies," reports Bloomberg. From the report: The new factories that will eventually supply Ford's expansion into electric vehicles are already under construction in Kentucky and Tennessee through a joint venture called BlueOval SK, owned by the Michigan automaker and South Korean battery giant SK On Co. Ford plans to make as many as 2 million EVs by 2026, a huge increase from the roughly 132,000 it produced last year. The three-factory buildout by BlueOval plus an adjacent Ford EV assembly unit have an estimated price tag of $11.4 billion. BlueOval was previously awarded subsidies by both state governments. That means taxpayers would be providing low-interest financing for almost all of the cost.

Ford's cars and SUVs made with domestic batteries will also be eligible for billions of dollars in incentives embedded in the Inflation Reduction Act's $370 billion in clean-energy funding, part of the historic climate measure narrowly passed into law about a year ago. The US government will subsidize manufacturing of batteries, and buyers could qualify for additional tax rebates of up to $7,500 per vehicle.

The rush of incentives, government lending and private-sector investment has led to a manufacturing boom in the wake of the IRA. More than 100 battery and electric-vehicle production projects are announced or already under construction in the US, representing about $200 billion in total investments. "Not since the advent of the auto industry 100 years ago have we seen an investment like that," says Gary Silberg, KPMG's global automotive sector leader.
