Wine

Wine 9.0 Released (9to5linux.com) 15

Version 9.0 of Wine, the free and open-source compatibility layer that lets you run Windows apps on Unix-like operating systems, has been released. "Highlights of Wine 9.0 include an experimental Wayland graphics driver with features like basic window management, support for multiple monitors, high-DPI scaling, relative motion events, as well as Vulkan support," reports 9to5Linux. From the report: The Vulkan driver has been updated to support Vulkan 1.3.272 and later, the PostScript driver has been reimplemented to work from Windows-format spool files and avoid any direct calls from the Unix side, and there's now a dark theme option on WinRT theming that can be enabled in WineCfg. Wine 9.0 also adds support for many more instructions to Direct3D 10 effects, implements the Windows Media Video (WMV) decoder DirectX Media Object (DMO), implements the DirectShow Audio Capture and DirectShow MPEG-1 Video Decoder filters, and adds support for video and system streams, as well as audio streams to the DirectShow MPEG-1 Stream Splitter filter.

Desktop integration has been improved in this release, allowing users to close the desktop window in full-screen desktop mode via the "Exit desktop" entry in the Start menu, and adding support for exporting URL/URI protocol associations as URL handlers to the Linux desktop. Audio support has been enhanced in Wine 9.0 with the implementation of several DirectMusic modules, DLS1 and DLS2 sound font loading, support for the SF2 format for compatibility with Linux standard MIDI sound fonts, Doppler shift support in DirectSound, an Indeo IV50 Video for Windows decoder, and MIDI playback in dmsynth.

Among other noteworthy changes, Wine 9.0 brings loader support for ARM64X and ARM64EC modules, along with the ability to run existing Windows binaries on ARM64 systems and initial support for building Wine for the ARM64EC architecture. There's also a new 32-bit x86 emulation interface, a new WoW64 mode that supports running 32-bit apps on recent macOS versions that don't support 32-bit Unix processes, support for DirectInput action maps to improve compatibility with many old video games that map controller inputs to in-game actions, as well as Windows 10 as the default Windows version for new prefixes. Last but not least, the kernel has been updated to support address space layout randomization (ASLR) for modern PE binaries, improve memory allocation performance through a Low Fragmentation Heap (LFH) implementation, and support memory placeholders in the virtual memory allocator so apps can reserve virtual address space. Wine 9.0 also adds support for smart cards, adds support for Diffie-Hellman keys in BCrypt, implements the Negotiate security package, adds support for network interface change notifications, and fixes many bugs.
For a full list of changes, check out the release notes. You can download Wine 9.0 from WineHQ.
China

China's Chip Imports Fell By a Record 15% Due To US Sanctions, Globally Weaker Demand (tomshardware.com) 49

According to Bloomberg, China's chip import value dropped significantly by 15.4% in 2023, from $413 billion to $349 billion. "Chip sales were down across the board in 2023 thanks to a weakening global economy, but China's chip imports indicate that its economy might be in trouble," reports Tom's Hardware. "The country's inability to import cutting-edge silicon is also certainly a factor in its decreasing chip imports." From the report: In 2022, the value of chip imports to China stood at $413 billion, and in 2023 the country only imported chips worth a total of $349 billion, a 15.4% decrease in value. That a drop happened at all isn't surprising; even TSMC, usually considered one of the most advanced fabbing corporations in the world, saw its sales decline by 4.5%. However, a 15.4% decrease in import value is much more significant, and indicates China has issues of its own beyond weaker demand across the world.

China's ongoing economic issues, such as its deflation, could play a part. Deflation occurs when currency increases in value, the polar opposite of inflation, in which currency loses value. As inflation has been a significant problem for countries such as the U.S. and UK, deflation might sound much more appealing, but economically it can be problematic. A deflationary economy encourages consumers not to spend, since money is increasing in value, meaning buyers can purchase more if they wait. In other words, deflation decreases demand for products like semiconductors.

However, shipment volume only decreased by 10.8% compared to the 15.4% decline in value, meaning the chips that China didn't buy in 2023 were particularly valuable. This likely reflects U.S. sanctions on China, which prevent it from buying top-end graphics cards, especially from Nvidia. The H100, H200, GH200, and the RTX 4090 are illegal to ship to China, and they're some of Nvidia's best GPUs. The moving target for U.S. sanctions could also make exporters and importers more tepid, as it's hard to tell whether more sanctions could suddenly upend plans and business deals.
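The arithmetic behind that inference is easy to check. The sketch below uses only the percentages quoted above and shows that the implied average value per imported chip fell by roughly 5%, consistent with the costliest parts dropping out of the mix.

```python
# Back-of-envelope check using only the figures quoted above.
value_2022 = 413e9                       # USD, chip imports in 2022
value_2023 = 349e9                       # USD, chip imports in 2023
value_decline = 1 - value_2023 / value_2022
print(f"value decline:  {value_decline:.1%}")      # ~15.5% (reported as 15.4%)

volume_decline = 0.108                   # reported 10.8% drop in shipment volume
# If units fell 10.8% while value fell 15.4%, the average value per chip changed by:
avg_price_change = (1 - 0.154) / (1 - volume_decline) - 1
print(f"avg value/chip: {avg_price_change:.1%}")   # ~ -5.2%
```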

Classic Games (Games)

Atari Will Release a Mini Edition of Its 1979 Atari 400 (Which Had An 8-Bit MOS 6502 CPU) (extremetech.com) 64

A 1979 Atari 8-bit system re-released in a tiny form factor? Yep.

Retro Games Ltd. is releasing a "half-sized" version of Atari's very first home computer, the Atari 400, "emulating the whole 8-bit Atari range, including the 400/800, XL and XE series, and the 5200 home console." ("In 1979 Atari brought the computer age home," remembers a video announcement, saying the new device represents "The iconic computer now reimagined.")

More info from ExtremeTech: For those of you unfamiliar with it, the Atari 400 and 800 were launched in 1979 as the company's first attempt at a home computer that just happened to double as an incredible game system. That's because, in addition to a faster variant of the excellent 8-bit MOS 6502 CPU found in the Apple II and Commodore PET, they also included Atari's dedicated ANTIC, GTIA, and POKEY coprocessors for graphics and sound, making the Atari 400 and 800 the first true gaming PCs...

If it's as good as the other Retro Games systems, the [new] 400Mini will count as another feather in the cap for Atari Interactive's resurgence following its excellent Atari50 compilation, reissued Atari 2600+ console, and acquisitions of key properties including Digital Eclipse, MobyGames, and AtariAge.

The 2024 version — launching in the U.K. March 28th — will boast high-definition HDMI output at 720p 50 or 60Hz, along with five USB ports. More details from Retro Games Ltd.: Also included is THECXSTICK — a superb recreation of the classic Atari CX-40 joystick, with an additional seven seamlessly integrated function buttons. Play one of the 25 included classic Atari games, selected from a simple-to-use carousel, including all-time greats such as Berzerk, Missile Command, Lee, Millipede, Miner 2049er, M.U.L.E. and Star Raiders II, or play the games you own from USB stick. Plus save and resume your game at any time, or rewind by up to 30 seconds to help you finish those punishingly difficult classics!
Thanks to long-time Slashdot reader elfstones for sharing the article.
AI

Should Chatbots Teach Your Children? 94

"Sal Kahn, the founder and CEO of Khan Academy predicted last year that AI tutoring bots would soon revolutionize education," writes long-time Slashdot reader theodp: theodp writes: His vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could. But some education researchers say schools should be wary of the hype around AI-assisted instruction, warning that generative AI tools may turn out to have harmful or "degenerative" effects on student learning.
A ChatGPT-powered tutoring bot was tested last spring at the Khan Academy — and Bill Gates is enthusiastic about that bot and AI education in general (as well as the Khan Academy and AI-related school curriculums). From the original submission: Explaining his AI vision in November, Bill Gates wrote, "If a tutoring agent knows that a kid likes [Microsoft] Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor's lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today's text-based tutors."

The New York Times article notes that similar enthusiasm greeted automated teaching tools in the 1960s, but predictions that the mechanical and electronic "teaching machines" — which were programmed to ask students questions on topics like spelling or math — would revolutionize education didn't pan out.

So, is this time different?
Operating Systems

Biggest Linux Kernel Release Ever Welcomes bcachefs File System, Jettisons Itanium (theregister.com) 52

Linux kernel 6.7 has been released, including support for the new next-gen copy-on-write (COW) bcachefs file system. The Register reports: Linus Torvalds announced the release on Sunday, noting that it is "one of the largest kernel releases we've ever had." Among the bigger and more visible changes are a whole new file system, along with fresh functionality for several existing ones; improved graphics support for several vendors' hardware; and the removal of an entire CPU architecture. [...] The single biggest feature of 6.7 is the new bcachefs file system, which we examined in March 2022. As this is the first release of Linux to include the new file system, it definitely would be premature to trust any important data to it yet, but this is a welcome change. The executive summary is that bcachefs is a next-generation file system that, like Btrfs and ZFS, provides COW functionality. COW enables the almost instant creation of "snapshots" of all or part of a drive or volume, which enables the OS to make disk operations transactional: In other words, to provide an "undo" function for complex sets of disk write operations.
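To make the snapshot-as-undo idea concrete, here is a deliberately toy Python sketch of copy-on-write semantics. It is not bcachefs code, just the concept: a snapshot records references to the current blocks rather than copying their contents, so taking one is nearly instant, and rolling back simply restores those references.

```python
# Toy illustration of copy-on-write (COW) snapshots -- concept only,
# not how bcachefs is actually implemented.

class CowVolume:
    def __init__(self):
        self.blocks = {}       # block number -> immutable block contents
        self.snapshots = []    # each snapshot is a dict of references, not copies

    def write(self, blockno, data: bytes):
        # New data goes into a fresh object; any snapshot still referencing
        # the old block keeps seeing the old contents.
        self.blocks[blockno] = data

    def snapshot(self) -> int:
        # Copies references only, so the cost is tiny even for huge volumes.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def rollback(self, snap_id: int):
        # The "undo" operation: restore the references recorded at snapshot time.
        self.blocks = dict(self.snapshots[snap_id])


vol = CowVolume()
vol.write(0, b"original contents")
snap = vol.snapshot()
vol.write(0, b"a risky change")
vol.rollback(snap)
assert vol.blocks[0] == b"original contents"
```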

Having a COW file system on Linux isn't new. The existing next-gen file system in the kernel, Btrfs, also supports COW snapshots. The version in 6.7 sees several refinements. It inherits a feature implemented for Steam OS: Two Btrfs file systems with the same ID can be mounted simultaneously, for failover scenarios. It also has improved quota support and a new raid_stripe_tree that improves handling of arrays of dissimilar drives. Btrfs remains somewhat controversial. Red Hat banished it from RHEL years ago (although Oracle Linux still offers it) but SUSE's distros depend heavily upon it. It will be interesting to see how quickly SUSE's Snapper tool gains support for bcachefs: This new COW contender may reveal unquestioned assumptions built into the code. Since Snapper is also used in several non-SUSE distros, including Spiral Linux, Garuda, and siduction, they're tied to Btrfs as well.

The other widely used FOSS next-gen file system, OpenZFS, also supports COW, but licensing conflicts prevent ZFS from being fully integrated into the Linux kernel. So although multiple distros (such as NixOS, Proxmox, TrueNAS Scale, Ubuntu, and Void Linux) support ZFS, it must remain separate and distinct. This results in limitations, such as the ZFS Advanced Read Cache being separate from Linux's page cache. Bcachefs is all-GPL and doesn't suffer from such limitations. It aims to supply the important features of ZFS, such as integrated volume management, while being as fast as ext4 or XFS, and to surpass Btrfs in both performance and, crucially, reliability.
A full list of changes in this release can be viewed via KernelNewbies.
Television

LG Unveils the World's First Wireless Transparent OLED TV (engadget.com) 26

At CES, LG on Monday unveiled the OLED T, or as the firm describes it, "the first wireless transparent OLED TV," with 4K resolution and LG's wireless transmission tech for audio and video. Engadget: The unit also features a contrast screen that rolls down into a box at its base, which you can raise or lower with the press of a button. The OLED T is powered by LG's new Alpha 11 AI processor with four times the performance of the previous-gen chip. The extra power offers 70 percent greater graphics performance and 30 percent faster processing speeds, according to the company.

The OLED T works with the company's Zero Connect Box, which debuted on last year's M3 OLED and sends video and audio wirelessly to the TV. You connect all of your streaming devices and game consoles to that box rather than the television. The OLED T's base houses down-firing speakers, which sound surprisingly good, as well as some other components. There are backlights as well, but you can turn those off for a fully transparent look. LG says the TV will come in standalone, against-the-wall and wall-mounted options.
No word on when the TV will go on sale, or how much it would cost.
Virtualization

How 'Digital Twin' Technology Is Revolutionizing the Auto Industry (motortrend.com) 37

"Digital twin technology is one of the most significant disruptors of global manufacturing seen this century," argues Motor Trend, "and the automobile industry is embracing it in a big way." Roughly three-quarters of auto manufacturers are using digital twins as part of their vehicle development process, evolving not only how they design and develop new cars but also the way they monitor them, fix them, and even build them...

Nvidia, best known for its consumer graphics cards, also has a digital twin solution, called Omniverse, which manufacturers such as Mercedes-Benz are using to design their manufacturing processes. "Their factory planners now have every single element in the factory that they can then put in that virtual digital twin first, lay it all out, and then operate it," Danny Shapiro, VP of automotive at Nvidia said. At that point, those planners can run the entire manufacturing process virtually, ensuring every conveyor feeds the next step in the process, identifying and addressing factory floor headaches long before production begins...

Software developers can run their solutions within digital twins. That includes the code at the lowest level, basic stuff that controls ignition timing within the engine for example, all the way up to the highest level, like touchscreens responding to user inputs. "We're not just simulating the operation outside the car, but the user experience," Nvidia's Shapiro said. "We can simulate and basically run the real software that would be running in that car and display it on the screens." By bringing all these systems together virtually, developers can find and solve issues earlier, preventing costly development delays or, worse yet, buggy releases...

Using unique identifiers, manufacturers can effectively create internal digital copies of vehicles that have been produced. Those copies can be used for ongoing tests and verifications, helping to anticipate things like required maintenance or susceptibility to part failures. By using telematics, in-car services that remotely communicate a car's status back to the manufacturer in real-time, these digital twins can be updated to match the real thing. "By monitoring tire health, tire grip, vehicle weight distribution, and other critical parameters, engineers can anticipate potential problems and schedule maintenance proactively, reducing downtime and extending the vehicle's lifespan," Tactile Mobility's Tzur said.

Graphics

Nvidia Slowed RTX 4090 GPU By 11 Percent, To Make It 100 Percent Legal For Export In China (theregister.com) 22

Nvidia has throttled the performance of its GeForce RTX 4090 GPU by roughly 11%, allowing it to comply with U.S. sanctions and be sold in China. The Register reports: Dubbed the RTX 4090D, the device appeared on Nvidia's Chinese-market website Thursday and boasts performance roughly 10.94 percent lower than the model Nvidia announced in late 2022. This shows up in the form of a lower core count: 14,592 CUDA cores versus 16,384 on versions sold outside of China. Nvidia also told The Register today that the card's tensor core count has also been cut down by a similar margin, from 512 to 456 on the 4090D variant. Beyond this the card is largely unchanged, with peak clock speeds rated at 2.52 GHz, 24 GB of GDDR6X memory, and a fat 384-bit memory bus.

As we reported at the time, the RTX 4090 was the only consumer graphics card barred from sale in the Middle Kingdom following the October publication of the Biden Administration's most restrictive set of export controls. The problem was the card narrowly exceeded the performance limits on consumer cards with a total processing performance (TPP) of more than 4,800. That number is calculated by doubling the max number of dense tera-operations per second -- floating point or integer -- and multiplying by the bit length of the operation.

The original 4090 clocked a TPP of 5,285, which meant Nvidia needed a US government-issued license to sell the popular gaming card in China. Note, consumer cards aren't subject to the performance density metric that restricts the sale of much less powerful datacenter cards like the Nvidia L4. As it happens, cutting performance by 10.94 percent is enough to bring the card under the metrics that trigger the requirement for the USA's Bureau of Industry and Security (BIS) to consider an export license.
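A rough sketch of that arithmetic, following the formula described above (double the dense tera-operations per second, counting a multiply-accumulate as one operation, then multiply by the bit length). The 330.3 figure for the 4090's dense FP8 tensor rate is an assumption recalled for illustration; the article only gives the resulting TPP of 5,285.

```python
# Sketch of the export-control math described above. The 330.3 TMAC/s dense
# FP8 figure is an illustrative assumption, not a number from the article.

def tpp(dense_tops: float, bit_length: int) -> float:
    # TPP = 2 x dense tera-operations per second x bit length of the operation
    return 2 * dense_tops * bit_length

rtx_4090_tpp = tpp(330.3, 8)
print(f"RTX 4090 TPP   ~ {rtx_4090_tpp:.0f}")   # ~5285, over the 4,800 threshold

# The 4090D trims CUDA cores from 16,384 to 14,592 (and tensor cores 512 -> 456):
reduction = 1 - 14_592 / 16_384
print(f"core reduction ~ {reduction:.2%}")      # ~10.94%

# Assuming throughput scales with core count at the same clocks:
rtx_4090d_tpp = rtx_4090_tpp * (1 - reduction)
print(f"RTX 4090D TPP  ~ {rtx_4090d_tpp:.0f}")  # ~4707, under the 4,800 threshold
```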
Nvidia notes that the 4090D can be overclocked by end users, effectively allowing customers to recover some performance lost by the lower core count. "In 4K gaming with ray tracing and deep-learning super sampling (DLSS), the GeForce RTX 4090D is about five percent slower than the GeForce RTX 4090 and it operates like every other GeForce GPU, which can be overclocked by end users," an Nvidia spokesperson said in an email.
Desktops (Apple)

Inside Apple's Massive Push To Transform the Mac Into a Gaming Paradise (inverse.com) 144

Apple is reinvesting in gaming with advanced Mac hardware, improvements to Apple silicon, and gaming-focused software, aiming not to repeat its past mistakes and to capture a larger share of the gaming market. In an article for Inverse, Raymond Wong provides an in-depth overview of this endeavor, including commentary from Apple's marketing managers Gordon Keppel, Leland Martin, and Doug Brooks. Here's an excerpt from the report: Gaming on the Mac from the 1990s until 2020, when Apple made a big shift to its own custom silicon, could be boiled down to this: Apple was in a hardware arms race with the PC that it couldn't win. Mac gamers were hopeful that the switch from PowerPC to Intel CPUs starting in 2005 would turn things around, but it didn't, because by then GPUs had started becoming the more important hardware component for running 3D games, and the Mac's support for third-party GPUs could only be described as lackluster. Fast forward to 2023, and Apple has a renewed interest in gaming on the Mac, the likes of which it hasn't shown in the last 25 years. "Apple silicon has changed all that," Keppel tells Inverse. "Now, every Mac that ships with Apple silicon can play AAA games pretty fantastically. Apple silicon has been transformative of our mainstream systems that got tremendous boosts in graphics with M1, M2, and now with M3."

Ask any gadget reviewer (including myself) and they will tell you Keppel isn't just drinking the Kool-Aid because Apple pays him to. Macs with Apple silicon really are performant computers that can play some of the latest PC and console games. In three generations of desktop-class chip design, Apple has created a platform with "tens of millions of Apple silicon Macs," according to Keppel. That's tens of millions of Macs with monstrous CPU and GPU capabilities for running graphics-intensive games. Apple's upgrades to the GPUs on its silicon are especially impressive. The latest Apple silicon, the M3 family of chips, supports hardware-accelerated ray-tracing and mesh shading, features that only a few years ago didn't seem like they would ever be a priority, let alone ones that are built into the entire spectrum of MacBook Pros.

The "magic" of Apple silicon isn't just performance, says Leland Martin, an Apple software marketing manager. Whereas Apple's fallout with game developers on the Mac previously came down to not supporting specific computer hardware, Martin says Apple silicon started fresh with a unified hardware platform that not only makes it easier for developers to create Mac games for, but will allow for those games to run on other Apple devices. "If you look at the Mac lineup just a few years ago, there was a mix of both integrated and discrete GPUs," Martin says. "That can add complexity when you're developing games. Because you have multiple different hardware permutations to consider. Today, we've effectively eliminated that completely with Apple silicon, creating a unified gaming platform now across iPhone, iPad, and Mac. Once a game is designed for one platform, it's a straightforward process to bring it to the other two. We're seeing this play out with games like Resident Evil Village that launched first [on Mac] followed by iPhone and iPad."

"Gaming was fundamentally part of the Apple silicon design,â Doug Brooks, also on the Mac product marketing team, tells Inverse. "Before a chip even exists, gaming is fundamentally incorporated during those early planning stages and then throughout development. I think, big picture, when we design our chips, we really look at building balanced systems that provide great CPU, GPU, and memory performance. Of course, [games] need powerful GPUs, but they need all of those features, and our chips are designed to deliver on that goal. If you look at the chips that go in the latest consoles, they look a lot like that with integrated CPU, GPU, and memory." [...] "One thing we're excited about with this most recent launch of the M3 family of chips is that we're able to bring these powerful new technologies, Dynamic Caching, as well as ray-tracing and mesh shading across our entire line of chips," Brook adds. "We didn't start at the high end and trickle them down over time. We really wanted to bring that to as many customers as possible."

Intel

12VO Power Standard Appears To Be Gaining Steam, Will Reduce PC Cables and Costs (tomshardware.com) 79

An anonymous reader quotes a report from Tom's Hardware: The 12VO power standard (PDF), developed by Intel, is designed to reduce the number of power cables needed to power a modern PC, ultimately reducing cost. While industry uptake of the standard has been slow, a new slew of products from MSI indicates that 12VO is gaining traction.

MSI is gearing up with two 12VO-compliant motherboards, covering both Intel and AMD platforms, and a 12VO power supply that it's releasing simultaneously: The Pro B650 12VO WiFi, Pro H610M 12VO, and MSI 12VO PSU are all 'coming soon,' which presumably means they'll officially launch at CES 2024. HardwareLux got a pretty good look at MSI's offerings during its EHA (European Hardware Awards) tech tour, including the 'Project Zero' we covered earlier. One of the noticeable changes is the absence of a 24-pin ATX connector, as the ATX12VO standard uses only a ten-pin connector. The publication also saw a 12VO-compliant FSP power supply in a compact system with a thick graphics card.

A couple of years ago, we reported on FSP's 650-watt and 750-watt SFX 12VO power supplies. Apart from that, there is a 6-pin ATX12VO connector, termed an 'extra board connector' according to the manual, and an 8-pin 12V power connector for the CPU. There are two smaller 4-pin connectors that provide the 5V power needed for SATA drives; it is likely each of these connectors powers two SATA-based drives. Intel proposed the ATX12VO standard several years ago, but adoption has been slow until now. The standard is designed to provide 12V exclusively, completely removing the direct 3.3V and 5V rails. The success of the new standard will depend on the wide availability of compatible motherboards and power supplies.

AI

Will AI Be a Disaster for the Climate? (theguardian.com) 100

"What would you like OpenAI to build/fix in 2024?" the company's CEO asked on X this weekend.

But "Amid all the hysteria about ChatGPT and co, one thing is being missed," argues the Observer — "how energy-intensive the technology is." The current moral panic also means that a really important question is missing from public discourse: what would a world suffused with this technology do to the planet? Which is worrying because its environmental impact will, at best, be significant and, at worst, could be really problematic.

How come? Basically, because AI requires staggering amounts of computing power. And since computers require electricity, and the necessary GPUs (graphics processing units) run very hot (and therefore need cooling), the technology consumes electricity at a colossal rate. Which, in turn, means CO2 emissions on a large scale — about which the industry is extraordinarily coy, while simultaneously boasting about using offsets and other wheezes to mime carbon neutrality.

The implication is stark: the realisation of the industry's dream of "AI everywhere" (as Google's boss once put it) would bring about a world dependent on a technology that is not only flaky but also has a formidable — and growing — environmental footprint. Shouldn't we be paying more attention to this?

Thanks to long-time Slashdot reader mspohr for sharing the article.
NASA

NASA's Tech Demo Streams First Video From Deep Space Via Laser 24

NASA has successfully beamed an ultra-high definition streaming video from a record-setting 19 million miles away. The Deep Space Optical Communications experiment, as it is called, is part of a NASA technology demonstration aimed at streaming HD video from deep space to enable future human missions beyond Earth orbit. From a NASA press release: The [15-second test] video signal took 101 seconds to reach Earth, sent at the system's maximum bit rate of 267 megabits per second (Mbps). Capable of sending and receiving near-infrared signals, the instrument beamed an encoded near-infrared laser to the Hale Telescope at Caltech's Palomar Observatory in San Diego County, California, where it was downloaded. Each frame from the looping video was then sent "live" to NASA's Jet Propulsion Laboratory in Southern California, where the video was played in real time.
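That 101-second figure is simply the one-way light travel time over the quoted distance; a quick sanity check using nothing more than the speed of light bears it out (19 million miles is a rounded figure, hence the ~1 second difference).

```python
# One-way light travel time from roughly 19 million miles away.
miles = 19e6
km = miles * 1.609344                    # miles to kilometers
c_km_per_s = 299_792.458                 # speed of light in vacuum, km/s

print(f"{km / c_km_per_s:.0f} seconds")  # ~102 s, in line with the ~101 s reported
```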

The laser communications demo, which launched with NASA's Psyche mission on Oct. 13, is designed to transmit data from deep space at rates 10 to 100 times greater than the state-of-the-art radio frequency systems used by deep space missions today. As Psyche travels to the main asteroid belt between Mars and Jupiter, the technology demonstration will send high-data-rate signals as far out as the Red Planet's greatest distance from Earth. In doing so, it paves the way for higher-data-rate communications capable of sending complex scientific information, high-definition imagery, and video in support of humanity's next giant leap: sending humans to Mars.

Uploaded before launch, the short ultra-high definition video features an orange tabby cat named Taters, the pet of a JPL employee, chasing a laser pointer, with overlaid graphics. The graphics illustrate several features from the tech demo, such as Psyche's orbital path, Palomar's telescope dome, and technical information about the laser and its data bit rate. Taters' heart rate, color, and breed are also on display. There's also a historical link: Beginning in 1928, a small statue of the popular cartoon character Felix the Cat was featured in television test broadcast transmissions. Today, cat videos and memes are some of the most popular content online.
"Despite transmitting from millions of miles away, it was able to send the video faster than most broadband internet connections," said Ryan Rogalin, the project's receiver electronics lead at JPL. "In fact, after receiving the video at Palomar, it was sent to JPL over the internet, and that connection was slower than the signal coming from deep space. JPL's DesignLab did an amazing job helping us showcase this technology -- everyone loves Taters."
Graphics

Vera Molnar, Pioneer of Computer Art, Dies At 99 (nytimes.com) 16

Alex Williams reports via The New York Times: Vera Molnar, a Hungarian-born artist who has been called the godmother of generative art for her pioneering digital work, which started with the hulking computers of the 1960s and evolved through the current age of NFTs, died on Dec. 7 in Paris. She was 99. Her death was announced on social media by the Pompidou Center in Paris, which is scheduled to present a major exhibition of her work in February. Ms. Molnar had lived in Paris since 1947. While her computer-aided paintings and drawings, which drew inspiration from geometric works by Piet Mondrian and Paul Klee, were eventually exhibited in major museums like the Museum of Modern Art in New York and the Los Angeles County Museum of Art, her work was not always embraced early in her career.

Ms. Molnar in fact began to employ the principles of computation in her work years before she gained access to an actual computer. In 1959, she began implementing a concept she called "Machine Imaginaire" -- imaginary machine. This analog approach involved using simple algorithms to guide the placement of lines and shapes for works that she produced by hand, on grid paper. She took her first step into the silicon age in 1968, when she got access to a computer at a university research laboratory in Paris. In the days when computers were generally reserved for scientific or military applications, it took a combination of gumption and '60s idealism for an artist to attempt to gain access to a machine that was "very complicated and expensive," she once said, adding, "They were selling calculation time in seconds." [...]

Making art on Apollo-era computers was anything but intuitive. Ms. Molnar had to learn early computer languages like Basic and Fortran and enter her data with punch cards, and she had to wait several days for the results, which were transferred to paper with a plotter printer. One early series, "Interruptions," involved a vast sea of tiny lines on a white background. As ARTNews noted in a recent obituary: "She would set up a series of straight lines, then rotate some, causing her rigorous set of marks to be thrown out of alignment. Then, to inject further chaos, she would randomly erase certain portions, resulting in blank areas amid a sea of lines." Another series, "(Des)Ordres" (1974), involved seemingly orderly patterns of concentric squares, which she tweaked to make them appear slightly disordered, as if they were vibrating.
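The "(Des)Ordres" description is essentially an algorithm, and it translates almost directly into code. The sketch below is a purely illustrative homage in Python/matplotlib, not Ms. Molnar's original program (which was fed to a plotter via Fortran or Basic): concentric squares on a grid, each corner nudged by a small random amount so that the order appears to vibrate.

```python
# A toy homage to "(Des)Ordres": a grid of concentric squares, each corner
# displaced by a little random noise. Illustrative only -- not Molnar's code.
import random
import matplotlib.pyplot as plt

random.seed(7)
GRID, RINGS, JITTER = 8, 5, 0.04

fig, ax = plt.subplots(figsize=(6, 6))
for gx in range(GRID):
    for gy in range(GRID):
        for r in range(1, RINGS + 1):
            half = 0.45 * r / RINGS
            corners = [(-half, -half), (half, -half), (half, half), (-half, half)]
            # The "disorder": every corner gets its own small random offset.
            pts = [(gx + 0.5 + x + random.uniform(-JITTER, JITTER),
                    gy + 0.5 + y + random.uniform(-JITTER, JITTER))
                   for x, y in corners]
            pts.append(pts[0])                      # close the square
            xs, ys = zip(*pts)
            ax.plot(xs, ys, color="black", linewidth=0.6)

ax.set_aspect("equal")
ax.axis("off")
plt.savefig("desordres_sketch.png", dpi=200)
```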

Over the years, Ms. Molnar continued to explore the tensions between machine-like perfection and the chaos of life itself, as with her 1976 plotter drawing "1% of Disorder," another deconstructed pattern of concentric squares. "I love order, but I can't stand it," she told the curator Hans Ulrich Obrist. "I make mistakes, I stutter, I mix up my words." And so, she concluded, "chaos, perhaps, came from this." [...] Her career continued to expand in scope in the 1970s. She began using computers with screens, which allowed her to instantly assess the results of her code and adjust accordingly. With screens, it was "like a conversation, like a real pictorial process," she said in a recent interview with the generative art creator and entrepreneur Erick Calderon. "You move the 'brush' and you see immediately if it suits you or not." [...] Earlier this year, she cemented her legacy in the world of blockchain with "Themes and Variations," a generative art series of more than 500 works using NFT technology that was created in collaboration with the artist and designer Martin Grasser and sold through Sotheby's. The series fetched $1.2 million in sales.

Intel

Intel Core Ultra Processors Debut for AI-powered PCs (venturebeat.com) 27

Intel launched its Intel Core Ultra processors for AI-powered PCs at its AI Everywhere event today. From a report: The big chip maker said these processors spearhead a new era in computing, offering unparalleled power efficiency, superior compute and graphics performance, and an unprecedented AI PC experience to mobile platforms and edge devices. Available immediately, these processors will be used in over 230 AI PCs coming from renowned partners like Acer, ASUS, Dell, Gigabyte, and more.

The Intel Core Ultra processors represent an architectural shift for Intel, marking its largest design change in 40 years. These processors harness the Intel 4 process technology and Foveros 3D advanced packaging, leveraging leading-edge processes for optimal performance and capabilities. The processors pair a performance-core (P-core) architecture that improves instructions per cycle (IPC) with Efficient-cores (E-cores) and low-power Efficient-cores (LP E-cores). They deliver up to 11% more compute power compared to competitors, ensuring superior CPU performance for ultrathin PCs.

Features of Intel Core Ultra
Intel Arc GPU: Featuring up to eight Xe-cores, this GPU incorporates AI-based Xe Super Sampling, offering double the graphics performance compared to prior generations. It includes support for modern graphics features like ray tracing, mesh shading, AV1 encode and decode, HDMI 2.1, and DisplayPort 2.1 20G.
AI Boost NPU: Intel's latest NPU, Intel AI Boost, focuses on low-power, long-running AI tasks, augmenting AI processing on the CPU and GPU, offering 2.5x better power efficiency compared to its predecessors.
Advanced Performance Capabilities: With up to 16 cores, 22 threads, and Intel Thread Director for optimized workload scheduling, these processors boast a maximum turbo frequency of 5.1 GHz and support for up to 96 GB DDR5 memory capacity.
Cutting-edge Connectivity: Integrated Intel Wi-Fi 6E and support for discrete Intel Wi-Fi 7 deliver blazing wireless speeds, while Thunderbolt 4 ensures connectivity to multiple 4K monitors and fast storage with speeds of 40 Gbps.
Enhanced AI Performance: OpenVINO toolkits, ONNX, and ONNX Runtime offer streamlined workflow, automatic device detection, and enhanced AI performance.
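That last item is the piece developers touch directly. As a rough, hedged illustration of how a model might target these chips through ONNX Runtime with OpenVINO as an execution provider (the model path, input shape, and provider preference below are assumptions for the sketch, not Intel-published sample code):

```python
# Minimal sketch: run an ONNX model with ONNX Runtime, preferring the OpenVINO
# execution provider when available and falling back to CPU. "model.onnx" and
# the 1x3x224x224 input shape are placeholder assumptions.
import numpy as np
import onnxruntime as ort

preferred = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]
providers = providers or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print("ran on:", session.get_providers(), "output shape:", outputs[0].shape)
```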

AMD

Meta and Microsoft To Buy AMD's New AI Chip As Alternative To Nvidia's (cnbc.com) 16

Meta, OpenAI, and Microsoft said at an AMD investor event today that they will use AMD's newest AI chip, the Instinct MI300X, as an alternative to Nvidia's expensive graphics processors. "If AMD's latest high-end chip is good enough for the technology companies and cloud service providers building and serving AI models when it starts shipping early next year, it could lower costs for developing AI models and put competitive pressure on Nvidia's surging AI chip sales growth," reports CNBC. From the report: "All of the interest is in big iron and big GPUs for the cloud," AMD CEO Lisa Su said Wednesday. AMD says the MI300X is based on a new architecture, which often leads to significant performance gains. Its most distinctive feature is that it has 192GB of a cutting-edge, high-performance type of memory known as HBM3, which transfers data faster and can fit larger AI models. Su directly compared the MI300X and the systems built with it to Nvidia's main AI GPU, the H100. "What this performance does is it just directly translates into a better user experience," Su said. "When you ask a model something, you'd like it to come back faster, especially as responses get more complicated."
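Why 192GB matters comes down to simple arithmetic: FP16 model weights take two bytes per parameter, so more on-package memory means larger models fit on a single accelerator without sharding. The H100's 80GB capacity in the comparison below is a figure recalled for illustration, not one stated in the report.

```python
# Weights-only capacity estimate at FP16 (ignores activations and KV cache).
# The 80 GB H100 figure is an assumption recalled for comparison.
BYTES_PER_PARAM_FP16 = 2

for name, mem_gb in [("MI300X", 192), ("H100", 80)]:
    params_billions = mem_gb * 1e9 / BYTES_PER_PARAM_FP16 / 1e9
    print(f"{name}: ~{params_billions:.0f}B parameters fit in {mem_gb} GB at FP16")
# MI300X: ~96B parameters vs H100: ~40B -- hence "can fit larger AI models".
```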

The main question facing AMD is whether companies that have been building on Nvidia will invest the time and money to add another GPU supplier. "It takes work to adopt AMD," Su said. AMD on Wednesday told investors and partners that it had improved its software suite called ROCm to compete with Nvidia's industry standard CUDA software, addressing a key shortcoming that had been one of the primary reasons AI developers currently prefer Nvidia. Price will also be important. AMD didn't reveal pricing for the MI300X on Wednesday, but Nvidia's can cost around $40,000 for one chip, and Su told reporters that AMD's chip would have to cost less to purchase and operate than Nvidia's in order to persuade customers to buy it.

On Wednesday, AMD said it had already signed up some of the companies most hungry for GPUs to use the chip. Meta and Microsoft were the two largest purchasers of Nvidia H100 GPUs in 2023, according to a recent report from research firm Omdia. Meta said it will use MI300X GPUs for AI inference workloads such as processing AI stickers, image editing, and operating its assistant. Microsoft's CTO, Kevin Scott, said the company would offer access to MI300X chips through its Azure web service. Oracle's cloud will also use the chips. OpenAI said it would support AMD GPUs in one of its software products, called Triton, which isn't a large language model like GPT but is used in AI research to access chip features.

Hardware

Apple's Chip Lab: Now 15 Years Old With Thousands of Engineers (cnbc.com) 68

"As of this year, all new Mac computers are powered by Apple's own silicon, ending the company's 15-plus years of reliance on Intel," according to a new report from CNBC.

"Apple's silicon team has grown to thousands of engineers working across labs all over the world, including in Israel, Germany, Austria, the U.K. and Japan. Within the U.S., the company has facilities in Silicon Valley, San Diego and Austin, Texas..." The latest A17 Pro announced in the iPhone 15 Pro and Pro Max in September enables major leaps in features like computational photography and advanced rendering for gaming. "It was actually the biggest redesign in GPU architecture and Apple silicon history," said Kaiann Drance, who leads marketing for the iPhone. "We have hardware accelerated ray tracing for the first time. And we have mesh shading acceleration, which allows game developers to create some really stunning visual effects." That's led to the development of iPhone-native versions from Ubisoft's Assassin's Creed Mirage, The Division Resurgence and Capcom's Resident Evil 4.

Apple says the A17 Pro is the first 3-nanometer chip to ship at high volume. "The reason we use 3-nanometer is it gives us the ability to pack more transistors in a given dimension. That is important for the product and much better power efficiency," said the head of Apple silicon, Johny Srouji. "Even though we're not a chip company, we are leading the industry for a reason." Apple's leap to 3-nanometer continued with the M3 chips for Mac computers, announced in October. Apple says the M3 enables features like 22-hour battery life and, similar to the A17 Pro, boosted graphics performance...

In a major shift for the semiconductor industry, Apple turned away from using Intel's PC processors in 2020, switching to its own M1 chip inside the MacBook Air and other Macs. "It was almost like the laws of physics had changed," said John Ternus, Apple's senior vice president of hardware engineering. "All of a sudden we could build a MacBook Air that's incredibly thin and light, has no fan, 18 hours of battery life, and outperformed the MacBook Pro that we had just been shipping." He said the newest MacBook Pro with Apple's most advanced chip, the M3 Max, "is 11 times faster than the fastest Intel MacBook Pro we were making. And we were shipping that just two years ago." Intel processors are based on x86 architecture, the traditional choice for PC makers, with a lot of software developed for it. Apple bases its processors on rival Arm architecture, known for using less power and helping laptop batteries last longer.

Apple's M1 in 2020 was a proving point for Arm-based processors in high-end computers, with other big names like Qualcomm — and reportedly AMD and Nvidia — also developing Arm-based PC processors. In September, Apple extended its deal with Arm through at least 2040.

Since Apple first debuted its homegrown semiconductors in 2010 in the iPhone 4, other companies started pursuing their own custom semiconductor development, including Amazon, Google, Microsoft and Tesla.

CNBC reports that Apple is also reportedly working on its own Wi-Fi and Bluetooth chip. Apple's Srouji wouldn't comment on "future technologies and products" but told CNBC "we care about cellular, and we have teams enabling that."
AI

AI Chip Contenders Face Daunting 'Moats' 28

Barriers to entry in an industry dominated by TSMC and Nvidia are very high. From a report: In the drama that has just played out in Silicon Valley over the future of OpenAI, one side plot concerned an ambitious chip venture by its chief executive Sam Altman. Before he was ousted and reinstated to the helm of the company, Altman had sought to raise as much as $100bn from investors in the Middle East and SoftBank founder Masayoshi Son to build a rival to compete with sector giants Nvidia and Taiwan Semiconductor Manufacturing Co. This would be a vast undertaking. And one where $100bn may not go very far. Given that the US chip designer and Taiwanese chipmaker are critical to all things generative AI, Altman is unlikely to be the only one with hopes of taking them on. But the barriers to entry -- moats in Silicon Valley parlance -- are formidable.

Nvidia has about 95 per cent of the market for GPUs, or graphics processing units. These computer processors were originally designed for graphics but have become increasingly important in areas such as machine learning. TSMC has about 90 per cent of the world's advanced chip market. These businesses are lucrative. TSMC runs on gross margins of nearly 60 per cent, Nvidia at 74 per cent. TSMC makes $76bn in sales a year. The impressive figures make it seem as though there is plenty of room for more contenders. A global shortage of Nvidia's AI chips makes the prospect of vertical integration yet more attractive. As the number of GPUs required to develop and train advanced AI models grows rapidly, the key to profitability for AI companies lies in having stable access to GPUs.

[...] It is one thing for companies to design customised chips. But Nvidia's profitability comes not from making chips cost-efficient, but from providing a one-stop solution for a wide range of tasks and industries. For example, Nvidia's HGX H100 systems, which can go for about $300,000 each, are used to accelerate workloads for everything from financial applications to analytics. Coming up with a viable rival for the HGX H100 system, which is made up of 35,000 parts, would take much more than just designing a new chip. Nvidia has been developing GPUs for more than two decades. That head start, which includes hardware and related software libraries, is protected by thousands of patents. Even setting aside the challenges of designing a new AI chip, manufacturing is where the real challenge lies.
AI

New 'Stable Video Diffusion' AI Model Can Animate Any Still Image (arstechnica.com) 13

An anonymous reader quotes a report from Ars Technica: On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video -- with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU. [...] Right now, Stable Video Diffusion consists of two models: one that can produce image-to-video synthesis at 14 frames of length (called "SVD"), and another that generates 25 frames (called "SVD-XT"). They can operate at varying speeds from 3 to 30 frames per second, and they output short (typically 2-4 second-long) MP4 video clips at 576x1024 resolution.

In our local testing, a 14-frame generation took about 30 minutes to create on an Nvidia RTX 3060 graphics card, but users can experiment with running the models much faster on the cloud through services like Hugging Face and Replicate (some of which you may need to pay for). In our experiments, the generated animation typically keeps a portion of the scene static and adds panning and zooming effects or animates smoke or fire. People depicted in photos often do not move, although we did get one Getty image of Steve Wozniak to slightly come to life.
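For readers who want to try the local route described above, the released weights are commonly driven through Hugging Face's diffusers library. A minimal, hedged sketch follows, assuming diffusers' StableVideoDiffusionPipeline, the public SVD-XT checkpoint, and a CUDA-capable GPU; the input image path is a placeholder.

```python
# Minimal image-to-video sketch with Stable Video Diffusion via diffusers.
# Assumes: pip install diffusers transformers accelerate, an Nvidia GPU,
# and the public stabilityai/stable-video-diffusion-img2vid-xt weights.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # eases VRAM pressure on smaller cards

image = load_image("input_still.png").resize((1024, 576))  # placeholder path
frames = pipe(image, decode_chunk_size=4).frames[0]        # SVD-XT yields 25 frames

export_to_video(frames, "output.mp4", fps=7)
```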

Given these limitations, Stability emphasizes that the model is still early and is intended for research only. "While we eagerly update our models with the latest advancements and work to incorporate your feedback," the company writes on its website, "this model is not intended for real-world or commercial applications at this stage. Your insights and feedback on safety and quality are important to refining this model for its eventual release." Notably, but perhaps unsurprisingly, the Stable Video Diffusion research paper (PDF) does not reveal the source of the models' training datasets, only saying that the research team used "a large video dataset comprising roughly 600 million samples" that they curated into the Large Video Dataset (LVD), which consists of 580 million annotated video clips that span 212 years of content in duration.

Businesses

EU, Chinese, French Regulators Seeking Info on Graphic Cards, Nvidia Says (reuters.com) 44

Regulators in the European Union, China and France have asked for information on Nvidia's graphics cards, with more requests expected in the future, the U.S. chip giant said in a regulatory filing. From a report: Nvidia is the world's largest maker of chips used both for artificial intelligence and for computer graphics. Demand for its chips jumped following the release of the generative AI application ChatGPT late last year. The California-based company has a market share of around 80%, thanks to its chips, other hardware, and the powerful software that runs them.

Its graphics cards are high-performance devices that enable powerful graphics rendering and processing for use in video editing, video gaming and other complex computing operations. The company said this has attracted regulatory interest around the world. "For example, the French Competition Authority collected information from us regarding our business and competition in the graphics card and cloud service provider market as part of an ongoing inquiry into competition in those markets," Nvidia said in a regulatory filing dated Nov. 21.

Businesses

Nvidia's Revenue Triples As AI Chip Boom Continues 30

Nvidia's fiscal third-quarter results surpassed Wall Street's predictions, with revenue growing 206% year over year. However, Nvidia shares are down after the company called for a negative impact in the next quarter due to export restrictions affecting sales in China and other countries. CNBC reports: Nvidia's revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago. The company's data center revenue totaled $14.51 billion, up 279% and more than the StreetAccount consensus of $12.97 billion. Half of the data center revenue came from cloud infrastructure providers such as Amazon, and the other from consumer internet entities and large companies, Nvidia said. Healthy uptake came from clouds that specialize in renting out GPUs to clients, Kress said on the call.

The gaming segment contributed $2.86 billion, up 81% and higher than the $2.68 billion StreetAccount consensus. With respect to guidance, Nvidia called for $20 billion in revenue for the fiscal fourth quarter. That implies nearly 231% revenue growth. [...] Nvidia faces obstacles, including competition from AMD and lower revenue because of export restrictions that can limit sales of its GPUs in China. But ahead of Tuesday's report, some analysts were nevertheless optimistic.
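The "nearly 231%" figure follows from comparing the $20 billion guide with the year-ago fiscal fourth quarter. A quick check below; the $6.05 billion year-ago figure is recalled from Nvidia's earlier results for illustration and is not stated in this report.

```python
# Quick check on the guidance math. The $6.05B year-ago fiscal Q4 revenue is
# an assumption recalled from Nvidia's prior results, not from this report.
guided_q4_rev = 20.0    # $ billions, company guidance
year_ago_q4_rev = 6.05  # $ billions, assumption for the check

implied_growth = guided_q4_rev / year_ago_q4_rev - 1
print(f"implied growth: {implied_growth:.1%}")  # ~230.6%, i.e. "nearly 231%"
```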
