An anonymous reader writes "While everyone was glued to the Xbox One announcement, Nvidia GeForce GTX 780 launch, and Intel's pre-Haswell frenzy, it seems that AMD's launch was overlooked. On Wednesday, AMD launched its latest line of mobile APUs, codenamed Temash, Kabini, and Richland. Temash is targeted towards smaller touchscreen-based devices such as tablets and the various Windows 8 hybrid devices, and comes in dual-core A4 and A6 flavors. Kabini chips are intended for the low-end notebook market, and come in quad-core A4 and A6 models along with a dual-core E2. Richland includes quad-core A8 and A10 models, and is meant for higher-end notebooks — MSI is already on-board for the A10-5750M in their GX series of gaming notebooks. All three new APUs feature AMD HD 8000-series graphics. Tom's Hardware got a prototype notebook featuring the new quad-core A4-5000 with Radeon HD 8300 graphics, and benchmarked it versus a Pentium B960-based Acer Aspire V3 and a Core-i3-based HP Pavillion Sleekbook 15. While Kabini proves more efficient, and features more powerful graphics than the Pentium, it comes up short in CPU-heavy tasks. What's more, the Core-i3 matches the A4-5000 in power efficiency while its HD 4000 graphics completely outpace the APU."
MojoKid writes "As with any major CPU microarchitecture launch, one can expect the usual 10~15% performance gains, but Intel apparently has put its efficiency focus into overdrive. Haswell should provide 2x the graphics performance, and it's designed to be as power efficient as possible. In addition, the company has further gone on to state that Haswell should enable a 50% battery-life increase over last year's Ivy Bridge. There are a couple of reasons why Haswell is so energy-efficient versus the previous generation, but the major reason is moving the CPU voltage regulator off of the motherboard and into the CPU package, creating a Fully Integrated Voltage Regulator, or FIVR. This is a far more efficient design and with the use of 'enhanced' tri-gate transistors, current leakage has been reduced by about 2x — 3x versus Ivy Bridge."
BStorm writes "The Toronto Mayor Rob Ford has been making headlines around the world, for allegedly smoking crack. This story was first broken by gawker.com, which is now crowd-funding $200,000 to buy the video in question. What do you look for to determine if a video has been faked? Of course I am only interested in the technical details and not the tawdry details related to this case ;) I live in Toronto, so the video still frame posted on Gawker certainly does look like Rob Ford."
Vigile writes "When NVIDIA released the GTX Titan in February, it was the first consumer graphics card to use the GK110 GPU from NVIDIA that included 2,688 CUDA cores / shaders and an impressive 6GB of GDDR5 frame buffer. However, it also had a $1000 price tag that was the limiting specification for most gamers. With today's release of the GeForce GTX 780 they are hoping to utilize more of the GK110 silicon they are getting from TSMC while offering a lower cost version with performance within spitting range. The GTX 780 uses the same chip but disables a handful more compute units to bring the shader count down to 2,304 — still an impressive bump over the 1,536 of the GTX 680. The 384-bit memory bus remains though the frame buffer is cut in half to 3GB. Overall, the performance of the new card sits squarely between the GTX Titan ($1000) and AMD's Radeon HD 7970 GHz Edition ($439), just like its price. The question is, are PC gamers willing to shell out $220+ dollars MORE than the HD 7970 for somewhere in the range of 15-25% more performance?" As you might guess, there's similarly spec-laden coverage at lots of other sites, including Tom's, ExtremeTech, and TechReport. HotHardware, too.
An anonymous reader writes "The open-source Intel Linux graphics driver has hit a milestone of now being faster than Apple's own OpenGL stack on OS X. The Intel Linux driver on Ubuntu 13.04 is now clearly faster than Apple's internally-developed Intel OpenGL driver on OS X 10.8.3. when benchmarked from a 'Sandy Bridge' class Mac Mini. Only some months ago, Apple's GL driver was still trouncing the Intel Linux Mesa driver."
MojoKid writes "In an effort to coax developers to begin taking Atom seriously as an Android platform, Intel has just released a complete suite of tools that should help ease them into things — especially since it can be used for ARM development as well. It's called Beacon Mountain, named after the highest peak outside of Beacon, New York. As you'd expect, Beacon Mountain supports Jelly Bean (4.2) development, and with this suite, you're provided with a collection of important Intel tools: Hardware Accelerated Execution Manager, Integrated Performance Primitives, Graphics and System Performance Analyzers, Threaded Building Blocks and Software Manager. In addition, Android SDK and NDK, Eclipse and Cygwin third-party tools are included to complete the package."
MojoKid writes "AMD is announcing its Radeon HD 8970M. The mobile GPU is based on a design that has a few small feature changes that have led it to be unofficially labeled a Graphics Core Next (GCN) 1.1 part versus AMD's previous gen GCN 1.0 technology. AMD claims that the Radeon HD 8970M is significantly faster than NVIDIA's GeForce GTX 680M in a variety of tests, but high-end laptops that use AMD hardware are harder to find these days."
WheezyJoe writes "A story on NPR reports that the TrackingPoint rifle went on sale today, and can enable a 'novice' to hit a target 500 yards away on the first try. The rifle's scope features a sophisticated color graphics display (video). The shooter locks a laser on the target by pushing a small button by the trigger... But here's where it's different: You pull the trigger but the gun decides when to shoot. It fires only when the weapon has been pointed in exactly the right place, taking into account dozens of variables, including wind, shake and distance to the target. The rifle has a built-in laser range finder, a ballistics computer and a Wi-Fi transmitter to stream live video and audio to a nearby iPad. Every shot is recorded so it can be replayed, or posted to YouTube or Facebook."
New submitter Wisdom writes "1bir (1 Block Interactive Raycaster) is a simple ray casting engine implemented in only 254 bytes to run on a stock, unexpanded Commodore 64. The name comes from the fact that on a C64 floppy disk, one block is equivalent to 254 bytes stored on a disk sector. In those 254 bytes, 1bir sets up the screen for drawing, creates sine and cosine tables for 256 brads based on a simple approximation, casts rays into a 2D map that lives inside the C64 KERNAL ROM, renders the screen in coordination with the KERNAL, evaluates 8-way joystick input, and detects collision against walls. The ray casting core employs a brute-force algorithm to determine visible walls, while the mapping portion supports both open-ended (infinitely looped) and traditional, closed maps. The source code in 6502 assembly is available, with extensive comments. A YouTube video showcases 1bir in detail with both kinds of maps and more information, while a Vimeo video presents a shorter demonstration."
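For readers unfamiliar with ray casting, here is a minimal C++ sketch of the brute-force core that 1bir implements in 6502 assembly: for each screen column, march a ray through the map in small steps until it hits a wall, then draw a column whose height falls with distance. The map, step size, and screen dimensions below are illustrative, not 1bir's actual values.

```cpp
// Minimal brute-force raycaster sketch (illustrative C++, not 1bir's code).
#include <cmath>
#include <cstdlib>
#include <cstdio>

const int W = 40, H = 25;                // C64-style 40x25 character screen
const char MAP[8][9] = {                 // small closed map; '#' = wall
    "########", "#......#", "#..##..#", "#......#",
    "#......#", "#..#...#", "#......#", "########"};

int main() {
    double px = 2.5, py = 2.5, pa = 0.7; // player position and view angle
    double fov = 1.0;                    // field of view in radians
    char screen[H][W + 1];
    for (int col = 0; col < W; ++col) {
        double ra = pa - fov / 2 + fov * col / W;  // this column's ray angle
        double dx = std::cos(ra), dy = std::sin(ra);
        double x = px, y = py, dist = 0;
        // Brute force: step the ray in small increments until it enters a wall cell.
        while (MAP[(int)y][(int)x] != '#') {
            x += dx * 0.05; y += dy * 0.05; dist += 0.05;
        }
        int wall = (int)(H / (dist + 0.1));        // nearer walls draw taller
        if (wall > H) wall = H;
        for (int row = 0; row < H; ++row)          // vertical slice centered on screen
            screen[row][col] = (std::abs(row - H / 2) * 2 < wall) ? '#' : '.';
    }
    for (int row = 0; row < H; ++row) { screen[row][W] = 0; std::puts(screen[row]); }
}
```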
theodp writes "The latest round of patents granted by the USPTO included one for Cartoon Face Generation, an invention which Microsoft explains 'generates an attractive cartoon face or graphic of a user's facial image'. Microsoft adds, 'The style of cartoon face achieved resembles the likeness of the user more than cartoons generated by conventional vector-based cartooning techniques. The cartoon faces thus achieved provide an attractive facial appearance and thus have wide applicability in art, gaming, and messaging applications in which a pleasing degree of realism is desirable without exaggerated comedy or caricature.' A Microsoft Research Face SDK Beta is available. Hey, too bad Microsoft didn't have this technology when they generated Bob from Ralphie!"
CowboyRobot writes "Two researchers at San Francisco State University has successfully implemented hardware acceleration for realtime audio using graphics processing units (GPUs). 'Suppose you are simulating a metallic plate to generate gong or cymbal-like sounds. By changing the surface area for the same object, you can generate sound corresponding to cymbals or gongs of different sizes. Using the same model, you may also vary the way in which you excite the metallic plate — to generate sounds that result from hitting the plate with a soft mallet, a hard drumstick, or from bowing. By changing these parameters, you may even simulate nonexistent materials or physically impossible geometries or excitation methods. There are various approaches to physical modeling sound synthesis. One such approach, studied extensively by Stefan Bilbao, uses the finite difference approximation to simulate the vibrations of plates and membranes. The finite difference simulation produces realistic and dynamic sounds (examples can be found here). Realtime finite difference-based simulations of large models are typically too computationally-intensive to run on CPUs. In our work, we have implemented finite difference simulations in realtime on GPUs.'"
New submitter Diakoneo writes "According to the BBC, 'Visual effects master Ray Harryhausen, whose stop-motion wizardry graced such films as Jason and the Argonauts and Clash of the Titans, has died aged 92. The American animator made his models by hand and painstakingly shot them frame by frame to create some of the best-known battle sequences in cinema.' Some of my fondest cinematic memories from my youth are from Ray Harryhausen."
crookedvulture writes "Since their debut five years ago, Intel's low-power Atom microprocessors have relied on the same basic CPU core. That changes with the next generation, which will employ an all-new Silvermont microarchitecture built using a customized version of Intel's tri-gate, 22-nm fabrication process. Silvermont ditches the in-order design of previous Atoms in favor of an out-of-order approach based on a dual-core module equipped with 1MB of shared L2 cache. The design boasts improved power sharing between the CPU and integrated graphics, allowing the CPU cores to scale up to higher speeds depending on system load and platform thermals. Individual cores can be shut down completely to provide additional clock headroom or to conserve power. Intel claims Silvermont doubles the single-threaded performance of its Saltwell predecessor at the same power level, and that dual-core variants have lower peak power draw and higher performance than quad-core ARM SoCs. Silvermont also marks the Atom's adoption of the 'tick-tock' update cadence that guides the development of Intel's Core processors. The successor to Silvermont will be built on 14-nm process tech, and an updated microarchitecture is due after that."
An anonymous reader writes "In a 15-way graphics card comparison on Linux of both the open and closed-source drivers, it was found that the open-source AMD Linux graphics driver is much faster than the open-source NVIDIA driver on Ubuntu 13.04. The open-source NVIDIA driver is developed entirely by the community via reverse-engineering, but for Linux desktop users, is this enough? The big issue for the open-source 'Nouveau' driver is that it doesn't yet fully support re-clocking the graphics processor so that the hardware can actually run at its rated speeds. With the closed-source AMD Radeon and NVIDIA GeForce results, the drivers were substantially faster than their respective open-source driver. Between NVIDIA and AMD on Linux, the NVIDIA closed-source driver was generally doing better than AMD Catalyst."
crookedvulture writes "Intel has revealed fresh details about the integrated graphics in upcoming Haswell processors. The fastest variants of the built-in GPU will be known as Iris and Iris Pro graphics, with the latter boasting embedded DRAM. Unlike Ivy Bridge, which reserves its fastest GPU implementations for mobile parts, the Haswell family will include R-series desktop chips with the full-fat GPU. These processors are likely bound for all-in-one systems, and they'll purportedly offer close to three times the graphics performance of their predecessors. Intel says notebook users can look forward to a smaller 2X boost, while 15-17W ultrabook CPUs benefit from an increase closer to 1.5X. Haswell's integrated graphics has other perks aside from better performance, including faster Quick Sync video transcoding, MJPEG acceleration, and support for 4K resolutions. The new IGP will support DirectX 11.1, OpenGL 4.0, and OpenCL 1.2, as well." Note: Same story, different words, at Extreme Tech and Hot Hardware.
Vigile writes "One of the drawbacks to high end graphics has been the lack of low cost and massively-available displays with a resolution higher than 1920x1080. Yes, 25x16/25x14 panels are coming down in price, but it might be the influx of 4K monitors that makes a splash. PC Perspective purchased a 4K TV for under $1500 recently and set to benchmarking high end graphics cards from AMD and NVIDIA at 3840x2160. For under $500, the Radeon HD 7970 provided the best experience, though the GTX Titan was the most powerful single GPU option. At the $1000 price point the GeForce GTX 690 appears to be the card to beat with AMD's continuing problems on CrossFire scaling. PC Perspective has also included YouTube and downloadable 4K video files (~100 mbps) as well as screenshots, in addition to a full suite of benchmarks."
crookedvulture writes "AMD has revealed more details about the unified memory architecture of its next-generation Kaveri APU. The chip's CPU and GPU components will have a shared address space and will also share both physical and virtual memory. GPU compute applications should be able to share data between the processor's CPU cores and graphics ALUs, and the caches on those components will be fully coherent. This so-called heterogeneous uniform memory access, or hUMA, supports configurations with either DDR3 or GDDR5 memory. It's also based entirely in hardware and should work with any operating system. Kaveri is due later this year and will also have updated Steamroller CPU cores and a GPU based on the current Graphics Core Next architecture." bigwophh writes links to the Hot Hardware take on the story, and writes "AMD claims that programming for hUMA-enabled platforms should ease software development and potentially lower development costs as well. The technology is supported by mainstream programming languages like Python, C++, and Java, and should allow developers to more simply code for a particular compute resource with no need for special APIs."
An anonymous reader writes "Today AMD has officially unveiled its long-awaited dual-GPU Tahiti-based card. Codenamed Malta, the $1,000 Radeon HD 7990 is positioned directly against Nvidia's dual-GPU GeForce GTX 690. Tom's Hardware posted the performance data. Because Fraps measures data at a stage in the pipeline before what is actually seen on-screen, they employed Nvidia's FCAT (Frame Capture Analysis Tools). ... The 690 is beating AMD's new flagship in six out of eight titles. ... AMD is bundling eight titles with every 7990, including: BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3: Blood Dragon, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution." OpenGL performance doesn't seem too off from the competing Nvidia card, but the 7990 dominates when using OpenCL. Power management looks decent: ~375W at full load, but a nice 20W at idle (it can turn the second chip off entirely when unneeded). PC Perspective claims there are issues with Crossfire and an un-synchronized rendering pipeline that leads to a slight decrease in the actual frame rate, but that should be fixed by an updated Catalyst this summer.